Compare commits

...

223 Commits

Author SHA1 Message Date
Fennerr
bcfdcbde30 Resolved some conflicts 2024-02-15 15:56:07 +02:00
Sergio Garcia
2f50aaa9c1 resolve conflicts 2024-01-16 11:16:11 +01:00
Nacho Rivera
537081a0f6 feat(AwsProvider): include new structure for AWS provider (#3252)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2024-01-16 10:51:15 +01:00
Sergio Garcia
2eb774bbc9 chore(manual status): change INFO to MANUAL status (#3254) 2024-01-16 10:45:00 +01:00
Sergio Garcia
5419117842 feat(status): add --status flag (#3238) 2024-01-16 10:44:39 +01:00
Sergio Garcia
e72831d428 feat(kubernetes): add Kubernetes provider (#3226)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-16 10:43:22 +01:00
Sergio Garcia
217b8ad250 fix(gcp): fix error in generating compliance (#3201) 2024-01-16 10:42:34 +01:00
Sergio Garcia
09b4548445 feat(compliance): execute all compliance by default (#3003)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-16 10:42:31 +01:00
Nacho Rivera
0d96583769 feat(CloudProvider): introduce global provider Azure&GCP (#3069) 2024-01-16 10:41:11 +01:00
Sergio Garcia
722fe0a1bc chore(sts-endpoint): deprecate --sts-endpoint-region (#3046)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-16 10:39:14 +01:00
Sergio Garcia
445821eceb feat(mute list): change allowlist to mute list (#3039)
Co-authored-by: Nacho Rivera <nachor1992@gmail.com>
2024-01-16 10:39:11 +01:00
Nacho Rivera
c3d129a4b2 chore(update): rebase from master (#3067)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: r3drun3 <simone.ragonesi@sighup.io>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: John Mastron <14130495+mtronrd@users.noreply.github.com>
Co-authored-by: John Mastron <jmastron@jpl.nasa.gov>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: github-actions <noreply@github.com>
Co-authored-by: simone ragonesi <102741679+R3DRUN3@users.noreply.github.com>
Co-authored-by: Johnny Lu <johnny2lu@gmail.com>
Co-authored-by: Vajrala Venkateswarlu <59252985+venkyvajrala@users.noreply.github.com>
Co-authored-by: Ignacio Dominguez <ignacio.dominguez@zego.com>
2024-01-16 10:36:42 +01:00
Fennerr
028d29b8ff Added rich to poetry dependencies 2023-12-21 08:43:57 +02:00
Fennerr
b976cab926 Added rich to poetry dependencies 2023-12-21 08:43:47 +02:00
Fennerr
197a08ab94 Added --only-logs and some reordering 2023-12-21 08:34:49 +02:00
Fennerr
0d97780ade cleaned up execution manager,live display. Added metaclass 2023-12-20 23:22:18 +02:00
Fennerr
f2f922d7e8 fixed decorator to correctly handle args 2023-12-20 12:10:34 +02:00
Fennerr
606b4b5a66 merged threading progress 2023-12-20 11:58:39 +02:00
Fennerr
132056f4c1 some more progress 2023-12-20 11:52:32 +02:00
Fennerr
4845d6033b added progress decorator 2023-12-20 11:48:40 +02:00
Fennerr
57550e6984 initial switch 2023-12-20 11:42:48 +02:00
Fennerr
040b780af7 WIP: improved layout 2023-12-20 00:14:53 +02:00
Fennerr
abaa7855d7 Pull rebase from master 2023-12-19 21:55:18 +02:00
Fennerr
e9c6b35698 WIP: added verbose results and timer 2023-12-19 21:54:26 +02:00
Nacho Rivera
78505cb0a8 chore(sqs_...not_publicly_accessible): less restrictive condition test (#3211)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-19 16:53:19 +01:00
Fennerr
c92740869f WIP: centered results table 2023-12-19 14:13:59 +02:00
dependabot[bot]
f8d77d9a30 build(deps): bump google-auth-httplib2 from 0.1.1 to 0.2.0 (#3207)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 13:05:30 +01:00
Fennerr
49003fae08 WIP: added results table 2023-12-19 13:52:54 +02:00
Sergio Garcia
1a4887f028 chore(regions_update): Changes in regions for AWS services. (#3209)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-19 12:39:19 +01:00
dependabot[bot]
71042b5919 build(deps): bump mkdocs-material from 9.4.14 to 9.5.2 (#3206)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 12:39:10 +01:00
dependabot[bot]
435976800a build(deps-dev): bump moto from 4.2.11 to 4.2.12 (#3205)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 10:14:04 +01:00
dependabot[bot]
18f4c7205b build(deps-dev): bump coverage from 7.3.2 to 7.3.3 (#3204)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 08:55:14 +01:00
dependabot[bot]
06eeefb8bf build(deps-dev): bump pylint from 3.0.2 to 3.0.3 (#3203)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 08:30:45 +01:00
Fennerr
01f3c8656c WIP: improved layout 2023-12-18 23:33:43 +02:00
Sergio Garcia
1737d7cf42 fix(gcp): fix UnknownApiNameOrVersion error (#3202) 2023-12-18 14:32:33 +01:00
Fennerr
ba705406ff Moved all the check execution logic into execution manager 2023-12-18 14:37:41 +02:00
Fennerr
d8101acc9c Moved all the check execution logic into execution manager 2023-12-18 14:37:14 +02:00
dependabot[bot]
cd03fa6d46 build(deps): bump jsonschema from 4.18.0 to 4.20.0 (#3057)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-18 13:00:43 +01:00
Sergio Garcia
a10a73962e chore(regions_update): Changes in regions for AWS services. (#3200) 2023-12-18 07:21:18 +01:00
Pepe Fagoaga
99d6fee7a0 fix(iam): Handle NoSuchEntity in list_group_policies (#3197) 2023-12-15 14:04:59 +01:00
Nacho Rivera
c8831f0f50 chore(s3 bucket input validation): validates input bucket (#3198) 2023-12-15 13:37:41 +01:00
Pepe Fagoaga
fdeb523581 feat(securityhub): Send only FAILs but storing all in the output files (#3195) 2023-12-15 13:31:55 +01:00
Fennerr
126acc046a Added execution manager and live display 2023-12-15 12:28:25 +02:00
Sergio Garcia
9a868464ee chore(regions_update): Changes in regions for AWS services. (#3196)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-15 10:15:54 +01:00
Alexandros Gidarakos
051ec75e01 docs(cloudshell): Update AWS CloudShell installation steps (#3192)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-14 08:35:23 +01:00
Alexandros Gidarakos
fc3909491a docs(cloudshell): Add missing steps to workaround (#3191) 2023-12-14 08:18:24 +01:00
Fennerr
f324f27016 updated ui redesign implementation 2023-12-13 23:00:12 +02:00
Pepe Fagoaga
2437fe270c docs(cloudshell): Add workaround to clone from github (#3190) 2023-12-13 17:19:30 +01:00
Nacho Rivera
c937b193d0 fix(apigw_restapi_auth check): add method auth testing (#3183) 2023-12-13 16:20:09 +01:00
Fennerr
8b5c995486 fix(lambda): memory leakage with lambda function code (#3167)
Co-authored-by: Justin Moorcroft <justin.moorcroft@mwrcybersec.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-13 15:15:13 +01:00
Fennerr
5b80082491 merged issue-2516 2023-12-13 15:17:13 +02:00
Pepe Fagoaga
2ca4656ef9 fix(lambda): Do not use function.code 2023-12-13 13:53:36 +01:00
Pepe Fagoaga
cb4de850e9 fix(lambda): Do not use function.code 2023-12-13 13:51:53 +01:00
Fennerr
92e0d74055 Keeping the code seperate from the function obj 2023-12-13 14:51:17 +02:00
Fennerr
578b21f424 Fixed error log message 2023-12-13 14:36:59 +02:00
Fennerr
85c44f01c5 Initial progress 2023-12-13 14:23:49 +02:00
Pepe Fagoaga
fb5d6cfd7e refactor(lambda): fetch code 2023-12-13 13:16:13 +01:00
Pepe Fagoaga
1b3f830623 test(lambda): fix tests 2023-12-13 13:15:23 +01:00
Sergio Garcia
4410f2a582 chore(regions_update): Changes in regions for AWS services. (#3189)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-13 10:32:10 +01:00
Fennerr
0481435846 Made use of service thread_pool 2023-12-12 16:10:34 +02:00
Fennerr
bbb816868e docs(aws): Added debug information to inspect retries in API calls (#3186)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-12 14:07:33 +01:00
Fennerr
5554e2be1b Merge branch 'master' into issue-2516 2023-12-12 14:58:10 +02:00
Fennerr
e97e2e84fc Merge branch 'master' of https://github.com/prowler-cloud/prowler 2023-12-12 14:57:53 +02:00
Fennerr
19f38dbb63 Modified logging statements 2023-12-12 14:25:29 +02:00
Fennerr
2441cca810 fix(threading): Improved threading for the AWS Service (#3175)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-12 12:50:26 +01:00
Sergio Garcia
3c3dfb380b fix(gcp): improve logging messages (#3185) 2023-12-12 12:38:50 +01:00
Nacho Rivera
0f165f0bf0 chore(actions): add prowler 4.0 branch to actions (#3184) 2023-12-12 11:40:01 +01:00
Fennerr
06d9eccebd Added threading 2023-12-12 12:22:38 +02:00
Sergio Garcia
7fcff548eb chore(regions_update): Changes in regions for AWS services. (#3182)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-12 10:28:01 +01:00
dependabot[bot]
8fa7b9ba00 build(deps-dev): bump docker from 6.1.3 to 7.0.0 (#3180)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 10:27:49 +01:00
dependabot[bot]
b101e15985 build(deps-dev): bump bandit from 1.7.5 to 1.7.6 (#3179)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 09:53:03 +01:00
dependabot[bot]
b4e412a37f build(deps-dev): bump pylint from 3.0.2 to 3.0.3 (#3181)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 09:33:27 +01:00
dependabot[bot]
ac0e2bbdb2 build(deps): bump google-api-python-client from 2.109.0 to 2.110.0 (#3178)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 08:07:30 +01:00
Sergio Garcia
ba16330e20 feat(cognito): add Amazon Cognito service (#3060) 2023-12-11 14:35:00 +01:00
Pepe Fagoaga
c9cb9774c6 fix(aws_regions): Get enabled regions (#3095) 2023-12-11 14:09:39 +01:00
Pepe Fagoaga
7b5b14dbd0 refactor(cloudwatch): simplify logic (#3172) 2023-12-11 11:23:24 +01:00
Fennerr
bd13973cf5 docs(parallel-execution): Combining the output files (#3096) 2023-12-11 11:11:53 +01:00
Fennerr
a7f8656e89 chore(elb): Improve status in elbv2_insecure_ssl_ciphers (#3169) 2023-12-11 11:04:37 +01:00
Sergio Garcia
1be52fab06 chore(ens): do not apply recomendation type to score (#3058) 2023-12-11 10:53:26 +01:00
Pepe Fagoaga
c9baff1a7f fix(generate_regional_clients): Global is not needed anymore (#3162) 2023-12-11 10:50:15 +01:00
Fennerr
5dfd8460be Improved threading for EC2 2023-12-11 11:06:32 +02:00
Pepe Fagoaga
d1bc68086d fix(access-analyzer): Handle ValidationException (#3165) 2023-12-11 09:40:12 +01:00
Pepe Fagoaga
44a4c0670b fix(cloudtrail): Handle UnsupportedOperationException (#3166) 2023-12-11 09:38:23 +01:00
Pepe Fagoaga
4785056740 fix(elasticache): Handle CacheClusterNotFound (#3174) 2023-12-11 09:37:01 +01:00
Pepe Fagoaga
694aa448a4 fix(s3): Handle NoSuchBucket in the service (#3173) 2023-12-11 09:36:26 +01:00
Sergio Garcia
ee215b1ced chore(regions_update): Changes in regions for AWS services. (#3168) 2023-12-11 08:04:48 +01:00
Fennerr
f71052bcfe Update awslambda_service.py
fixed blank space + removed the if statement in __init__ which I should have previously removed
2023-12-06 09:19:44 +02:00
Fennerr
7bfdb8c1f3 Update awslambda_service.py
removed function param - it is not needed
2023-12-05 22:18:36 +02:00
Justin Moorcroft
dedb03cc6e initial fix for issue #2516 2023-12-05 22:11:32 +02:00
Nacho Rivera
018e87884c test(audit_info): missing workspace test (#3164) 2023-12-05 16:05:39 +01:00
Nacho Rivera
a81cbbc325 test(audit_info): refactor iam (#3163) 2023-12-05 15:59:53 +01:00
Pepe Fagoaga
3962c9d816 test(audit_info): refactor acm, account and access analyzer (#3097) 2023-12-05 15:09:14 +01:00
Pepe Fagoaga
e187875da5 test(audit_info): refactor guardduty (#3160) 2023-12-05 15:00:46 +01:00
Pepe Fagoaga
f0d1a799a2 test(audit_info): refactor cloudtrail (#3111) 2023-12-05 14:59:42 +01:00
Pepe Fagoaga
5452d535d7 test(audit_info): refactor ec2 (#3132) 2023-12-05 14:58:58 +01:00
Pepe Fagoaga
7a776532a8 test(aws_account_id): refactor (#3161) 2023-12-05 14:58:42 +01:00
Nacho Rivera
e704d57957 test(audit_info): refactor inspector2 (#3159) 2023-12-05 14:19:40 +01:00
Pepe Fagoaga
c9a6eb5a1a test(audit_info): refactor globalaccelerator (#3154) 2023-12-05 14:13:02 +01:00
Pepe Fagoaga
c071812160 test(audit_info): refactor glue (#3158) 2023-12-05 14:12:44 +01:00
Pepe Fagoaga
3f95ad9ada test(audit_info): refactor glacier (#3153) 2023-12-05 14:09:04 +01:00
Nacho Rivera
250f59c9f5 test(audit_info): refactor kms (#3157) 2023-12-05 14:05:56 +01:00
Nacho Rivera
c17bbea2c7 test(audit_info): refactor macie (#3156) 2023-12-05 13:59:08 +01:00
Nacho Rivera
0262f8757a test(audit_info): refactor neptune (#3155) 2023-12-05 13:48:32 +01:00
Nacho Rivera
dbc2c481dc test(audit_info): refactor networkfirewall (#3152) 2023-12-05 13:20:52 +01:00
Pepe Fagoaga
e432c39eec test(audit_info): refactor fms (#3151) 2023-12-05 13:18:28 +01:00
Pepe Fagoaga
7383ae4f9c test(audit_info): refactor elbv2 (#3148) 2023-12-05 13:18:06 +01:00
Pepe Fagoaga
d217e33678 test(audit_info): refactor emr (#3149) 2023-12-05 13:17:42 +01:00
Nacho Rivera
d1daceff91 test(audit_info): refactor opensearch (#3150) 2023-12-05 13:17:28 +01:00
Nacho Rivera
dbbd556830 test(audit_info): refactor organizations (#3147) 2023-12-05 12:59:22 +01:00
Nacho Rivera
d483f1d90f test(audit_info): refactor rds (#3146) 2023-12-05 12:51:22 +01:00
Nacho Rivera
80684a998f test(audit_info): refactor redshift (#3144) 2023-12-05 12:42:08 +01:00
Pepe Fagoaga
0c4f0fde48 test(audit_info): refactor elb (#3145) 2023-12-05 12:41:37 +01:00
Pepe Fagoaga
071115cd52 test(audit_info): refactor elasticache (#3142) 2023-12-05 12:41:11 +01:00
Nacho Rivera
9136a755fe test(audit_info): refactor resourceexplorer2 (#3143) 2023-12-05 12:28:38 +01:00
Nacho Rivera
6ff864fc04 test(audit_info): refactor route53 (#3141) 2023-12-05 12:28:12 +01:00
Nacho Rivera
828a6f4696 test(audit_info): refactor s3 (#3140) 2023-12-05 12:13:21 +01:00
Pepe Fagoaga
417aa550a6 test(audit_info): refactor eks (#3139) 2023-12-05 12:07:41 +01:00
Pepe Fagoaga
78ffc2e238 test(audit_info): refactor efs (#3138) 2023-12-05 12:07:21 +01:00
Pepe Fagoaga
c9f22db1b5 test(audit_info): refactor ecs (#3137) 2023-12-05 12:07:01 +01:00
Pepe Fagoaga
41da560b64 test(audit_info): refactor ecr (#3136) 2023-12-05 12:06:42 +01:00
Nacho Rivera
b49e0b95f7 test(audit_info): refactor shield (#3131) 2023-12-05 11:40:42 +01:00
Nacho Rivera
50ef2729e6 test(audit_info): refactor sagemaker (#3135) 2023-12-05 11:40:19 +01:00
Nacho Rivera
6a901bb7de test(audit_info): refactor secretsmanager (#3134) 2023-12-05 11:33:54 +01:00
Nacho Rivera
f0da63c850 test(audit_info): refactor shub (#3133) 2023-12-05 11:33:34 +01:00
Nacho Rivera
b861c1dd3c test(audit_info): refactor sns (#3128) 2023-12-05 11:05:27 +01:00
Nacho Rivera
45faa2e9e8 test(audit_info): refactor sqs (#3130) 2023-12-05 11:05:05 +01:00
Pepe Fagoaga
b2e1eed684 test(audit_info): refactor dynamodb (#3129) 2023-12-05 10:59:26 +01:00
Pepe Fagoaga
4018221da6 test(audit_info): refactor drs (#3127) 2023-12-05 10:59:09 +01:00
Pepe Fagoaga
28ec3886f9 test(audit_info): refactor documentdb (#3126) 2023-12-05 10:58:48 +01:00
Pepe Fagoaga
ed323f4602 test(audit_info): refactor dlm (#3124) 2023-12-05 10:58:31 +01:00
Pepe Fagoaga
f72d360384 test(audit_info): refactor directoryservice (#3123) 2023-12-05 10:58:09 +01:00
Nacho Rivera
682bba452b test(audit_info): refactor ssm (#3125) 2023-12-05 10:45:15 +01:00
Nacho Rivera
e2ce5ae2af test(audit_info): refactor ssmincidents (#3122) 2023-12-05 10:38:09 +01:00
Nacho Rivera
039a0da69e tests(audit_info): refactor trustedadvisor (#3120) 2023-12-05 10:30:54 +01:00
Pepe Fagoaga
c9ad12b87e test(audit_info): refactor config (#3121) 2023-12-05 10:30:13 +01:00
Pepe Fagoaga
094be2e2e6 test(audit_info): refactor codeartifact (#3117) 2023-12-05 10:17:08 +01:00
Pepe Fagoaga
1b3029d833 test(audit_info): refactor codebuild (#3118) 2023-12-05 10:17:02 +01:00
Nacho Rivera
d00d5e863b tests(audit_info): refactor vpc (#3119) 2023-12-05 10:16:51 +01:00
Pepe Fagoaga
3d19e89710 test(audit_info): refactor cloudwatch (#3116) 2023-12-05 10:04:45 +01:00
Pepe Fagoaga
247cd6fc44 test(audit_info): refactor cloudfront (#3110) 2023-12-05 10:04:07 +01:00
Pepe Fagoaga
ba244c887f test(audit_info): refactor cloudformation (#3105) 2023-12-05 10:03:50 +01:00
Pepe Fagoaga
f77d92492a test(audit_info): refactor backup (#3104) 2023-12-05 10:03:32 +01:00
Pepe Fagoaga
1b85af95c0 test(audit_info): refactor athena (#3101) 2023-12-05 10:03:11 +01:00
Pepe Fagoaga
9236f5d058 test(audit_info): refactor autoscaling (#3102) 2023-12-05 10:02:54 +01:00
dependabot[bot]
39ba8cd230 build(deps-dev): bump freezegun from 1.2.2 to 1.3.1 (#3109)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 09:51:57 +01:00
Nacho Rivera
e67328945f test(audit_info): refactor waf (#3115) 2023-12-05 09:51:37 +01:00
Nacho Rivera
bcee2b0b6d test(audit_info): refactor wafv2 (#3114) 2023-12-05 09:51:20 +01:00
Nacho Rivera
be9a1b2f9a test(audit_info): refactor wellarchitected (#3113) 2023-12-05 09:40:31 +01:00
dependabot[bot]
4f9c2aadc2 build(deps-dev): bump moto from 4.2.10 to 4.2.11 (#3108)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 09:34:13 +01:00
Pepe Fagoaga
25d419ac7f test(audit_info): refactor appstream (#3100) 2023-12-05 09:33:53 +01:00
Pepe Fagoaga
57cfb508f1 test(audit_info): refactor apigateway (#3098) 2023-12-05 09:33:20 +01:00
Pepe Fagoaga
c88445f90d test(audit_info): refactor apigatewayv2 (#3099) 2023-12-05 09:32:31 +01:00
Nacho Rivera
9b6d6c3a42 test(audit_info): refactor workspaces (#3112) 2023-12-05 09:32:13 +01:00
Pepe Fagoaga
d26c1405ce test(audit_info): refactor awslambda (#3103) 2023-12-05 09:18:23 +01:00
dependabot[bot]
4bb35ab92d build(deps): bump slack-sdk from 3.26.0 to 3.26.1 (#3107)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 08:39:26 +01:00
dependabot[bot]
cdd983aa04 build(deps): bump google-api-python-client from 2.108.0 to 2.109.0 (#3106)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 08:12:57 +01:00
Nacho Rivera
e83ce86eb3 fix(docs): typo in reporting/csv (#3094) 2023-12-04 10:20:57 +01:00
Nacho Rivera
bcc590a3ee chore(actions): not launch linters for mkdocs.yml (#3093) 2023-12-04 09:57:18 +01:00
Fennerr
5fdffb93d1 docs(parallel-execution): How to execute it in parallel (#3091)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-04 09:48:46 +01:00
Nacho Rivera
db20b2c04f fix(docs): csv fields (#3092)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-04 09:46:20 +01:00
Nacho Rivera
4e037c0f43 fix(send_to_s3_bucket): don't kill exec when fail (#3088) 2023-12-01 13:25:59 +01:00
Nacho Rivera
fdcc2ac5cb revert(clean local dirs): delete clean local dirs output feature (#3087) 2023-12-01 12:26:59 +01:00
William
9099bd79f8 fix(vpc_different_regions): Handle if there are no VPC (#3081)
Co-authored-by: William Brady <will@crofton.cloud>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-01 11:44:23 +01:00
Pepe Fagoaga
a01683d8f6 refactor(severities): Define it in one place (#3086) 2023-12-01 11:39:35 +01:00
Pepe Fagoaga
6d2b2a9a93 refactor(load_checks_to_execute): Refactor function and add tests (#3066) 2023-11-30 17:41:14 +01:00
Sergio Garcia
de4166bf0d chore(regions_update): Changes in regions for AWS services. (#3079) 2023-11-29 11:21:06 +01:00
dependabot[bot]
1cbef30788 build(deps): bump cryptography from 41.0.4 to 41.0.6 (#3078)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-29 08:17:34 +01:00
Nacho Rivera
89c6e27489 fix(trustedadvisor): handle missing checks dict key (#3075) 2023-11-28 10:37:24 +01:00
Sergio Garcia
f74ffc530d chore(regions_update): Changes in regions for AWS services. (#3074)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-28 10:22:29 +01:00
dependabot[bot]
441d4d6a38 build(deps-dev): bump moto from 4.2.9 to 4.2.10 (#3073)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-28 09:57:56 +01:00
dependabot[bot]
3c6b9d63a6 build(deps): bump slack-sdk from 3.24.0 to 3.26.0 (#3072) 2023-11-28 09:21:46 +01:00
dependabot[bot]
254d8616b7 build(deps-dev): bump pytest-xdist from 3.4.0 to 3.5.0 (#3071) 2023-11-28 09:06:23 +01:00
dependabot[bot]
d3bc6fda74 build(deps): bump mkdocs-material from 9.4.10 to 9.4.14 (#3070)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-28 08:46:49 +01:00
Nacho Rivera
e4a5d9376f fix(clean local output dirs): change function description (#3068)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-11-27 14:55:34 +01:00
Nacho Rivera
523605e3e7 fix(set_azure_audit_info): assign correct logging when no auth (#3063) 2023-11-27 11:00:22 +01:00
Nacho Rivera
ed33fac337 fix(gcp provider): move generate_client for consistency (#3064) 2023-11-27 10:31:40 +01:00
Sergio Garcia
bf0e62aca5 chore(regions_update): Changes in regions for AWS services. (#3065)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-27 10:30:12 +01:00
Nacho Rivera
60c0b79b10 fix(outputs): initialize_file_descriptor is called dynamically (#3050) 2023-11-21 16:05:26 +01:00
Sergio Garcia
f9d2e7aa93 chore(regions_update): Changes in regions for AWS services. (#3059)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-21 11:07:08 +01:00
dependabot[bot]
0646748e24 build(deps): bump google-api-python-client from 2.107.0 to 2.108.0 (#3056)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 09:31:25 +01:00
dependabot[bot]
f6408e9df7 build(deps-dev): bump moto from 4.2.8 to 4.2.9 (#3055)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 08:14:00 +01:00
dependabot[bot]
5769bc815c build(deps): bump mkdocs-material from 9.4.8 to 9.4.10 (#3054)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 07:51:27 +01:00
dependabot[bot]
5a3e3e9b1f build(deps): bump slack-sdk from 3.23.0 to 3.24.0 (#3053) 2023-11-21 07:31:15 +01:00
Pepe Fagoaga
26cbafa204 fix(deps): Add missing jsonschema (#3052) 2023-11-20 18:41:39 +01:00
Sergio Garcia
d14541d1de fix(json-ocsf): add profile only for AWS provider (#3051) 2023-11-20 17:00:36 +01:00
Sergio Garcia
3955ebd56c chore(python): update python version constraint <3.12 (#3047) 2023-11-20 14:49:09 +01:00
Ignacio Dominguez
e212645cf0 fix(codeartifact): solve dependency confusion check (#2999)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-11-20 14:48:46 +01:00
Sergio Garcia
db9c1c24d3 chore(moto): install all moto dependencies (#3048) 2023-11-20 13:44:53 +01:00
Vajrala Venkateswarlu
0a305c281f feat(custom_checks_metadata): Add checks metadata overide for severity (#3038)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-11-20 10:44:47 +01:00
Sergio Garcia
43c96a7875 chore(regions_update): Changes in regions for AWS services. (#3045)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-20 10:15:32 +01:00
Sergio Garcia
3a93aba7d7 chore(release): update Prowler Version to 3.11.3 (#3044)
Co-authored-by: github-actions <noreply@github.com>
2023-11-16 17:07:14 +01:00
Sergio Garcia
3d563356e5 fix(json): check if profile is None (#3043) 2023-11-16 13:52:07 +01:00
Johnny Lu
9205ef30f8 fix(securityhub): findings not being imported or archived in non-aws partitions (#3040)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-11-16 11:27:28 +01:00
Sergio Garcia
19c2dccc6d chore(regions_update): Changes in regions for AWS services. (#3042)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-16 11:09:41 +01:00
Sergio Garcia
8f819048ed chore(release): update Prowler Version to 3.11.2 (#3037)
Co-authored-by: github-actions <noreply@github.com>
2023-11-15 09:07:57 +01:00
Sergio Garcia
3a3bb44f11 fix(GuardDuty): only execute checks if GuardDuty enabled (#3028) 2023-11-14 14:14:05 +01:00
Nacho Rivera
f8e713a544 feat(azure regions): support non default azure region (#3013)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-11-14 13:17:48 +01:00
Pepe Fagoaga
573f1eba56 fix(securityhub): Use enabled_regions instead of audited_regions (#3029) 2023-11-14 12:57:54 +01:00
simone ragonesi
a36be258d8 chore: modify latest version msg (#3036)
Signed-off-by: r3drun3 <simone.ragonesi@sighup.io>
2023-11-14 12:11:55 +01:00
Sergio Garcia
690ec057c3 fix(ec2_securitygroup_not_used): check if security group is associated (#3026) 2023-11-14 12:03:01 +01:00
dependabot[bot]
2681feb1f6 build(deps): bump azure-storage-blob from 12.18.3 to 12.19.0 (#3034)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 11:47:42 +01:00
Sergio Garcia
e662adb8c5 chore(regions_update): Changes in regions for AWS services. (#3035)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-14 11:47:24 +01:00
Sergio Garcia
c94bd96c93 chore(args): make compatible severity and services arguments (#3024) 2023-11-14 11:26:53 +01:00
dependabot[bot]
6d85433194 build(deps): bump alive-progress from 3.1.4 to 3.1.5 (#3033)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 09:41:32 +01:00
dependabot[bot]
7a6092a779 build(deps): bump google-api-python-client from 2.106.0 to 2.107.0 (#3032)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 09:16:00 +01:00
dependabot[bot]
4c84529aed build(deps-dev): bump pytest-xdist from 3.3.1 to 3.4.0 (#3031)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 08:48:02 +01:00
Sergio Garcia
512d3e018f chore(accessanalyzer): include service in allowlist_non_default_regions (#3025) 2023-11-14 08:00:17 +01:00
dependabot[bot]
c6aff985c9 build(deps-dev): bump moto from 4.2.7 to 4.2.8 (#3030)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 07:54:34 +01:00
Sergio Garcia
7fadf31a2b chore(release): update Prowler Version to 3.11.1 (#3021)
Co-authored-by: github-actions <noreply@github.com>
2023-11-10 12:53:07 +01:00
Sergio Garcia
e7d098ed1e chore(regions_update): Changes in regions for AWS services. (#3020)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-10 11:34:44 +01:00
Sergio Garcia
21fba27355 fix(iam): do not list tags for inline policies (#3014) 2023-11-10 09:51:19 +01:00
John Mastron
74e37307f7 fix(SQS): fix invalid SQS ARNs (#3016)
Co-authored-by: John Mastron <jmastron@jpl.nasa.gov>
2023-11-10 09:33:18 +01:00
Sergio Garcia
d9d7c009a5 fix(rds): check if engines exist in region (#3012) 2023-11-10 09:20:36 +01:00
Pepe Fagoaga
2220cf9733 refactor(allowlist): Simplify and handle corner cases (#3019) 2023-11-10 09:11:52 +01:00
Pepe Fagoaga
3325b72b86 fix(iam-sqs): Handle exceptions for non-existent resources (#3010) 2023-11-08 14:06:45 +01:00
Sergio Garcia
9182d56246 chore(regions_update): Changes in regions for AWS services. (#3011)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-08 10:42:23 +01:00
Nacho Rivera
299ece19a8 fix(clean local output dirs): clean dirs when output to s3 (#2997) 2023-11-08 10:05:24 +01:00
Sergio Garcia
0a0732d7c0 docs(gcp): update GCP permissions (#3008) 2023-11-07 14:06:22 +01:00
Sergio Garcia
28011d97a9 chore(regions_update): Changes in regions for AWS services. (#3007)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-07 11:04:45 +01:00
Sergio Garcia
e71b0d1b6a chore(regions_update): Changes in regions for AWS services. (#3001)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-07 11:04:36 +01:00
John Mastron
ec01b62a82 fix(aws): check all conditions in IAM policy parser (#3006)
Co-authored-by: John Mastron <jmastron@jpl.nasa.gov>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-11-07 10:40:34 +01:00
dependabot[bot]
12b45c6896 build(deps): bump google-api-python-client from 2.105.0 to 2.106.0 (#3005)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-07 09:45:51 +01:00
dependabot[bot]
51c60dd4ee build(deps): bump mkdocs-material from 9.4.7 to 9.4.8 (#3004)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-07 09:02:02 +01:00
425 changed files with 12956 additions and 16008 deletions


@@ -28,6 +28,7 @@ jobs:
README.md
docs/**
permissions/**
mkdocs.yml
- name: Install poetry
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |


@@ -136,26 +136,16 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
=== "AWS CloudShell"
Prowler can be easily executed in AWS CloudShell, but it has some prerequisites to be able to do so. AWS CloudShell is a container running `Amazon Linux release 2 (Karoo)`, which comes with Python 3.7; since Prowler requires Python >= 3.9, we first need to install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9, as it is already included in AL2023. Prowler can thus be easily installed following the Generic method of installation via pip. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
_Requirements_:
* First install all dependencies and then Python; in this case we need to compile it because there is no package available at the time of writing:
```
sudo yum -y install gcc openssl-devel bzip2-devel libffi-devel
wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
tar zxf Python-3.9.16.tgz
cd Python-3.9.16/
./configure --enable-optimizations
sudo make altinstall
python3.9 --version
cd
```
* Open AWS CloudShell `bash`.
_Commands_:
* Once Python 3.9 is available, we can install Prowler from pip:
```
pip3.9 install prowler
pip install prowler
prowler -v
```


@@ -32,3 +32,14 @@ Prowler's AWS Provider uses the Boto3 [Standard](https://boto3.amazonaws.com/v1/
- Retry attempts on nondescriptive, transient error codes. Specifically, these HTTP status codes: 500, 502, 503, 504.
- Any retry attempt will include an exponential backoff by a base factor of 2 for a maximum backoff time of 20 seconds.
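The upper bound of that schedule can be sketched as follows. This is only the deterministic cap implied by the description above (base factor 2, 20-second ceiling); boto3's standard retry mode also applies random jitter, so actual delays will be at or below these values:

```shell
# Upper bound of the backoff schedule: delay_max(attempt) = min(2^attempt, 20) seconds
awk 'BEGIN { for (a = 1; a <= 6; a++) { d = 2^a; if (d > 20) d = 20; print a, d } }'
```

After attempt 5 the delay stops growing, which is why raising the retry count beyond a handful of attempts mostly adds flat 20-second waits.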
## Notes for validating retry attempts
If you are making changes to Prowler and want to validate whether requests are being retried or abandoned, you can take the following approach:
* Run Prowler with `--log-level DEBUG` and `--log-file debuglogs.txt`
* Search for retry attempts using `grep -i 'Retry needed' debuglogs.txt`
This is based on the [AWS documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html#checking-retry-attempts-in-your-client-logs), which states that if a retry is performed, you will see a message starting with "Retry needed".
You can determine the total number of calls made using `grep -i 'Sending http request' debuglogs.txt | wc -l`
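On an illustrative log excerpt, the two pipelines above behave like this (the log lines below are fabricated examples; real entries come from the `--log-file` output):

```shell
# Fabricated excerpt of a boto3 debug log (illustrative only)
cat > debuglogs.txt <<'EOF'
botocore.retries.standard DEBUG Retry needed, retrying request after delay of: 2
botocore.endpoint DEBUG Sending http request: <AWSPreparedRequest method=GET>
botocore.endpoint DEBUG Sending http request: <AWSPreparedRequest method=GET>
EOF

grep -ci 'retry needed' debuglogs.txt                 # retry attempts
grep -i 'sending http request' debuglogs.txt | wc -l  # total HTTP calls
```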

View File

@@ -1,26 +1,26 @@
# AWS CloudShell
Prowler can be easily executed in AWS CloudShell, but there are some prerequisites to do so. AWS CloudShell is a container running `Amazon Linux release 2 (Karoo)`, which comes with Python 3.7; since Prowler requires Python >= 3.9, we need to install a newer version of Python first. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
- First install all dependencies and then Python; in this case we need to compile it because no package was available at the time of writing:
```
sudo yum -y install gcc openssl-devel bzip2-devel libffi-devel
wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
tar zxf Python-3.9.16.tgz
cd Python-3.9.16/
./configure --enable-optimizations
sudo make altinstall
python3.9 --version
cd
```
- Once Python 3.9 is available we can install Prowler from pip:
```
pip3.9 install prowler
```
- Now enjoy Prowler:
```
## Installation
After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9 as it's already included in AL2023. Prowler can thus be easily installed following the Generic method of installation via pip. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
```shell
pip install prowler
prowler -v
prowler
```
- To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`
## Download Files
To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`
## Clone Prowler from Github
The limited storage that AWS CloudShell provides for the user's home directory causes issues when installing the poetry dependencies to run Prowler from GitHub. Here is a workaround:
```shell
git clone https://github.com/prowler-cloud/prowler.git
cd prowler
pip install poetry
mkdir /tmp/pypoetry
poetry config cache-dir /tmp/pypoetry
poetry shell
poetry install
python prowler.py -v
```

View File

@@ -0,0 +1,187 @@
# Parallel Execution
The strategy used here is to execute Prowler once per service; you can adapt this approach to your requirements.
This can help with very large accounts, but please be aware of AWS API rate limits:
1. **Service-Specific Limits**: Each AWS service has its own rate limits. For instance, Amazon EC2 might have different rate limits for launching instances versus making API calls to describe instances.
2. **API Rate Limits**: Most of the rate limits in AWS are applied at the API level. Each API call to an AWS service counts towards the rate limit for that service.
3. **Throttling Responses**: When you exceed the rate limit for a service, AWS responds with a throttling error. In AWS SDKs, these are typically represented as `ThrottlingException` or `RateLimitExceeded` errors.
For information on Prowler's retrier configuration, please refer to this [page](https://docs.prowler.cloud/en/latest/tutorials/aws/boto3-configuration/).
> Note: You might need to increase the `--aws-retries-max-attempts` parameter from the default value of 3. The retrier follows an exponential backoff strategy.
## Linux
Generate a list of services that Prowler supports, and populate this info into a file:
```bash
prowler aws --list-services | awk -F"- " '{print $2}' | sed '/^$/d' > services
```
Remove any services you would like to skip from this file.
Then create a new Bash script file `parallel-prowler.sh` and add the following contents. Update the `profile` variable to the AWS CLI profile you want to run Prowler with.
```bash
#!/bin/bash
# Change these variables as needed
profile="your_profile"
account_id=$(aws sts get-caller-identity --profile "${profile}" --query 'Account' --output text)
echo "Executing in account: ${account_id}"
# Maximum number of concurrent processes
MAX_PROCESSES=5
# Loop through the services
while read service; do
echo "$(date '+%Y-%m-%d %H:%M:%S'): Starting job for service: ${service}"
# Run the command in the background
(prowler -p "$profile" -s "$service" -F "${account_id}-${service}" --ignore-unused-services --only-logs; echo "$(date '+%Y-%m-%d %H:%M:%S') - ${service} has completed") &
# Check if we have reached the maximum number of processes
while [ $(jobs -r | wc -l) -ge ${MAX_PROCESSES} ]; do
# Wait for a second before checking again
sleep 1
done
done < ./services
# Wait for all background processes to finish
wait
echo "All jobs completed"
```
Output will be stored in the `output/` folder that is in the same directory from which you executed the script.
## Windows
Generate a list of services that Prowler supports, and populate this info into a file:
```powershell
prowler aws --list-services | ForEach-Object {
# Capture lines that are likely service names
if ($_ -match '^\- \w+$') {
$_.Trim().Substring(2)
}
} | Where-Object {
# Filter out empty or null lines
$_ -ne $null -and $_ -ne ''
} | Set-Content -Path "services"
```
Remove any services you would like to skip from this file.
Then create a new PowerShell script file `parallel-prowler.ps1` and add the following contents. Update the `$profile` variable to the AWS CLI profile you want to run Prowler with.
Change any parameters you would like when calling Prowler in the `Start-Job -ScriptBlock` section. Note that you need to keep the `--only-logs` parameter; otherwise an encoding issue occurs when rendering the progress bar and Prowler won't execute successfully.
```powershell
$profile = "your_profile"
$account_id = Invoke-Expression -Command "aws sts get-caller-identity --profile $profile --query 'Account' --output text"
Write-Host "Executing Prowler in $account_id"
# Maximum number of concurrent jobs
$MAX_PROCESSES = 5
# Read services from a file
$services = Get-Content -Path "services"
# Array to keep track of started jobs
$jobs = @()
foreach ($service in $services) {
# Start the command as a job
$job = Start-Job -ScriptBlock {
prowler -p ${using:profile} -s ${using:service} -F "${using:account_id}-${using:service}" --ignore-unused-services --only-logs
$endTimestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
Write-Output "${endTimestamp} - $using:service has completed"
}
$jobs += $job
Write-Host "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') - Starting job for service: $service"
# Check if we have reached the maximum number of jobs
while (($jobs | Where-Object { $_.State -eq 'Running' }).Count -ge $MAX_PROCESSES) {
Start-Sleep -Seconds 1
# Check for any completed jobs and receive their output
$completedJobs = $jobs | Where-Object { $_.State -eq 'Completed' }
foreach ($completedJob in $completedJobs) {
Receive-Job -Job $completedJob -Keep | ForEach-Object { Write-Host $_ }
$jobs = $jobs | Where-Object { $_.Id -ne $completedJob.Id }
Remove-Job -Job $completedJob
}
}
}
# Check for any remaining completed jobs
$remainingCompletedJobs = $jobs | Where-Object { $_.State -eq 'Completed' }
foreach ($remainingJob in $remainingCompletedJobs) {
Receive-Job -Job $remainingJob -Keep | ForEach-Object { Write-Host $_ }
Remove-Job -Job $remainingJob
}
Write-Host "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') - All jobs completed"
```
Output will be stored in `C:\Users\YOUR-USER\Documents\output\`
## Combining the output files
Guidance is provided for the CSV file format. From the output directory, execute either of the following Bash or PowerShell scripts. The script collects the output from the CSV files, includes the header only from the first file, and writes the result as CombinedCSV.csv in the current working directory.
There is no logic for selecting which CSV files are combined. If you have additional CSV files from other actions, such as running a quick inventory, you will need to move them out of the current (or any nested) directory, or move the output you want to combine into its own folder and run the script from there.
```bash
#!/bin/bash
# Initialize a variable to indicate the first file
firstFile=true
# Find all CSV files and loop through them
find . -name "*.csv" -print0 | while IFS= read -r -d '' file; do
if [ "$firstFile" = true ]; then
# For the first file, keep the header
cat "$file" > CombinedCSV.csv
firstFile=false
else
# For subsequent files, skip the header
tail -n +2 "$file" >> CombinedCSV.csv
fi
done
```
```powershell
# Get all CSV files from current directory and its subdirectories
$csvFiles = Get-ChildItem -Recurse -Filter "*.csv"
# Initialize a variable to track if it's the first file
$firstFile = $true
# Loop through each CSV file
foreach ($file in $csvFiles) {
if ($firstFile) {
# For the first file, keep the header and change the flag
$combinedCsv = Import-Csv -Path $file.FullName
$firstFile = $false
} else {
# Import-Csv already excludes the header row, so append the records directly
$tempCsv = Import-Csv -Path $file.FullName
$combinedCsv += $tempCsv
}
}
# Export the combined data to a new CSV file
$combinedCsv | Export-Csv -Path "CombinedCSV.csv" -NoTypeInformation
```
## TODO: Additional Improvements
Some services need to instantiate another service to perform a check. For instance, `cloudwatch` will instantiate Prowler's `iam` service to perform the `cloudwatch_cross_account_sharing_disabled` check. When the `iam` service is instantiated, its `__init__` function pulls all the information required for that service. This opens an opportunity to improve the above script: grouping dependent services together would prevent the `iam` service (or any other cross-service reference) from being repeatedly instantiated. A complete mapping between these services still needs further investigation, but the following cross-references have been noted:
* inspector2 needs lambda and ec2
* cloudwatch needs iam
* dlm needs ec2
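One way to act on such a mapping, sketched with a hypothetical `service-groups` file where each line lists the services that should share a single Prowler run (the file name and grouping are illustrative, this assumes `-s` accepts multiple services, and `echo` stands in for the real invocation so the loop can be shown standalone):

```shell
#!/bin/bash
profile="your_profile"

# Hypothetical grouping file: one job per line, dependent services together
cat > service-groups <<'EOF'
inspector2 lambda ec2
cloudwatch iam
dlm ec2
EOF

while read -r group; do
  # Word-splitting of $group is intentional so -s receives each service as
  # a separate argument. Drop 'echo' once the grouping is validated.
  echo prowler -p "$profile" -s $group --ignore-unused-services --only-logs
done < service-groups
```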

View File

@@ -43,46 +43,71 @@ Hereunder is the structure for each of the supported report formats by Prowler:
![HTML Output](../img/output-html.png)
### CSV
The following are the columns present in the CSV format:
The CSV format has a set of columns common to all providers, followed by provider-specific columns.
The common columns are the following:
- ASSESSMENT_START_TIME
- FINDING_UNIQUE_ID
- PROVIDER
- CHECK_ID
- CHECK_TITLE
- CHECK_TYPE
- STATUS
- STATUS_EXTENDED
- SERVICE_NAME
- SUBSERVICE_NAME
- SEVERITY
- RESOURCE_TYPE
- RESOURCE_DETAILS
- RESOURCE_TAGS
- DESCRIPTION
- RISK
- RELATED_URL
- REMEDIATION_RECOMMENDATION_TEXT
- REMEDIATION_RECOMMENDATION_URL
- REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC
- REMEDIATION_RECOMMENDATION_CODE_TERRAFORM
- REMEDIATION_RECOMMENDATION_CODE_CLI
- REMEDIATION_RECOMMENDATION_CODE_OTHER
- COMPLIANCE
- CATEGORIES
- DEPENDS_ON
- RELATED_TO
- NOTES
These are followed by the provider-specific columns:
#### AWS
- PROFILE
- ACCOUNT_ID
- ACCOUNT_NAME
- ACCOUNT_EMAIL
- ACCOUNT_ARN
- ACCOUNT_ORG
- ACCOUNT_TAGS
- REGION
- CHECK_ID
- CHECK_TITLE
- CHECK_TYPE
- STATUS
- STATUS_EXTENDED
- SERVICE_NAME
- SUBSERVICE_NAME
- SEVERITY
- RESOURCE_ID
- RESOURCE_ARN
- RESOURCE_TYPE
- RESOURCE_DETAILS
- RESOURCE_TAGS
- DESCRIPTION
- COMPLIANCE
- RISK
- RELATED_URL
- REMEDIATION_RECOMMENDATION_TEXT
- REMEDIATION_RECOMMENDATION_URL
- REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC
- REMEDIATION_RECOMMENDATION_CODE_TERRAFORM
- REMEDIATION_RECOMMENDATION_CODE_CLI
- REMEDIATION_RECOMMENDATION_CODE_OTHER
- CATEGORIES
- DEPENDS_ON
- RELATED_TO
- NOTES
- ACCOUNT_NAME
- ACCOUNT_EMAIL
- ACCOUNT_ARN
- ACCOUNT_ORG
- ACCOUNT_TAGS
- REGION
- RESOURCE_ID
- RESOURCE_ARN
#### AZURE
- TENANT_DOMAIN
- SUBSCRIPTION
- RESOURCE_ID
- RESOURCE_NAME
#### GCP
- PROJECT_ID
- LOCATION
- RESOURCE_ID
- RESOURCE_NAME
> Since Prowler v3, the CSV column delimiter is the semicolon (`;`)
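Because of the semicolon delimiter, tools that default to commas must be told about it explicitly. A minimal sketch with `awk` on a fabricated two-row excerpt (the column subset and values are illustrative, not real report output):

```shell
# Fabricated excerpt of a Prowler v3 CSV report (semicolon-delimited)
cat > findings.csv <<'EOF'
CHECK_ID;STATUS;SEVERITY
iam_root_mfa_enabled;FAIL;high
s3_bucket_public_access;PASS;critical
EOF

# -F';' sets the field separator; skip the header and count FAIL findings
awk -F';' 'NR > 1 && $2 == "FAIL" {count++} END {print count}' findings.csv  # -> 1
```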
### JSON

View File

@@ -41,6 +41,7 @@ nav:
- Custom Metadata: tutorials/custom-checks-metadata.md
- Ignore Unused Services: tutorials/ignore-unused-services.md
- Pentesting: tutorials/pentesting.md
- Parallel Execution: tutorials/parallel-execution.md
- Developer Guide: developer-guide/introduction.md
- AWS:
- Authentication: tutorials/aws/authentication.md

poetry.lock generated
View File

@@ -295,18 +295,18 @@ files = [
[[package]]
name = "bandit"
version = "1.7.5"
version = "1.7.6"
description = "Security oriented static analyser for python code."
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "bandit-1.7.5-py3-none-any.whl", hash = "sha256:75665181dc1e0096369112541a056c59d1c5f66f9bb74a8d686c3c362b83f549"},
{file = "bandit-1.7.5.tar.gz", hash = "sha256:bdfc739baa03b880c2d15d0431b31c658ffc348e907fe197e54e0389dd59e11e"},
{file = "bandit-1.7.6-py3-none-any.whl", hash = "sha256:36da17c67fc87579a5d20c323c8d0b1643a890a2b93f00b3d1229966624694ff"},
{file = "bandit-1.7.6.tar.gz", hash = "sha256:72ce7bc9741374d96fb2f1c9a8960829885f1243ffde743de70a19cee353e8f3"},
]
[package.dependencies]
colorama = {version = ">=0.3.9", markers = "platform_system == \"Windows\""}
GitPython = ">=1.0.1"
GitPython = ">=3.1.30"
PyYAML = ">=5.3.1"
rich = "*"
stevedore = ">=1.20.0"
@@ -649,63 +649,63 @@ files = [
[[package]]
name = "coverage"
version = "7.3.2"
version = "7.3.3"
description = "Code coverage measurement for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "coverage-7.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d872145f3a3231a5f20fd48500274d7df222e291d90baa2026cc5152b7ce86bf"},
{file = "coverage-7.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:310b3bb9c91ea66d59c53fa4989f57d2436e08f18fb2f421a1b0b6b8cc7fffda"},
{file = "coverage-7.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f47d39359e2c3779c5331fc740cf4bce6d9d680a7b4b4ead97056a0ae07cb49a"},
{file = "coverage-7.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aa72dbaf2c2068404b9870d93436e6d23addd8bbe9295f49cbca83f6e278179c"},
{file = "coverage-7.3.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:beaa5c1b4777f03fc63dfd2a6bd820f73f036bfb10e925fce067b00a340d0f3f"},
{file = "coverage-7.3.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:dbc1b46b92186cc8074fee9d9fbb97a9dd06c6cbbef391c2f59d80eabdf0faa6"},
{file = "coverage-7.3.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:315a989e861031334d7bee1f9113c8770472db2ac484e5b8c3173428360a9148"},
{file = "coverage-7.3.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:d1bc430677773397f64a5c88cb522ea43175ff16f8bfcc89d467d974cb2274f9"},
{file = "coverage-7.3.2-cp310-cp310-win32.whl", hash = "sha256:a889ae02f43aa45032afe364c8ae84ad3c54828c2faa44f3bfcafecb5c96b02f"},
{file = "coverage-7.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:c0ba320de3fb8c6ec16e0be17ee1d3d69adcda99406c43c0409cb5c41788a611"},
{file = "coverage-7.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ac8c802fa29843a72d32ec56d0ca792ad15a302b28ca6203389afe21f8fa062c"},
{file = "coverage-7.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:89a937174104339e3a3ffcf9f446c00e3a806c28b1841c63edb2b369310fd074"},
{file = "coverage-7.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e267e9e2b574a176ddb983399dec325a80dbe161f1a32715c780b5d14b5f583a"},
{file = "coverage-7.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2443cbda35df0d35dcfb9bf8f3c02c57c1d6111169e3c85fc1fcc05e0c9f39a3"},
{file = "coverage-7.3.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4175e10cc8dda0265653e8714b3174430b07c1dca8957f4966cbd6c2b1b8065a"},
{file = "coverage-7.3.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:0cbf38419fb1a347aaf63481c00f0bdc86889d9fbf3f25109cf96c26b403fda1"},
{file = "coverage-7.3.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:5c913b556a116b8d5f6ef834038ba983834d887d82187c8f73dec21049abd65c"},
{file = "coverage-7.3.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1981f785239e4e39e6444c63a98da3a1db8e971cb9ceb50a945ba6296b43f312"},
{file = "coverage-7.3.2-cp311-cp311-win32.whl", hash = "sha256:43668cabd5ca8258f5954f27a3aaf78757e6acf13c17604d89648ecc0cc66640"},
{file = "coverage-7.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:e10c39c0452bf6e694511c901426d6b5ac005acc0f78ff265dbe36bf81f808a2"},
{file = "coverage-7.3.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:4cbae1051ab791debecc4a5dcc4a1ff45fc27b91b9aee165c8a27514dd160836"},
{file = "coverage-7.3.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:12d15ab5833a997716d76f2ac1e4b4d536814fc213c85ca72756c19e5a6b3d63"},
{file = "coverage-7.3.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c7bba973ebee5e56fe9251300c00f1579652587a9f4a5ed8404b15a0471f216"},
{file = "coverage-7.3.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fe494faa90ce6381770746077243231e0b83ff3f17069d748f645617cefe19d4"},
{file = "coverage-7.3.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6e9589bd04d0461a417562649522575d8752904d35c12907d8c9dfeba588faf"},
{file = "coverage-7.3.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:d51ac2a26f71da1b57f2dc81d0e108b6ab177e7d30e774db90675467c847bbdf"},
{file = "coverage-7.3.2-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:99b89d9f76070237975b315b3d5f4d6956ae354a4c92ac2388a5695516e47c84"},
{file = "coverage-7.3.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:fa28e909776dc69efb6ed975a63691bc8172b64ff357e663a1bb06ff3c9b589a"},
{file = "coverage-7.3.2-cp312-cp312-win32.whl", hash = "sha256:289fe43bf45a575e3ab10b26d7b6f2ddb9ee2dba447499f5401cfb5ecb8196bb"},
{file = "coverage-7.3.2-cp312-cp312-win_amd64.whl", hash = "sha256:7dbc3ed60e8659bc59b6b304b43ff9c3ed858da2839c78b804973f613d3e92ed"},
{file = "coverage-7.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f94b734214ea6a36fe16e96a70d941af80ff3bfd716c141300d95ebc85339738"},
{file = "coverage-7.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:af3d828d2c1cbae52d34bdbb22fcd94d1ce715d95f1a012354a75e5913f1bda2"},
{file = "coverage-7.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:630b13e3036e13c7adc480ca42fa7afc2a5d938081d28e20903cf7fd687872e2"},
{file = "coverage-7.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c9eacf273e885b02a0273bb3a2170f30e2d53a6d53b72dbe02d6701b5296101c"},
{file = "coverage-7.3.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d8f17966e861ff97305e0801134e69db33b143bbfb36436efb9cfff6ec7b2fd9"},
{file = "coverage-7.3.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:b4275802d16882cf9c8b3d057a0839acb07ee9379fa2749eca54efbce1535b82"},
{file = "coverage-7.3.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:72c0cfa5250f483181e677ebc97133ea1ab3eb68645e494775deb6a7f6f83901"},
{file = "coverage-7.3.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:cb536f0dcd14149425996821a168f6e269d7dcd2c273a8bff8201e79f5104e76"},
{file = "coverage-7.3.2-cp38-cp38-win32.whl", hash = "sha256:307adb8bd3abe389a471e649038a71b4eb13bfd6b7dd9a129fa856f5c695cf92"},
{file = "coverage-7.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:88ed2c30a49ea81ea3b7f172e0269c182a44c236eb394718f976239892c0a27a"},
{file = "coverage-7.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b631c92dfe601adf8f5ebc7fc13ced6bb6e9609b19d9a8cd59fa47c4186ad1ce"},
{file = "coverage-7.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:d3d9df4051c4a7d13036524b66ecf7a7537d14c18a384043f30a303b146164e9"},
{file = "coverage-7.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5f7363d3b6a1119ef05015959ca24a9afc0ea8a02c687fe7e2d557705375c01f"},
{file = "coverage-7.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2f11cc3c967a09d3695d2a6f03fb3e6236622b93be7a4b5dc09166a861be6d25"},
{file = "coverage-7.3.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:149de1d2401ae4655c436a3dced6dd153f4c3309f599c3d4bd97ab172eaf02d9"},
{file = "coverage-7.3.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:3a4006916aa6fee7cd38db3bfc95aa9c54ebb4ffbfc47c677c8bba949ceba0a6"},
{file = "coverage-7.3.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9028a3871280110d6e1aa2df1afd5ef003bab5fb1ef421d6dc748ae1c8ef2ebc"},
{file = "coverage-7.3.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9f805d62aec8eb92bab5b61c0f07329275b6f41c97d80e847b03eb894f38d083"},
{file = "coverage-7.3.2-cp39-cp39-win32.whl", hash = "sha256:d1c88ec1a7ff4ebca0219f5b1ef863451d828cccf889c173e1253aa84b1e07ce"},
{file = "coverage-7.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:b4767da59464bb593c07afceaddea61b154136300881844768037fd5e859353f"},
{file = "coverage-7.3.2-pp38.pp39.pp310-none-any.whl", hash = "sha256:ae97af89f0fbf373400970c0a21eef5aa941ffeed90aee43650b81f7d7f47637"},
{file = "coverage-7.3.2.tar.gz", hash = "sha256:be32ad29341b0170e795ca590e1c07e81fc061cb5b10c74ce7203491484404ef"},
{file = "coverage-7.3.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d874434e0cb7b90f7af2b6e3309b0733cde8ec1476eb47db148ed7deeb2a9494"},
{file = "coverage-7.3.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ee6621dccce8af666b8c4651f9f43467bfbf409607c604b840b78f4ff3619aeb"},
{file = "coverage-7.3.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1367aa411afb4431ab58fd7ee102adb2665894d047c490649e86219327183134"},
{file = "coverage-7.3.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f0f8f0c497eb9c9f18f21de0750c8d8b4b9c7000b43996a094290b59d0e7523"},
{file = "coverage-7.3.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db0338c4b0951d93d547e0ff8d8ea340fecf5885f5b00b23be5aa99549e14cfd"},
{file = "coverage-7.3.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d31650d313bd90d027f4be7663dfa2241079edd780b56ac416b56eebe0a21aab"},
{file = "coverage-7.3.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:9437a4074b43c177c92c96d051957592afd85ba00d3e92002c8ef45ee75df438"},
{file = "coverage-7.3.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:9e17d9cb06c13b4f2ef570355fa45797d10f19ca71395910b249e3f77942a837"},
{file = "coverage-7.3.3-cp310-cp310-win32.whl", hash = "sha256:eee5e741b43ea1b49d98ab6e40f7e299e97715af2488d1c77a90de4a663a86e2"},
{file = "coverage-7.3.3-cp310-cp310-win_amd64.whl", hash = "sha256:593efa42160c15c59ee9b66c5f27a453ed3968718e6e58431cdfb2d50d5ad284"},
{file = "coverage-7.3.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:8c944cf1775235c0857829c275c777a2c3e33032e544bcef614036f337ac37bb"},
{file = "coverage-7.3.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:eda7f6e92358ac9e1717ce1f0377ed2b9320cea070906ece4e5c11d172a45a39"},
{file = "coverage-7.3.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c854c1d2c7d3e47f7120b560d1a30c1ca221e207439608d27bc4d08fd4aeae8"},
{file = "coverage-7.3.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:222b038f08a7ebed1e4e78ccf3c09a1ca4ac3da16de983e66520973443b546bc"},
{file = "coverage-7.3.3-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff4800783d85bff132f2cc7d007426ec698cdce08c3062c8d501ad3f4ea3d16c"},
{file = "coverage-7.3.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fc200cec654311ca2c3f5ab3ce2220521b3d4732f68e1b1e79bef8fcfc1f2b97"},
{file = "coverage-7.3.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:307aecb65bb77cbfebf2eb6e12009e9034d050c6c69d8a5f3f737b329f4f15fb"},
{file = "coverage-7.3.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:ffb0eacbadb705c0a6969b0adf468f126b064f3362411df95f6d4f31c40d31c1"},
{file = "coverage-7.3.3-cp311-cp311-win32.whl", hash = "sha256:79c32f875fd7c0ed8d642b221cf81feba98183d2ff14d1f37a1bbce6b0347d9f"},
{file = "coverage-7.3.3-cp311-cp311-win_amd64.whl", hash = "sha256:243576944f7c1a1205e5cd658533a50eba662c74f9be4c050d51c69bd4532936"},
{file = "coverage-7.3.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:a2ac4245f18057dfec3b0074c4eb366953bca6787f1ec397c004c78176a23d56"},
{file = "coverage-7.3.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f9191be7af41f0b54324ded600e8ddbcabea23e1e8ba419d9a53b241dece821d"},
{file = "coverage-7.3.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:31c0b1b8b5a4aebf8fcd227237fc4263aa7fa0ddcd4d288d42f50eff18b0bac4"},
{file = "coverage-7.3.3-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ee453085279df1bac0996bc97004771a4a052b1f1e23f6101213e3796ff3cb85"},
{file = "coverage-7.3.3-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1191270b06ecd68b1d00897b2daddb98e1719f63750969614ceb3438228c088e"},
{file = "coverage-7.3.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:007a7e49831cfe387473e92e9ff07377f6121120669ddc39674e7244350a6a29"},
{file = "coverage-7.3.3-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:af75cf83c2d57717a8493ed2246d34b1f3398cb8a92b10fd7a1858cad8e78f59"},
{file = "coverage-7.3.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:811ca7373da32f1ccee2927dc27dc523462fd30674a80102f86c6753d6681bc6"},
{file = "coverage-7.3.3-cp312-cp312-win32.whl", hash = "sha256:733537a182b5d62184f2a72796eb6901299898231a8e4f84c858c68684b25a70"},
{file = "coverage-7.3.3-cp312-cp312-win_amd64.whl", hash = "sha256:e995efb191f04b01ced307dbd7407ebf6e6dc209b528d75583277b10fd1800ee"},
{file = "coverage-7.3.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:fbd8a5fe6c893de21a3c6835071ec116d79334fbdf641743332e442a3466f7ea"},
{file = "coverage-7.3.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:50c472c1916540f8b2deef10cdc736cd2b3d1464d3945e4da0333862270dcb15"},
{file = "coverage-7.3.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e9223a18f51d00d3ce239c39fc41410489ec7a248a84fab443fbb39c943616c"},
{file = "coverage-7.3.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f501e36ac428c1b334c41e196ff6bd550c0353c7314716e80055b1f0a32ba394"},
{file = "coverage-7.3.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:475de8213ed95a6b6283056d180b2442eee38d5948d735cd3d3b52b86dd65b92"},
{file = "coverage-7.3.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:afdcc10c01d0db217fc0a64f58c7edd635b8f27787fea0a3054b856a6dff8717"},
{file = "coverage-7.3.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:fff0b2f249ac642fd735f009b8363c2b46cf406d3caec00e4deeb79b5ff39b40"},
{file = "coverage-7.3.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:a1f76cfc122c9e0f62dbe0460ec9cc7696fc9a0293931a33b8870f78cf83a327"},
{file = "coverage-7.3.3-cp38-cp38-win32.whl", hash = "sha256:757453848c18d7ab5d5b5f1827293d580f156f1c2c8cef45bfc21f37d8681069"},
{file = "coverage-7.3.3-cp38-cp38-win_amd64.whl", hash = "sha256:ad2453b852a1316c8a103c9c970db8fbc262f4f6b930aa6c606df9b2766eee06"},
{file = "coverage-7.3.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3b15e03b8ee6a908db48eccf4e4e42397f146ab1e91c6324da44197a45cb9132"},
{file = "coverage-7.3.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:89400aa1752e09f666cc48708eaa171eef0ebe3d5f74044b614729231763ae69"},
{file = "coverage-7.3.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c59a3e59fb95e6d72e71dc915e6d7fa568863fad0a80b33bc7b82d6e9f844973"},
{file = "coverage-7.3.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9ede881c7618f9cf93e2df0421ee127afdfd267d1b5d0c59bcea771cf160ea4a"},
{file = "coverage-7.3.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f3bfd2c2f0e5384276e12b14882bf2c7621f97c35320c3e7132c156ce18436a1"},
{file = "coverage-7.3.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7f3bad1a9313401ff2964e411ab7d57fb700a2d5478b727e13f156c8f89774a0"},
{file = "coverage-7.3.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:65d716b736f16e250435473c5ca01285d73c29f20097decdbb12571d5dfb2c94"},
{file = "coverage-7.3.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a702e66483b1fe602717020a0e90506e759c84a71dbc1616dd55d29d86a9b91f"},
{file = "coverage-7.3.3-cp39-cp39-win32.whl", hash = "sha256:7fbf3f5756e7955174a31fb579307d69ffca91ad163467ed123858ce0f3fd4aa"},
{file = "coverage-7.3.3-cp39-cp39-win_amd64.whl", hash = "sha256:cad9afc1644b979211989ec3ff7d82110b2ed52995c2f7263e7841c846a75348"},
{file = "coverage-7.3.3-pp38.pp39.pp310-none-any.whl", hash = "sha256:d299d379b676812e142fb57662a8d0d810b859421412b4d7af996154c00c31bb"},
{file = "coverage-7.3.3.tar.gz", hash = "sha256:df04c64e58df96b4427db8d0559e95e2df3138c9916c96f9f6a4dd220db2fdb7"},
]
[package.dependencies]
@@ -716,34 +716,34 @@ toml = ["tomli"]
[[package]]
name = "cryptography"
version = "41.0.4"
version = "41.0.6"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = ">=3.7"
files = [
{file = "cryptography-41.0.4-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:80907d3faa55dc5434a16579952ac6da800935cd98d14dbd62f6f042c7f5e839"},
{file = "cryptography-41.0.4-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:35c00f637cd0b9d5b6c6bd11b6c3359194a8eba9c46d4e875a3660e3b400005f"},
{file = "cryptography-41.0.4-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cecfefa17042941f94ab54f769c8ce0fe14beff2694e9ac684176a2535bf9714"},
{file = "cryptography-41.0.4-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e40211b4923ba5a6dc9769eab704bdb3fbb58d56c5b336d30996c24fcf12aadb"},
{file = "cryptography-41.0.4-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:23a25c09dfd0d9f28da2352503b23e086f8e78096b9fd585d1d14eca01613e13"},
{file = "cryptography-41.0.4-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:2ed09183922d66c4ec5fdaa59b4d14e105c084dd0febd27452de8f6f74704143"},
{file = "cryptography-41.0.4-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5a0f09cefded00e648a127048119f77bc2b2ec61e736660b5789e638f43cc397"},
{file = "cryptography-41.0.4-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:9eeb77214afae972a00dee47382d2591abe77bdae166bda672fb1e24702a3860"},
{file = "cryptography-41.0.4-cp37-abi3-win32.whl", hash = "sha256:3b224890962a2d7b57cf5eeb16ccaafba6083f7b811829f00476309bce2fe0fd"},
{file = "cryptography-41.0.4-cp37-abi3-win_amd64.whl", hash = "sha256:c880eba5175f4307129784eca96f4e70b88e57aa3f680aeba3bab0e980b0f37d"},
{file = "cryptography-41.0.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:004b6ccc95943f6a9ad3142cfabcc769d7ee38a3f60fb0dddbfb431f818c3a67"},
{file = "cryptography-41.0.4-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:86defa8d248c3fa029da68ce61fe735432b047e32179883bdb1e79ed9bb8195e"},
{file = "cryptography-41.0.4-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:37480760ae08065437e6573d14be973112c9e6dcaf5f11d00147ee74f37a3829"},
{file = "cryptography-41.0.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:b5f4dfe950ff0479f1f00eda09c18798d4f49b98f4e2006d644b3301682ebdca"},
{file = "cryptography-41.0.4-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:7e53db173370dea832190870e975a1e09c86a879b613948f09eb49324218c14d"},
{file = "cryptography-41.0.4-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5b72205a360f3b6176485a333256b9bcd48700fc755fef51c8e7e67c4b63e3ac"},
{file = "cryptography-41.0.4-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:93530900d14c37a46ce3d6c9e6fd35dbe5f5601bf6b3a5c325c7bffc030344d9"},
{file = "cryptography-41.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:efc8ad4e6fc4f1752ebfb58aefece8b4e3c4cae940b0994d43649bdfce8d0d4f"},
{file = "cryptography-41.0.4-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c3391bd8e6de35f6f1140e50aaeb3e2b3d6a9012536ca23ab0d9c35ec18c8a91"},
{file = "cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:0d9409894f495d465fe6fda92cb70e8323e9648af912d5b9141d616df40a87b8"},
{file = "cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:8ac4f9ead4bbd0bc8ab2d318f97d85147167a488be0e08814a37eb2f439d5cf6"},
{file = "cryptography-41.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:047c4603aeb4bbd8db2756e38f5b8bd7e94318c047cfe4efeb5d715e08b49311"},
{file = "cryptography-41.0.4.tar.gz", hash = "sha256:7febc3094125fc126a7f6fb1f420d0da639f3f32cb15c8ff0dc3997c4549f51a"},
{file = "cryptography-41.0.6-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:0f27acb55a4e77b9be8d550d762b0513ef3fc658cd3eb15110ebbcbd626db12c"},
{file = "cryptography-41.0.6-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:ae236bb8760c1e55b7a39b6d4d32d2279bc6c7c8500b7d5a13b6fb9fc97be35b"},
{file = "cryptography-41.0.6-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afda76d84b053923c27ede5edc1ed7d53e3c9f475ebaf63c68e69f1403c405a8"},
{file = "cryptography-41.0.6-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:da46e2b5df770070412c46f87bac0849b8d685c5f2679771de277a422c7d0b86"},
{file = "cryptography-41.0.6-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ff369dd19e8fe0528b02e8df9f2aeb2479f89b1270d90f96a63500afe9af5cae"},
{file = "cryptography-41.0.6-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:b648fe2a45e426aaee684ddca2632f62ec4613ef362f4d681a9a6283d10e079d"},
{file = "cryptography-41.0.6-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5daeb18e7886a358064a68dbcaf441c036cbdb7da52ae744e7b9207b04d3908c"},
{file = "cryptography-41.0.6-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:068bc551698c234742c40049e46840843f3d98ad7ce265fd2bd4ec0d11306596"},
{file = "cryptography-41.0.6-cp37-abi3-win32.whl", hash = "sha256:2132d5865eea673fe6712c2ed5fb4fa49dba10768bb4cc798345748380ee3660"},
{file = "cryptography-41.0.6-cp37-abi3-win_amd64.whl", hash = "sha256:48783b7e2bef51224020efb61b42704207dde583d7e371ef8fc2a5fb6c0aabc7"},
{file = "cryptography-41.0.6-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:8efb2af8d4ba9dbc9c9dd8f04d19a7abb5b49eab1f3694e7b5a16a5fc2856f5c"},
{file = "cryptography-41.0.6-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c5a550dc7a3b50b116323e3d376241829fd326ac47bc195e04eb33a8170902a9"},
{file = "cryptography-41.0.6-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:85abd057699b98fce40b41737afb234fef05c67e116f6f3650782c10862c43da"},
{file = "cryptography-41.0.6-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:f39812f70fc5c71a15aa3c97b2bbe213c3f2a460b79bd21c40d033bb34a9bf36"},
{file = "cryptography-41.0.6-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:742ae5e9a2310e9dade7932f9576606836ed174da3c7d26bc3d3ab4bd49b9f65"},
{file = "cryptography-41.0.6-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:35f3f288e83c3f6f10752467c48919a7a94b7d88cc00b0668372a0d2ad4f8ead"},
{file = "cryptography-41.0.6-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:4d03186af98b1c01a4eda396b137f29e4e3fb0173e30f885e27acec8823c1b09"},
{file = "cryptography-41.0.6-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:b27a7fd4229abef715e064269d98a7e2909ebf92eb6912a9603c7e14c181928c"},
{file = "cryptography-41.0.6-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:398ae1fc711b5eb78e977daa3cbf47cec20f2c08c5da129b7a296055fbb22aed"},
{file = "cryptography-41.0.6-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:7e00fb556bda398b99b0da289ce7053639d33b572847181d6483ad89835115f6"},
{file = "cryptography-41.0.6-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:60e746b11b937911dc70d164060d28d273e31853bb359e2b2033c9e93e6f3c43"},
{file = "cryptography-41.0.6-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:3288acccef021e3c3c10d58933f44e8602cf04dba96d9796d70d537bb2f4bbc4"},
{file = "cryptography-41.0.6.tar.gz", hash = "sha256:422e3e31d63743855e43e5a6fcc8b4acab860f560f9321b0ee6269cc7ed70cc3"},
]
[package.dependencies]
@@ -794,13 +794,13 @@ graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "docker"
version = "6.1.3"
version = "7.0.0"
description = "A Python library for the Docker Engine API."
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "docker-6.1.3-py3-none-any.whl", hash = "sha256:aecd2277b8bf8e506e484f6ab7aec39abe0038e29fa4a6d3ba86c3fe01844ed9"},
{file = "docker-6.1.3.tar.gz", hash = "sha256:aa6d17830045ba5ef0168d5eaa34d37beeb113948c413affe1d5991fc11f9a20"},
{file = "docker-7.0.0-py3-none-any.whl", hash = "sha256:12ba681f2777a0ad28ffbcc846a69c31b4dfd9752b47eb425a274ee269c5e14b"},
{file = "docker-7.0.0.tar.gz", hash = "sha256:323736fb92cd9418fc5e7133bc953e11a9da04f4483f828b527db553f1e7e5a3"},
]
[package.dependencies]
@@ -808,10 +808,10 @@ packaging = ">=14.0"
pywin32 = {version = ">=304", markers = "sys_platform == \"win32\""}
requests = ">=2.26.0"
urllib3 = ">=1.26.0"
websocket-client = ">=0.32.0"
[package.extras]
ssh = ["paramiko (>=2.4.3)"]
websockets = ["websocket-client (>=1.3.0)"]
[[package]]
name = "dparse"
@@ -911,13 +911,13 @@ pyflakes = ">=3.1.0,<3.2.0"
[[package]]
name = "freezegun"
version = "1.2.2"
version = "1.3.1"
description = "Let your Python tests travel through time"
optional = false
python-versions = ">=3.6"
python-versions = ">=3.7"
files = [
{file = "freezegun-1.2.2-py3-none-any.whl", hash = "sha256:ea1b963b993cb9ea195adbd893a48d573fda951b0da64f60883d7e988b606c9f"},
{file = "freezegun-1.2.2.tar.gz", hash = "sha256:cd22d1ba06941384410cd967d8a99d5ae2442f57dfafeff2fda5de8dc5c05446"},
{file = "freezegun-1.3.1-py3-none-any.whl", hash = "sha256:065e77a12624d05531afa87ade12a0b9bdb53495c4573893252a055b545ce3ea"},
{file = "freezegun-1.3.1.tar.gz", hash = "sha256:48984397b3b58ef5dfc645d6a304b0060f612bcecfdaaf45ce8aff0077a6cb6a"},
]
[package.dependencies]
@@ -995,13 +995,13 @@ grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0dev)"]
[[package]]
name = "google-api-python-client"
version = "2.108.0"
version = "2.110.0"
description = "Google API Client Library for Python"
optional = false
python-versions = ">=3.7"
files = [
{file = "google-api-python-client-2.108.0.tar.gz", hash = "sha256:6396efca83185fb205c0abdbc1c2ee57b40475578c6af37f6d0e30a639aade99"},
{file = "google_api_python_client-2.108.0-py2.py3-none-any.whl", hash = "sha256:9d1327213e388943ebcd7db5ce6e7f47987a7e6874e3e1f6116010eea4a0e75d"},
{file = "google-api-python-client-2.110.0.tar.gz", hash = "sha256:1f825e48c7fdc3c96ad6aac179cb73c3755dfff41d16487fa7130e5efcfe7b76"},
{file = "google_api_python_client-2.110.0-py2.py3-none-any.whl", hash = "sha256:55e7ebd6079e34934b6751537eb13447110351ae3792a724a33825d7b671ba13"},
]
[package.dependencies]
@@ -1037,13 +1037,13 @@ requests = ["requests (>=2.20.0,<3.0.0dev)"]
[[package]]
name = "google-auth-httplib2"
version = "0.1.1"
version = "0.2.0"
description = "Google Authentication Library: httplib2 transport"
optional = false
python-versions = "*"
files = [
{file = "google-auth-httplib2-0.1.1.tar.gz", hash = "sha256:c64bc555fdc6dd788ea62ecf7bccffcf497bf77244887a3f3d7a5a02f8e3fc29"},
{file = "google_auth_httplib2-0.1.1-py2.py3-none-any.whl", hash = "sha256:42c50900b8e4dcdf8222364d1f0efe32b8421fb6ed72f2613f12f75cc933478c"},
{file = "google-auth-httplib2-0.2.0.tar.gz", hash = "sha256:38aa7badf48f974f1eb9861794e9c0cb2a0511a4ec0679b1f886d108f5640e05"},
{file = "google_auth_httplib2-0.2.0-py2.py3-none-any.whl", hash = "sha256:b65a0a2123300dd71281a7bf6e64d65a0759287df52729bdd1ae2e47dc311a3d"},
]
[package.dependencies]
@@ -1275,13 +1275,13 @@ files = [
[[package]]
name = "jsonschema"
version = "4.18.0"
version = "4.20.0"
description = "An implementation of JSON Schema validation for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "jsonschema-4.18.0-py3-none-any.whl", hash = "sha256:b508dd6142bd03f4c3670534c80af68cd7bbff9ea830b9cf2625d4a3c49ddf60"},
{file = "jsonschema-4.18.0.tar.gz", hash = "sha256:8caf5b57a990a98e9b39832ef3cb35c176fe331414252b6e1b26fd5866f891a4"},
{file = "jsonschema-4.20.0-py3-none-any.whl", hash = "sha256:ed6231f0429ecf966f5bc8dfef245998220549cbbcf140f913b7464c52c3b6b3"},
{file = "jsonschema-4.20.0.tar.gz", hash = "sha256:4f614fd46d8d61258610998997743ec5492a648b33cf478c1ddc23ed4598a5fa"},
]
[package.dependencies]
@@ -1577,13 +1577,13 @@ min-versions = ["babel (==2.9.0)", "click (==7.0)", "colorama (==0.4)", "ghp-imp
[[package]]
name = "mkdocs-material"
version = "9.4.10"
version = "9.5.2"
description = "Documentation that simply works"
optional = true
python-versions = ">=3.8"
files = [
{file = "mkdocs_material-9.4.10-py3-none-any.whl", hash = "sha256:207c4ebc07faebb220437d2c626edb0c9760c82ccfc484500bd3eb30dfce988c"},
{file = "mkdocs_material-9.4.10.tar.gz", hash = "sha256:421adedaeaa461dcaf55b8d406673934ade3d4f05ed9819e4cc7b4ee1d646a62"},
{file = "mkdocs_material-9.5.2-py3-none-any.whl", hash = "sha256:6ed0fbf4682491766f0ec1acc955db6901c2fd424c7ab343964ef51b819741f5"},
{file = "mkdocs_material-9.5.2.tar.gz", hash = "sha256:ca8b9cd2b3be53e858e5a1a45ac9668bd78d95d77a30288bb5ebc1a31db6184c"},
]
[package.dependencies]
@@ -1633,13 +1633,13 @@ test = ["pytest", "pytest-cov"]
[[package]]
name = "moto"
version = "4.2.9"
version = "4.2.12"
description = ""
optional = false
python-versions = ">=3.7"
files = [
{file = "moto-4.2.9-py2.py3-none-any.whl", hash = "sha256:c85289d13d15d5274d0a643381af1f1b03d7ee88f0943c9d2d6c28e6177a298a"},
{file = "moto-4.2.9.tar.gz", hash = "sha256:24de81eeaa450a20b57c5cdf9a757ea5216bddc7db798e335d2de1f2376bf324"},
{file = "moto-4.2.12-py2.py3-none-any.whl", hash = "sha256:bdcad46e066a55b7d308a786e5dca863b3cba04c6239c6974135a48d1198b3ab"},
{file = "moto-4.2.12.tar.gz", hash = "sha256:7c4d37f47becb4a0526b64df54484e988c10fde26861fc3b5c065bc78800cb59"},
]
[package.dependencies]
@@ -1655,7 +1655,7 @@ Jinja2 = ">=2.10.1"
jsondiff = {version = ">=1.1.2", optional = true, markers = "extra == \"all\""}
multipart = {version = "*", optional = true, markers = "extra == \"all\""}
openapi-spec-validator = {version = ">=0.5.0", optional = true, markers = "extra == \"all\""}
py-partiql-parser = {version = "0.4.2", optional = true, markers = "extra == \"all\""}
py-partiql-parser = {version = "0.5.0", optional = true, markers = "extra == \"all\""}
pyparsing = {version = ">=3.0.7", optional = true, markers = "extra == \"all\""}
python-dateutil = ">=2.1,<3.0.0"
python-jose = {version = ">=3.1.0,<4.0.0", extras = ["cryptography"], optional = true, markers = "extra == \"all\""}
@@ -1668,29 +1668,29 @@ werkzeug = ">=0.5,<2.2.0 || >2.2.0,<2.2.1 || >2.2.1"
xmltodict = "*"
[package.extras]
all = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
all = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
apigateway = ["PyYAML (>=5.1)", "ecdsa (!=0.15)", "openapi-spec-validator (>=0.5.0)", "python-jose[cryptography] (>=3.1.0,<4.0.0)"]
apigatewayv2 = ["PyYAML (>=5.1)"]
appsync = ["graphql-core"]
awslambda = ["docker (>=3.0.0)"]
batch = ["docker (>=3.0.0)"]
cloudformation = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
cloudformation = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
cognitoidp = ["ecdsa (!=0.15)", "python-jose[cryptography] (>=3.1.0,<4.0.0)"]
ds = ["sshpubkeys (>=3.1.0)"]
dynamodb = ["docker (>=3.0.0)", "py-partiql-parser (==0.4.2)"]
dynamodbstreams = ["docker (>=3.0.0)", "py-partiql-parser (==0.4.2)"]
dynamodb = ["docker (>=3.0.0)", "py-partiql-parser (==0.5.0)"]
dynamodbstreams = ["docker (>=3.0.0)", "py-partiql-parser (==0.5.0)"]
ebs = ["sshpubkeys (>=3.1.0)"]
ec2 = ["sshpubkeys (>=3.1.0)"]
efs = ["sshpubkeys (>=3.1.0)"]
eks = ["sshpubkeys (>=3.1.0)"]
glue = ["pyparsing (>=3.0.7)"]
iotdata = ["jsondiff (>=1.1.2)"]
proxy = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=2.5.1)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
resourcegroupstaggingapi = ["PyYAML (>=5.1)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "sshpubkeys (>=3.1.0)"]
proxy = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=2.5.1)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "multipart", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
resourcegroupstaggingapi = ["PyYAML (>=5.1)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "sshpubkeys (>=3.1.0)"]
route53resolver = ["sshpubkeys (>=3.1.0)"]
s3 = ["PyYAML (>=5.1)", "py-partiql-parser (==0.4.2)"]
s3crc32c = ["PyYAML (>=5.1)", "crc32c", "py-partiql-parser (==0.4.2)"]
server = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "flask (!=2.2.0,!=2.2.1)", "flask-cors", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.4.2)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
s3 = ["PyYAML (>=5.1)", "py-partiql-parser (==0.5.0)"]
s3crc32c = ["PyYAML (>=5.1)", "crc32c", "py-partiql-parser (==0.5.0)"]
server = ["PyYAML (>=5.1)", "aws-xray-sdk (>=0.93,!=0.96)", "cfn-lint (>=0.40.0)", "docker (>=3.0.0)", "ecdsa (!=0.15)", "flask (!=2.2.0,!=2.2.1)", "flask-cors", "graphql-core", "jsondiff (>=1.1.2)", "openapi-spec-validator (>=0.5.0)", "py-partiql-parser (==0.5.0)", "pyparsing (>=3.0.7)", "python-jose[cryptography] (>=3.1.0,<4.0.0)", "setuptools", "sshpubkeys (>=3.1.0)"]
ssm = ["PyYAML (>=5.1)"]
xray = ["aws-xray-sdk (>=0.93,!=0.96)", "setuptools"]
@@ -1854,17 +1854,17 @@ signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "openapi-schema-validator"
version = "0.6.0"
version = "0.6.2"
description = "OpenAPI schema validation for Python"
optional = false
python-versions = ">=3.8.0,<4.0.0"
files = [
{file = "openapi_schema_validator-0.6.0-py3-none-any.whl", hash = "sha256:9e95b95b621efec5936245025df0d6a7ffacd1551e91d09196b3053040c931d7"},
{file = "openapi_schema_validator-0.6.0.tar.gz", hash = "sha256:921b7c1144b856ca3813e41ecff98a4050f7611824dfc5c6ead7072636af0520"},
{file = "openapi_schema_validator-0.6.2-py3-none-any.whl", hash = "sha256:c4887c1347c669eb7cded9090f4438b710845cd0f90d1fb9e1b3303fb37339f8"},
{file = "openapi_schema_validator-0.6.2.tar.gz", hash = "sha256:11a95c9c9017912964e3e5f2545a5b11c3814880681fcacfb73b1759bb4f2804"},
]
[package.dependencies]
jsonschema = ">=4.18.0,<5.0.0"
jsonschema = ">=4.19.1,<5.0.0"
jsonschema-specifications = ">=2023.5.2,<2024.0.0"
rfc3339-validator = "*"
@@ -2015,17 +2015,17 @@ files = [
[[package]]
name = "py-partiql-parser"
version = "0.4.2"
version = "0.5.0"
description = "Pure Python PartiQL Parser"
optional = false
python-versions = "*"
files = [
{file = "py-partiql-parser-0.4.2.tar.gz", hash = "sha256:9c99d545be7897c6bfa97a107f6cfbcd92e359d394e4f3b95430e6409e8dd1e1"},
{file = "py_partiql_parser-0.4.2-py3-none-any.whl", hash = "sha256:f3f34de8dddf65ed2d47b4263560bbf97be1ecc6bd5c61da039ede90f26a10ce"},
{file = "py-partiql-parser-0.5.0.tar.gz", hash = "sha256:427a662e87d51a0a50150fc8b75c9ebb4a52d49129684856c40c88b8c8e027e4"},
{file = "py_partiql_parser-0.5.0-py3-none-any.whl", hash = "sha256:dc454c27526adf62deca5177ea997bf41fac4fd109c5d4c8d81f984de738ba8f"},
]
[package.extras]
dev = ["black (==22.6.0)", "flake8", "mypy (==0.971)", "pytest"]
dev = ["black (==22.6.0)", "flake8", "mypy", "pytest"]
[[package]]
name = "pyasn1"
@@ -2173,13 +2173,13 @@ tests = ["coverage[toml] (==5.0.4)", "pytest (>=6.0.0,<7.0.0)"]
[[package]]
name = "pylint"
version = "3.0.2"
version = "3.0.3"
description = "python code static checker"
optional = false
python-versions = ">=3.8.0"
files = [
{file = "pylint-3.0.2-py3-none-any.whl", hash = "sha256:60ed5f3a9ff8b61839ff0348b3624ceeb9e6c2a92c514d81c9cc273da3b6bcda"},
{file = "pylint-3.0.2.tar.gz", hash = "sha256:0d4c286ef6d2f66c8bfb527a7f8a629009e42c99707dec821a03e1b51a4c1496"},
{file = "pylint-3.0.3-py3-none-any.whl", hash = "sha256:7a1585285aefc5165db81083c3e06363a27448f6b467b3b0f30dbd0ac1f73810"},
{file = "pylint-3.0.3.tar.gz", hash = "sha256:58c2398b0301e049609a8429789ec6edf3aabe9b6c5fec916acd18639c16de8b"},
]
[package.dependencies]
@@ -2189,7 +2189,7 @@ dill = [
{version = ">=0.2", markers = "python_version < \"3.11\""},
{version = ">=0.3.6", markers = "python_version >= \"3.11\""},
]
isort = ">=4.2.5,<6"
isort = ">=4.2.5,<5.13.0 || >5.13.0,<6"
mccabe = ">=0.6,<0.8"
platformdirs = ">=2.2.0"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
@@ -2289,13 +2289,13 @@ pytest = "*"
[[package]]
name = "pytest-xdist"
version = "3.4.0"
version = "3.5.0"
description = "pytest xdist plugin for distributed testing, most importantly across multiple CPUs"
optional = false
python-versions = ">=3.7"
files = [
{file = "pytest-xdist-3.4.0.tar.gz", hash = "sha256:3a94a931dd9e268e0b871a877d09fe2efb6175c2c23d60d56a6001359002b832"},
{file = "pytest_xdist-3.4.0-py3-none-any.whl", hash = "sha256:e513118bf787677a427e025606f55e95937565e06dfaac8d87f55301e57ae607"},
{file = "pytest-xdist-3.5.0.tar.gz", hash = "sha256:cbb36f3d67e0c478baa57fa4edc8843887e0f6cfc42d677530a36d7472b32d8a"},
{file = "pytest_xdist-3.5.0-py3-none-any.whl", hash = "sha256:d075629c7e00b611df89f490a5063944bee7a4362a5ff11c7cc7824a03dfce24"},
]
[package.dependencies]
@@ -2631,17 +2631,17 @@ six = "*"
[[package]]
name = "rich"
version = "13.3.5"
version = "13.7.0"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
optional = false
python-versions = ">=3.7.0"
files = [
{file = "rich-13.3.5-py3-none-any.whl", hash = "sha256:69cdf53799e63f38b95b9bf9c875f8c90e78dd62b2f00c13a911c7a3b9fa4704"},
{file = "rich-13.3.5.tar.gz", hash = "sha256:2d11b9b8dd03868f09b4fffadc84a6a8cda574e40dc90821bd845720ebb8e89c"},
{file = "rich-13.7.0-py3-none-any.whl", hash = "sha256:6da14c108c4866ee9520bbffa71f6fe3962e193b7da68720583850cd4548e235"},
{file = "rich-13.7.0.tar.gz", hash = "sha256:5cb5123b5cf9ee70584244246816e9114227e0b98ad9176eede6ad54bf5403fa"},
]
[package.dependencies]
markdown-it-py = ">=2.2.0,<3.0.0"
markdown-it-py = ">=2.2.0"
pygments = ">=2.13.0,<3.0.0"
[package.extras]
@@ -2946,18 +2946,18 @@ files = [
[[package]]
name = "slack-sdk"
version = "3.24.0"
version = "3.26.1"
description = "The Slack API Platform SDK for Python"
optional = false
python-versions = ">=3.6.0"
files = [
{file = "slack_sdk-3.24.0-py2.py3-none-any.whl", hash = "sha256:cae64f0177a53d34cca59cc691d4535edd18929843a936b97cea421db9e4fbfe"},
{file = "slack_sdk-3.24.0.tar.gz", hash = "sha256:741ea5381e65f4407d24ed81203912cbd6bfe807a6704b1d3c5ad346c86000b6"},
{file = "slack_sdk-3.26.1-py2.py3-none-any.whl", hash = "sha256:f80f0d15f0fce539b470447d2a07b03ecdad6b24f69c1edd05d464cf21253a06"},
{file = "slack_sdk-3.26.1.tar.gz", hash = "sha256:d1600211eaa37c71a5f92daf4404074c3e6b3f5359a37c93c818b39d88ab4ca0"},
]
[package.extras]
optional = ["SQLAlchemy (>=1.4,<3)", "aiodns (>1.0)", "aiohttp (>=3.7.3,<4)", "boto3 (<=2)", "websocket-client (>=1,<2)", "websockets (>=10,<11)"]
testing = ["Flask (>=1,<2)", "Flask-Sockets (>=0.2,<1)", "Jinja2 (==3.0.3)", "Werkzeug (<2)", "black (==22.8.0)", "boto3 (<=2)", "click (==8.0.4)", "flake8 (>=5,<6)", "itsdangerous (==1.1.0)", "moto (>=3,<4)", "psutil (>=5,<6)", "pytest (>=6.2.5,<7)", "pytest-asyncio (<1)", "pytest-cov (>=2,<3)"]
testing = ["Flask (>=1,<2)", "Flask-Sockets (>=0.2,<1)", "Jinja2 (==3.0.3)", "Werkzeug (<2)", "black (==22.8.0)", "boto3 (<=2)", "click (==8.0.4)", "flake8 (>=5.0.4,<7)", "itsdangerous (==1.1.0)", "moto (>=3,<4)", "psutil (>=5,<6)", "pytest (>=7.0.1,<8)", "pytest-asyncio (<1)", "pytest-cov (>=2,<3)"]
[[package]]
name = "smmap"
@@ -3182,22 +3182,6 @@ files = [
[package.extras]
watchmedo = ["PyYAML (>=3.10)"]
[[package]]
name = "websocket-client"
version = "1.5.1"
description = "WebSocket client for Python with low level API options"
optional = false
python-versions = ">=3.7"
files = [
{file = "websocket-client-1.5.1.tar.gz", hash = "sha256:3f09e6d8230892547132177f575a4e3e73cfdf06526e20cc02aa1c3b47184d40"},
{file = "websocket_client-1.5.1-py3-none-any.whl", hash = "sha256:cdf5877568b7e83aa7cf2244ab56a3213de587bbe0ce9d8b9600fc77b455d89e"},
]
[package.extras]
docs = ["Sphinx (>=3.4)", "sphinx-rtd-theme (>=0.5)"]
optional = ["python-socks", "wsaccel"]
test = ["websockets"]
[[package]]
name = "werkzeug"
version = "3.0.1"
@@ -3337,4 +3321,4 @@ docs = ["mkdocs", "mkdocs-material"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.9,<3.12"
content-hash = "5eeda2c0549c1a40ebedefe766f0d7e27e78ed123aaacb3e42d242271774b1da"
content-hash = "ec078424ecc4e6c85d759cf88c4db94cf4a46021c33e6fe0b4a95072e1aa4c0f"

View File

@@ -7,13 +7,11 @@ import sys
from colorama import Fore, Style
from prowler.config.config import get_available_compliance_frameworks
from prowler.lib.banner import print_banner
from prowler.lib.check.check import (
bulk_load_checks_metadata,
bulk_load_compliance_frameworks,
exclude_checks_to_run,
exclude_services_to_run,
execute_checks,
list_categories,
list_checks_json,
list_services,
@@ -31,6 +29,7 @@ from prowler.lib.check.custom_checks_metadata import (
parse_custom_checks_metadata_file,
update_checks_metadata,
)
from prowler.lib.check.managers import ExecutionManager
from prowler.lib.cli.parser import ProwlerArgumentParser
from prowler.lib.logger import logger, set_logging_config
from prowler.lib.outputs.compliance.compliance import display_compliance_table
@@ -39,6 +38,7 @@ from prowler.lib.outputs.json import close_json
from prowler.lib.outputs.outputs import extract_findings_statistics
from prowler.lib.outputs.slack import send_slack_message
from prowler.lib.outputs.summary_table import display_summary_table
from prowler.lib.ui.live_display import live_display
from prowler.providers.aws.lib.s3.s3 import send_to_s3_bucket
from prowler.providers.aws.lib.security_hub.security_hub import (
batch_send_to_security_hub,
@@ -78,8 +78,10 @@ def prowler():
compliance_framework = args.compliance
custom_checks_metadata_file = args.custom_checks_metadata_file
if not args.no_banner:
print_banner(args)
live_display.initialize(args)
# if not args.no_banner:
# print_banner(args)
# We treat the compliance framework as another output format
if compliance_framework:
@@ -196,14 +198,16 @@ def prowler():
# Execute checks
findings = []
if len(checks_to_execute):
findings = execute_checks(
execution_manager = ExecutionManager(
checks_to_execute,
provider,
audit_info,
audit_output_options,
custom_checks_metadata,
)
findings = execution_manager.execute_checks()
else:
logger.error(
"There are no checks to execute. Please, check your input arguments"

View File

@@ -22,6 +22,10 @@ gcp_logo = "https://user-images.githubusercontent.com/38561120/235928332-eb4accd
orange_color = "\033[38;5;208m"
banner_color = "\033[1;92m"
# Severities
valid_severities = ["critical", "high", "medium", "low", "informational"]
# Statuses
finding_statuses = ["PASS", "FAIL", "MANUAL"]
# Compliance

View File

@@ -10,16 +10,16 @@ from pkgutil import walk_packages
from types import ModuleType
from typing import Any
from alive_progress import alive_bar
from colorama import Fore, Style
import prowler
from prowler.config.config import orange_color
from prowler.lib.check.compliance_models import load_compliance_framework
from prowler.lib.check.custom_checks_metadata import update_check_metadata
from prowler.lib.check.managers import ExecutionManager
from prowler.lib.check.models import Check, load_check_metadata
from prowler.lib.logger import logger
from prowler.lib.outputs.outputs import report
from prowler.lib.ui.live_display import live_display
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.aws.lib.mutelist.mutelist import mutelist_findings
from prowler.providers.common.common import get_global_provider
@@ -108,14 +108,20 @@ def exclude_services_to_run(
# Load checks from checklist.json
def parse_checks_from_file(input_file: str, provider: str) -> set:
checks_to_execute = set()
with open_file(input_file) as f:
json_file = parse_json_file(f)
"""parse_checks_from_file returns a set of checks read from the given file"""
try:
checks_to_execute = set()
with open_file(input_file) as f:
json_file = parse_json_file(f)
for check_name in json_file[provider]:
checks_to_execute.add(check_name)
for check_name in json_file[provider]:
checks_to_execute.add(check_name)
return checks_to_execute
return checks_to_execute
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
# Load checks from custom folder
@@ -311,7 +317,7 @@ def print_checks(
def parse_checks_from_compliance_framework(
compliance_frameworks: list, bulk_compliance_frameworks: dict
) -> list:
"""Parse checks from compliance frameworks specification"""
"""parse_checks_from_compliance_framework returns a set of checks from the given compliance_frameworks"""
checks_to_execute = set()
try:
for framework in compliance_frameworks:
@@ -489,47 +495,57 @@ def execute_checks(
print(
f"{Style.BRIGHT}Executing {checks_num} {check_noun}, please wait...{Style.RESET_ALL}\n"
)
with alive_bar(
total=len(checks_to_execute),
ctrl_c=False,
bar="blocks",
spinner="classic",
stats=False,
enrich_print=False,
) as bar:
for check_name in checks_to_execute:
# Recover service from check name
service = check_name.split("_")[0]
bar.title = (
f"-> Scanning {orange_color}{service}{Style.RESET_ALL} service"
execution_manager = ExecutionManager(provider, checks_to_execute)
total_checks = execution_manager.total_checks_per_service()
completed_checks = {service: 0 for service in total_checks}
service_findings = []
for service, check_name in execution_manager.execute_checks():
try:
check_findings = execute(
service,
check_name,
provider,
audit_output_options,
audit_info,
services_executed,
checks_executed,
custom_checks_metadata,
)
try:
check_findings = execute(
service,
check_name,
provider,
audit_output_options,
audit_info,
services_executed,
checks_executed,
custom_checks_metadata,
)
all_findings.extend(check_findings)
all_findings.extend(check_findings)
service_findings.extend(check_findings)
# Update the completed checks count
completed_checks[service] += 1
# If check does not exist in the provider or is from another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
bar()
bar.title = f"-> {Fore.GREEN}Scan completed!{Style.RESET_ALL}"
# Check if all checks for the service are completed
if completed_checks[service] == total_checks[service]:
# All checks for the service are completed
# Add a summary table or perform other actions
live_display.add_results_for_service(service, service_findings)
# Clear service_findings
service_findings = []
# If check does not exist in the provider or is from another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return all_findings
def create_check_service_dict(checks_to_execute):
output = {}
for check_name in checks_to_execute:
service = check_name.split("_")[0]
if service not in output.keys():
output[service] = []
output[service].append(check_name)
return output
def execute(
service: str,
check_name: str,
@@ -611,22 +627,32 @@ def update_audit_metadata(
)
def recover_checks_from_service(service_list: list, provider: str) -> list:
checks = set()
service_list = [
"awslambda" if service == "lambda" else service for service in service_list
]
for service in service_list:
modules = recover_checks_from_provider(provider, service)
if not modules:
logger.error(f"Service '{service}' does not have checks.")
def recover_checks_from_service(service_list: list, provider: str) -> set:
"""
Recover all checks from the selected provider and service
else:
for check_module in modules:
# Recover check name and module name from import path
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check_module[0].split(".")[-1]
# If the service is present in the group list passed as parameters
# if service_name in group_list: checks_from_arn.add(check_name)
checks.add(check_name)
return checks
Returns a set of checks from the given services
"""
try:
checks = set()
service_list = [
"awslambda" if service == "lambda" else service for service in service_list
]
for service in service_list:
service_checks = recover_checks_from_provider(provider, service)
if not service_checks:
logger.error(f"Service '{service}' does not have checks.")
else:
for check in service_checks:
# Recover check name and module name from import path
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check[0].split(".")[-1]
# If the service is present in the group list passed as parameters
# if service_name in group_list: checks_from_arn.add(check_name)
checks.add(check_name)
return checks
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)

View File

@@ -0,0 +1,48 @@
import ast
import os
import pathlib
from prowler.lib.logger import logger
class ImportFinder(ast.NodeVisitor):
def __init__(self, provider):
self.imports = set()
self.provider = provider
def visit_ImportFrom(self, node):
if node.module and f"prowler.providers.{self.provider}.services" in node.module:
for name in node.names:
if "_client" in name.name:
self.imports.add(name.name)
self.generic_visit(node)
def analyze_check_file(file_path, provider):
# Parse the check file
with open(file_path, "r") as file:
node = ast.parse(file.read(), filename=file_path)
finder = ImportFinder(provider)
finder.visit(node)
return list(finder.imports)
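Because ImportFinder walks the syntax tree rather than the raw text, it only matches real `from ... import ...` statements. A self-contained sketch of the same pattern, run against an in-memory source string (the check body is hypothetical):

```python
import ast

class ClientImportFinder(ast.NodeVisitor):
    """Collect *_client names imported from a provider's services package."""

    def __init__(self, provider):
        self.imports = set()
        self.provider = provider

    def visit_ImportFrom(self, node):
        if node.module and f"prowler.providers.{self.provider}.services" in node.module:
            for name in node.names:
                if "_client" in name.name:
                    self.imports.add(name.name)
        self.generic_visit(node)

source = (
    "from prowler.providers.aws.services.s3.s3_client import s3_client\n"
    "from os import path\n"
)
finder = ClientImportFinder("aws")
finder.visit(ast.parse(source))
```

Only the first import matches: `from os import path` is visited but filtered out by both the module-path and `_client` conditions.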
def get_dependencies_for_checks(provider, checks_dict):
current_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
prowler_dir = current_directory.parent.parent
check_dependencies = {}
for service_name, checks in checks_dict.items():
check_dependencies[service_name] = {}
for check_name in checks:
relative_path = f"providers/{provider}/services/{service_name}/{check_name}/{check_name}.py"
check_file_path = prowler_dir / relative_path
if not check_file_path.exists():
logger.error(
f"{check_name} does not exist at {relative_path}! Cannot determine service dependencies"
)
continue
clients = analyze_check_file(str(check_file_path), provider)
check_dependencies[service_name][check_name] = clients
return check_dependencies

View File

@@ -1,5 +1,6 @@
from colorama import Fore, Style
from prowler.config.config import valid_severities
from prowler.lib.check.check import (
parse_checks_from_compliance_framework,
parse_checks_from_file,
@@ -10,7 +11,6 @@ from prowler.lib.logger import logger
# Generate the list of checks to execute
# PENDING Test for this function
def load_checks_to_execute(
bulk_checks_metadata: dict,
bulk_compliance_frameworks: dict,
@@ -22,73 +22,93 @@ def load_checks_to_execute(
categories: set,
provider: str,
) -> set:
"""Generate the list of checks to execute based on the cloud provider and input arguments specified"""
checks_to_execute = set()
"""Generate the list of checks to execute based on the cloud provider and the input arguments given"""
try:
# Local subsets
checks_to_execute = set()
check_aliases = {}
check_severities = {key: [] for key in valid_severities}
check_categories = {}
# Handle if there are checks passed using -c/--checks
if check_list:
for check_name in check_list:
checks_to_execute.add(check_name)
# First, loop over the bulk_checks_metadata to extract the needed subsets
for check, metadata in bulk_checks_metadata.items():
# Aliases
for alias in metadata.CheckAliases:
check_aliases[alias] = check
# Handle if there are some severities passed using --severity
elif severities:
for check in bulk_checks_metadata:
# Check check's severity
if bulk_checks_metadata[check].Severity in severities:
checks_to_execute.add(check)
if service_list:
checks_to_execute = (
recover_checks_from_service(service_list, provider) & checks_to_execute
)
# Severities
if metadata.Severity:
check_severities[metadata.Severity].append(check)
# Handle if there are checks passed using -C/--checks-file
elif checks_file:
try:
# Categories
for category in metadata.Categories:
if category not in check_categories:
check_categories[category] = []
check_categories[category].append(check)
# Handle if there are checks passed using -c/--checks
if check_list:
for check_name in check_list:
checks_to_execute.add(check_name)
# Handle if there are some severities passed using --severity
elif severities:
for severity in severities:
checks_to_execute.update(check_severities[severity])
if service_list:
checks_to_execute = (
recover_checks_from_service(service_list, provider)
& checks_to_execute
)
# Handle if there are checks passed using -C/--checks-file
elif checks_file:
checks_to_execute = parse_checks_from_file(checks_file, provider)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
# Handle if there are services passed using -s/--services
elif service_list:
checks_to_execute = recover_checks_from_service(service_list, provider)
# Handle if there are services passed using -s/--services
elif service_list:
checks_to_execute = recover_checks_from_service(service_list, provider)
# Handle if there are compliance frameworks passed using --compliance
elif compliance_frameworks:
try:
# Handle if there are compliance frameworks passed using --compliance
elif compliance_frameworks:
checks_to_execute = parse_checks_from_compliance_framework(
compliance_frameworks, bulk_compliance_frameworks
)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
# Handle if there are categories passed using --categories
elif categories:
for cat in categories:
for check in bulk_checks_metadata:
# Check check's categories
if cat in bulk_checks_metadata[check].Categories:
checks_to_execute.add(check)
# Handle if there are categories passed using --categories
elif categories:
for category in categories:
checks_to_execute.update(check_categories[category])
# If there are no checks passed as argument
else:
try:
# If there are no checks passed as argument
else:
# Get all check modules to run with the specific provider
checks = recover_checks_from_provider(provider)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
else:
for check_info in checks:
# Recover check name from import path (last part)
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check_info[0]
checks_to_execute.add(check_name)
# Get Check Aliases mapping
check_aliases = {}
for check, metadata in bulk_checks_metadata.items():
for alias in metadata.CheckAliases:
check_aliases[alias] = check
# Check Aliases
checks_to_execute = update_checks_to_execute_with_aliases(
checks_to_execute, check_aliases
)
return checks_to_execute
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
def update_checks_to_execute_with_aliases(
checks_to_execute: set, check_aliases: dict
) -> set:
"""update_checks_to_execute_with_aliases returns the checks_to_execute updated using the check aliases."""
# Verify if any input check is an alias of another check
for input_check in checks_to_execute:
if (
@@ -101,5 +121,4 @@ def load_checks_to_execute(
print(
f"\nUsing alias {Fore.YELLOW}{input_check}{Style.RESET_ALL} for check {Fore.YELLOW}{check_aliases[input_check]}{Style.RESET_ALL}...\n"
)
return checks_to_execute
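The alias handling above substitutes any alias the user typed with its canonical check name before execution. A minimal version of that substitution (the alias and check names here are illustrative, not real Prowler aliases):

```python
def resolve_aliases(checks_to_execute, check_aliases):
    """Replace any aliased check name with its canonical name."""
    resolved = set()
    for input_check in checks_to_execute:
        # Fall back to the input name when it is not an alias
        resolved.add(check_aliases.get(input_check, input_check))
    return resolved

resolved = resolve_aliases(
    {"lambda_public", "s3_bucket_public_access"},
    {"lambda_public": "awslambda_function_url_public"},
)
```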

View File

@@ -3,9 +3,9 @@ import sys
import yaml
from jsonschema import validate
from prowler.config.config import valid_severities
from prowler.lib.logger import logger
valid_severities = ["critical", "high", "medium", "low", "informational"]
custom_checks_metadata_schema = {
"type": "object",
"properties": {

View File

@@ -0,0 +1,369 @@
import importlib
import os
import sys
import traceback
from types import ModuleType
from typing import Any, Set
from colorama import Fore, Style
from prowler.lib.check.check_to_client_mapper import get_dependencies_for_checks
from prowler.lib.check.custom_checks_metadata import update_check_metadata
from prowler.lib.check.models import Check
from prowler.lib.logger import logger
from prowler.lib.outputs.outputs import report
from prowler.lib.ui.live_display import live_display
from prowler.providers.aws.lib.mutelist.mutelist import mutelist_findings
from prowler.providers.common.common import get_global_provider
from prowler.providers.common.models import Audit_Metadata
from prowler.providers.common.outputs import Provider_Output_Options
class ExecutionManager:
def __init__(
self,
checks_to_execute: list,
provider: str,
audit_info: Any,
audit_output_options: Provider_Output_Options,
custom_checks_metadata: Any,
):
self.checks_to_execute = checks_to_execute
self.provider = provider
self.audit_info = audit_info
self.audit_output_options = audit_output_options
self.custom_checks_metadata = custom_checks_metadata
self.live_display = live_display
self.live_display.start()
self.loaded_clients = {} # defaultdict(lambda: False)
self.check_dict = self.create_check_service_dict(checks_to_execute)
self.check_dependencies = get_dependencies_for_checks(provider, self.check_dict)
self.remaining_checks = self.initialize_remaining_checks(
self.check_dependencies
)
self.services_queue = self.initialize_services_queue(self.check_dependencies)
# For tracking the executed services and checks
self.services_executed: Set[str] = set()
self.checks_executed: Set[str] = set()
# Initialize the Audit Metadata
self.audit_info.audit_metadata = Audit_Metadata(
services_scanned=0,
expected_checks=self.checks_to_execute,
completed_checks=0,
audit_progress=0,
)
def update_tracking(self, service: str, check: str):
self.services_executed.add(service)
self.checks_executed.add(check)
@staticmethod
def initialize_remaining_checks(check_dependencies):
remaining_checks = {}
for service, checks in check_dependencies.items():
for check_name, clients in checks.items():
remaining_checks[(service, check_name)] = clients
return remaining_checks
@staticmethod
def initialize_services_queue(check_dependencies):
return list(check_dependencies.keys())
@staticmethod
def create_check_service_dict(checks_to_execute):
output = {}
for check_name in checks_to_execute:
service = check_name.split("_")[0]
if service not in output.keys():
output[service] = []
output[service].append(check_name)
return output
def total_checks_per_service(self):
"""Returns a dictionary with the total number of checks for each service."""
total_checks = {}
for service, checks in self.check_dict.items():
total_checks[service] = len(checks)
return total_checks
def find_next_service(self):
# Prioritize services that use already loaded clients
for service in self.services_queue:
checks = self.check_dependencies[service]
if any(
client in self.loaded_clients
for check in checks.values()
for client in check
):
return service
return None if not self.services_queue else self.services_queue[0]
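The scheduling heuristic is simple: prefer any queued service whose checks reuse a client that is already in memory, otherwise take the head of the queue. Reduced to its essentials (the services and clients are illustrative):

```python
def find_next_service(services_queue, check_dependencies, loaded_clients):
    """Prefer a queued service whose checks reuse an already-loaded client."""
    for service in services_queue:
        checks = check_dependencies[service]
        if any(
            client in loaded_clients
            for clients in checks.values()
            for client in clients
        ):
            return service
    return services_queue[0] if services_queue else None

# s3_client is already loaded, so s3 jumps ahead of ec2 in the queue
next_service = find_next_service(
    ["ec2", "s3"],
    {"ec2": {"ec2_check": ["ec2_client"]}, "s3": {"s3_check": ["s3_client"]}},
    {"s3_client": object()},
)
```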
@staticmethod
def import_check(check_path: str) -> ModuleType:
"""
Imports an input check using its path
When importing a module using importlib.import_module, it's loaded and added to the sys.modules cache.
This means that the module remains in memory and is not garbage collected immediately after use, as it's still referenced in sys.modules.
This behavior is intentional, as importing modules can be a costly operation, and keeping them in memory allows for faster re-use.
release_clients deletes this reference once it is no longer required by any of the remaining checks
"""
lib = importlib.import_module(f"{check_path}")
return lib
# Imports a service client and tracks whether it has already been imported
def import_client(self, client_name):
if not self.loaded_clients.get(client_name):
# Dynamically import the client
module_name, _ = client_name.rsplit("_", 1)
client_module = importlib.import_module(
f"prowler.providers.{self.provider}.services.{module_name}.{client_name}"
)
self.loaded_clients[client_name] = client_module
def release_clients(self, completed_check_clients):
for client_name in completed_check_clients:
# Determine if any of the remaining checks still require the client
if not any(
client == client_name
for check in self.remaining_checks
for client in self.remaining_checks[check]
):
# Delete the reference to the client for this object
del self.loaded_clients[client_name]
module_name, _ = client_name.rsplit("_", 1)
# Delete the reference to the client in sys.modules
del sys.modules[
f"prowler.providers.aws.services.{module_name}.{client_name}"
]
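release_clients relies on the fact that importlib caches every imported module in sys.modules; dropping both the local reference and the cache entry makes the client module eligible for garbage collection, and any later import re-executes it. A small demonstration of that cache behavior using a standard-library module (json stands in here for a service client):

```python
import importlib
import sys

module = importlib.import_module("json")
was_cached = "json" in sys.modules  # import_module populates the cache

# Drop the cache entry; the next import builds a fresh module object.
del sys.modules["json"]
reloaded = importlib.import_module("json")
fresh_object = reloaded is not module
```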
def generate_checks(self):
"""
This is a generator function, which will:
* Determine the next service whose checks will be executed
* Load all the clients which are required by the checks into memory (init them)
* Yield the service and check name, 1-by-1, to be used within execute_checks
* Pass the completed checks to release_clients to determine whether the clients that were required by the check are no longer needed and can be garbage collected
It completes the checks for a service before moving on to the next one
It uses find_next_service to prioritize the next service based on whether any of that service's checks require a client that has already been loaded
"""
while self.remaining_checks:
current_service = self.find_next_service()
if not current_service:
# Execution has completed, return
break
# Remove the service from the services_queue
self.services_queue.remove(current_service)
checks = self.check_dependencies[current_service]
clients_for_service = list(
set(client for client_list in checks.values() for client in client_list)
)
for client in clients_for_service:
self.live_display.add_client_init_section(client)
self.import_client(client)
# Add the display component
total_checks = len(self.check_dict[current_service])
self.live_display.add_service_section(current_service, total_checks)
for check_name, clients_for_check in checks.items():
yield current_service, check_name
self.live_display.increment_check_progress()
self.live_display.increment_overall_check_progress()
del self.remaining_checks[(current_service, check_name)]
self.release_clients(clients_for_check)
self.live_display.increment_overall_service_progress()
def execute_checks(self) -> list:
# List to store all the check's findings
all_findings = []
# Services and checks executed for the Audit Status
global_provider = get_global_provider()
# Initialize the Audit Metadata
global_provider.audit_metadata = Audit_Metadata(
services_scanned=0,
expected_checks=self.checks_to_execute,
completed_checks=0,
audit_progress=0,
)
if os.name != "nt":
try:
from resource import RLIMIT_NOFILE, getrlimit
# Check ulimit for the maximum system open files
soft, _ = getrlimit(RLIMIT_NOFILE)
if soft < 4096:
logger.warning(
f"Your session file descriptors limit ({soft} open files) is below 4096. We recommend to increase it to avoid errors. Solve it running this command `ulimit -n 4096`. For more info visit https://docs.prowler.cloud/en/latest/troubleshooting/"
)
except Exception as error:
logger.error("Unable to retrieve ulimit default settings")
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# Execution with the --only-logs flag
if self.audit_output_options.only_logs:
for service, check_name in self.generate_checks():
try:
check_findings = self.execute(service, check_name)
all_findings.extend(check_findings)
# If the check does not exist in the provider or belongs to another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {self.provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
# Default execution
total_checks = self.total_checks_per_service()
self.live_display.add_overall_progress_section(
total_checks_dict=total_checks
)
# For tracking when a service is completed
completed_checks = {service: 0 for service in total_checks}
service_findings = []
for service, check_name in self.generate_checks():
try:
check_findings = self.execute(
service,
check_name,
)
all_findings.extend(check_findings)
service_findings.extend(check_findings)
# Update the completed checks count
completed_checks[service] += 1
# Check if all checks for the service are completed
if completed_checks[service] == total_checks[service]:
# All checks for the service are completed
# Add a summary table or perform other actions
live_display.add_results_for_service(service, service_findings)
# Clear service_findings
service_findings = []
# If the check does not exist in the provider or belongs to another provider
except ModuleNotFoundError:
logger.error(
f"Check '{check_name}' was not found for the {self.provider.upper()} provider"
)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
self.live_display.hide_service_section()
return all_findings
def execute(
self,
service: str,
check_name: str,
):
try:
# Import check module
check_module_path = f"prowler.providers.{self.provider}.services.{service}.{check_name}.{check_name}"
lib = self.import_check(check_module_path)
# Recover functions from check
check_to_execute = getattr(lib, check_name)
c = check_to_execute()
# Update check metadata to reflect that in the outputs
if self.custom_checks_metadata and self.custom_checks_metadata[
"Checks"
].get(c.CheckID):
c = update_check_metadata(
c, self.custom_checks_metadata["Checks"][c.CheckID]
)
# Run check
check_findings = self.run_check(c, self.audit_output_options)
# Update Audit Status
self.update_tracking(service, check_name)
self.update_audit_metadata()
# Mutelist findings
if self.audit_output_options.mutelist_file:
check_findings = mutelist_findings(
self.audit_output_options.mutelist_file,
self.audit_info.audited_account,
check_findings,
)
# Report the check's findings
report(check_findings, self.audit_output_options, self.audit_info)
if os.environ.get("PROWLER_REPORT_LIB_PATH"):
try:
logger.info("Using custom report interface ...")
lib = os.environ["PROWLER_REPORT_LIB_PATH"]
outputs_module = importlib.import_module(lib)
custom_report_interface = getattr(outputs_module, "report")
custom_report_interface(
check_findings, self.audit_output_options, self.audit_info
)
except Exception:
sys.exit(1)
except Exception as error:
logger.error(
f"{check_name} - {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return check_findings
@staticmethod
def run_check(check: Check, output_options: Provider_Output_Options) -> list:
findings = []
if output_options.verbose:
print(
f"\nCheck ID: {check.CheckID} - {Fore.MAGENTA}{check.ServiceName}{Fore.YELLOW} [{check.Severity}]{Style.RESET_ALL}"
)
logger.debug(f"Executing check: {check.CheckID}")
try:
findings = check.execute()
except Exception as error:
if not output_options.only_logs:
print(
f"Something went wrong in {check.CheckID}, please use --log-level ERROR"
)
logger.error(
f"{check.CheckID} -- {error.__class__.__name__}[{traceback.extract_tb(error.__traceback__)[-1].lineno}]: {error}"
)
finally:
return findings
def update_audit_metadata(self):
"""update_audit_metadata returns the audit_metadata updated with the new status
Updates the given audit_metadata using the length of the services_executed and checks_executed
"""
try:
self.audit_info.audit_metadata.services_scanned = len(
self.services_executed
)
self.audit_info.audit_metadata.completed_checks = len(self.checks_executed)
self.audit_info.audit_metadata.audit_progress = (
100
* len(self.checks_executed)
/ len(self.audit_info.audit_metadata.expected_checks)
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
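The progress figure computed above is a straight percentage of completed checks over the expected-check list. In isolation (check names are placeholders):

```python
def audit_progress(completed_checks, expected_checks):
    """Percentage of completed checks over the expected total."""
    return 100 * len(completed_checks) / len(expected_checks)

progress = audit_progress(
    {"check_a", "check_b"},
    ["check_a", "check_b", "check_c", "check_d"],
)
```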

View File

@@ -2,10 +2,13 @@ import os
import sys
from abc import ABC, abstractmethod
from dataclasses import dataclass
from functools import wraps
from pydantic import BaseModel, ValidationError
from pydantic.main import ModelMetaclass
from prowler.lib.logger import logger
from prowler.lib.ui.live_display import live_display
class Code(BaseModel):
@@ -57,9 +60,29 @@ class Check_Metadata_Model(BaseModel):
Compliance: list = None
class Check(ABC, Check_Metadata_Model):
class CheckMeta(ModelMetaclass):
"""
Dynamically decorates the execute function of all subclasses of the Check class
Making CheckMeta inherit from ModelMetaclass ensures that all the features provided by Pydantic's BaseModel (data validation, serialization, and so forth) are preserved; CheckMeta just adds behavior (decorator application) on top of them.
This also works because ModelMetaclass inherits from ABCMeta, as does the ABC class (it comes down to how metaclass resolution works when a class inherits from other classes that each have a metaclass).
The primary role of CheckMeta is to automatically apply a decorator to the execute method of subclasses. This behavior does not conflict with the typical responsibilities of ModelMetaclass.
"""
def __new__(cls, name, bases, dct):
if "execute" in dct and not getattr(
dct["execute"], "__isabstractmethod__", False
):
dct["execute"] = Check.update_title_with_findings_decorator(dct["execute"])
return super(CheckMeta, cls).__new__(cls, name, bases, dct)
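The essential move in this metaclass pattern is that `__new__` sees the class namespace (`dct`) before the class object exists, so it can wrap `execute` transparently. A stripped-down sketch using plain `type` instead of Pydantic's ModelMetaclass (the decorator and class names are illustrative):

```python
from functools import wraps

def record_result(func):
    """Wrap a method so its return value is stashed on the instance."""
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        result = func(self, *args, **kwargs)
        self.last_result = result
        return result
    return wrapper

class DecoratingMeta(type):
    """Wrap 'execute' in every class built with this metaclass."""
    def __new__(cls, name, bases, dct):
        if "execute" in dct:
            dct["execute"] = record_result(dct["execute"])
        return super().__new__(cls, name, bases, dct)

class DemoCheck(metaclass=DecoratingMeta):
    def execute(self):
        return ["finding"]

check = DemoCheck()
findings = check.execute()  # transparently wrapped by the metaclass
```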
class Check(ABC, Check_Metadata_Model, metaclass=CheckMeta):
"""Prowler Check"""
title_bar_task: int = None
progress_task: int = None
def __init__(self, **data):
"""Check's init function. Calls the CheckMetadataModel init."""
# Parse the Check's metadata file
@@ -72,6 +95,43 @@ class Check(ABC, Check_Metadata_Model):
# Calls parents init function
super().__init__(**data)
self.live_display_enabled = False
service_section = live_display.get_service_section()
if service_section:
self.live_display_enabled = True
self.title_bar_task = service_section.title_bar.add_task(
f"{self.CheckTitle}...", start=False
)
def increment_task_progress(self):
if self.live_display_enabled:
current_section = live_display.get_service_section()
current_section.task_progress.update(self.progress_task, advance=1)
def start_task(self, message, count):
if self.live_display_enabled:
current_section = live_display.get_service_section()
self.progress_task = current_section.task_progress.add_task(
description=message, total=count, visible=True
)
def update_title_with_findings(self, findings):
if self.live_display_enabled:
current_section = live_display.get_service_section()
# current_section.task_progress.remove_task(self.progress_task)
total_failed = len(
[report for report in findings if report.status == "FAIL"]
)
total_checked = len(findings)
if total_failed == 0:
message = f"{self.CheckTitle} [pass]All resources passed ({total_checked})[/pass]"
else:
message = f"{self.CheckTitle} [fail]{total_failed}/{total_checked} failed![/fail]"
current_section.title_bar.update(
task_id=self.title_bar_task, description=message
)
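The title update counts FAIL findings to choose between the all-passed and N/M-failed messages. The same counting, with the rich-style markup omitted and a minimal stand-in for a finding object:

```python
class Finding:
    """Minimal stand-in for a check report with a status field."""
    def __init__(self, status):
        self.status = status

def summarize(check_title, findings):
    """Build the pass/fail summary line shown in the title bar."""
    total_failed = len([f for f in findings if f.status == "FAIL"])
    total_checked = len(findings)
    if total_failed == 0:
        return f"{check_title} All resources passed ({total_checked})"
    return f"{check_title} {total_failed}/{total_checked} failed!"

message = summarize("Demo check", [Finding("PASS"), Finding("FAIL"), Finding("FAIL")])
```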
def metadata(self) -> dict:
"""Return the JSON representation of the check's metadata"""
return self.json()
@@ -80,6 +140,24 @@ class Check(ABC, Check_Metadata_Model):
def execute(self):
"""Execute the check's logic"""
@staticmethod
def update_title_with_findings_decorator(func):
"""
Decorator to update the title bar in the live_display with findings after executing a check.
"""
@wraps(func)
def wrapper(check_instance, *args, **kwargs):
# Execute the original check's logic
findings = func(check_instance, *args, **kwargs)
# Update the title bar with the findings
check_instance.update_title_with_findings(findings)
return findings
return wrapper
@dataclass
class Check_Report:

View File

@@ -7,6 +7,7 @@ from prowler.config.config import (
check_current_version,
default_config_file_path,
default_output_directory,
valid_severities,
finding_statuses,
)
from prowler.providers.common.arguments import (
@@ -225,8 +226,8 @@ Detailed documentation at https://docs.prowler.cloud
common_checks_parser.add_argument(
"--severity",
nargs="+",
help="List of severities to be executed [informational, low, medium, high, critical]",
choices=["informational", "low", "medium", "high", "critical"],
help=f"List of severities to be executed {valid_severities}",
choices=valid_severities,
)
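Driving both the help string and the choices from the same valid_severities list keeps the two in sync. The same argparse wiring in isolation (valid_severities is inlined here for the sketch):

```python
import argparse

valid_severities = ["critical", "high", "medium", "low", "informational"]

parser = argparse.ArgumentParser()
parser.add_argument(
    "--severity",
    nargs="+",  # accept one or more severities
    help=f"List of severities to be executed {valid_severities}",
    choices=valid_severities,
)
args = parser.parse_args(["--severity", "high", "critical"])
```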
group.add_argument(
"--compliance",

View File

@@ -0,0 +1,642 @@
import sys
from csv import DictWriter
from colorama import Fore, Style
from tabulate import tabulate
from prowler.config.config import orange_color, timestamp
from prowler.lib.check.models import Check_Report
from prowler.lib.logger import logger
from prowler.lib.outputs.models import (
Check_Output_CSV_AWS_CIS,
Check_Output_CSV_AWS_ISO27001_2013,
Check_Output_CSV_AWS_Well_Architected,
Check_Output_CSV_ENS_RD2022,
Check_Output_CSV_GCP_CIS,
Check_Output_CSV_Generic_Compliance,
Check_Output_MITRE_ATTACK,
generate_csv_fields,
unroll_list,
)
from prowler.lib.utils.utils import outputs_unix_timestamp
def add_manual_controls(output_options, audit_info, file_descriptors):
try:
# Check if MANUAL control was already added to output
if "manual_check" in output_options.bulk_checks_metadata:
manual_finding = Check_Report(
output_options.bulk_checks_metadata["manual_check"].json()
)
manual_finding.status = "INFO"
manual_finding.status_extended = "Manual check"
manual_finding.resource_id = "manual_check"
manual_finding.resource_name = "Manual check"
manual_finding.region = ""
manual_finding.location = ""
manual_finding.project_id = ""
fill_compliance(
output_options, manual_finding, audit_info, file_descriptors
)
del output_options.bulk_checks_metadata["manual_check"]
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def fill_compliance(output_options, finding, audit_info, file_descriptors):
try:
# We have to retrieve all the check's compliance requirements
check_compliance = output_options.bulk_checks_metadata[
finding.check_metadata.CheckID
].Compliance
for compliance in check_compliance:
csv_header = compliance_row = compliance_output = None
if (
compliance.Framework == "ENS"
and compliance.Version == "RD2022"
and "ens_rd2022_aws" in output_options.output_modes
):
compliance_output = "ens_rd2022_aws"
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_ENS_RD2022(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_IdGrupoControl=attribute.IdGrupoControl,
Requirements_Attributes_Marco=attribute.Marco,
Requirements_Attributes_Categoria=attribute.Categoria,
Requirements_Attributes_DescripcionControl=attribute.DescripcionControl,
Requirements_Attributes_Nivel=attribute.Nivel,
Requirements_Attributes_Tipo=attribute.Tipo,
Requirements_Attributes_Dimensiones=",".join(
attribute.Dimensiones
),
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(Check_Output_CSV_ENS_RD2022)
elif compliance.Framework == "CIS" and "cis_" in str(
output_options.output_modes
):
compliance_output = (
"cis_" + compliance.Version + "_" + compliance.Provider.lower()
)
# Only with the version of CIS that was selected
if compliance_output in str(output_options.output_modes):
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
if compliance.Provider == "AWS":
compliance_row = Check_Output_CSV_AWS_CIS(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_Profile=attribute.Profile,
Requirements_Attributes_AssessmentStatus=attribute.AssessmentStatus,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_RationaleStatement=attribute.RationaleStatement,
Requirements_Attributes_ImpactStatement=attribute.ImpactStatement,
Requirements_Attributes_RemediationProcedure=attribute.RemediationProcedure,
Requirements_Attributes_AuditProcedure=attribute.AuditProcedure,
Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
Requirements_Attributes_References=attribute.References,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(
Check_Output_CSV_AWS_CIS
)
elif compliance.Provider == "GCP":
compliance_row = Check_Output_CSV_GCP_CIS(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
ProjectId=finding.project_id,
Location=finding.location.lower(),
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_Profile=attribute.Profile,
Requirements_Attributes_AssessmentStatus=attribute.AssessmentStatus,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_RationaleStatement=attribute.RationaleStatement,
Requirements_Attributes_ImpactStatement=attribute.ImpactStatement,
Requirements_Attributes_RemediationProcedure=attribute.RemediationProcedure,
Requirements_Attributes_AuditProcedure=attribute.AuditProcedure,
Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
Requirements_Attributes_References=attribute.References,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
ResourceName=finding.resource_name,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(
Check_Output_CSV_GCP_CIS
)
elif (
"AWS-Well-Architected-Framework" in compliance.Framework
and compliance.Provider == "AWS"
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
if compliance_output in output_options.output_modes:
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_AWS_Well_Architected(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Name=attribute.Name,
Requirements_Attributes_WellArchitectedQuestionId=attribute.WellArchitectedQuestionId,
Requirements_Attributes_WellArchitectedPracticeId=attribute.WellArchitectedPracticeId,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_SubSection=attribute.SubSection,
Requirements_Attributes_LevelOfRisk=attribute.LevelOfRisk,
Requirements_Attributes_AssessmentMethod=attribute.AssessmentMethod,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_ImplementationGuidanceUrl=attribute.ImplementationGuidanceUrl,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(
Check_Output_CSV_AWS_Well_Architected
)
elif (
compliance.Framework == "ISO27001"
and compliance.Version == "2013"
and compliance.Provider == "AWS"
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
if compliance_output in output_options.output_modes:
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
requirement_name = requirement.Name
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_AWS_ISO27001_2013(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Name=requirement_name,
Requirements_Description=requirement_description,
Requirements_Attributes_Category=attribute.Category,
Requirements_Attributes_Objetive_ID=attribute.Objetive_ID,
Requirements_Attributes_Objetive_Name=attribute.Objetive_Name,
Requirements_Attributes_Check_Summary=attribute.Check_Summary,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(Check_Output_CSV_AWS_ISO27001_2013)
elif (
compliance.Framework == "MITRE-ATTACK"
and compliance.Version == ""
and compliance.Provider == "AWS"
):
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
if compliance_output in output_options.output_modes:
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
requirement_name = requirement.Name
attributes_aws_services = ""
attributes_categories = ""
attributes_values = ""
attributes_comments = ""
for attribute in requirement.Attributes:
attributes_aws_services += attribute.AWSService + "\n"
attributes_categories += attribute.Category + "\n"
attributes_values += attribute.Value + "\n"
attributes_comments += attribute.Comment + "\n"
compliance_row = Check_Output_MITRE_ATTACK(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Name=requirement_name,
Requirements_Tactics=unroll_list(requirement.Tactics),
Requirements_SubTechniques=unroll_list(
requirement.SubTechniques
),
Requirements_Platforms=unroll_list(requirement.Platforms),
Requirements_TechniqueURL=requirement.TechniqueURL,
Requirements_Attributes_AWSServices=attributes_aws_services,
Requirements_Attributes_Categories=attributes_categories,
Requirements_Attributes_Values=attributes_values,
Requirements_Attributes_Comments=attributes_comments,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(Check_Output_MITRE_ATTACK)
else:
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
if compliance_output in output_options.output_modes:
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_Generic_Compliance(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_SubSection=attribute.SubSection,
Requirements_Attributes_SubGroup=attribute.SubGroup,
Requirements_Attributes_Service=attribute.Service,
Requirements_Attributes_Soc_Type=attribute.Soc_Type,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(
Check_Output_CSV_Generic_Compliance
)
if compliance_row:
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
csv_writer.writerow(compliance_row.__dict__)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
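Every framework branch above derives the output-mode key the same way: the framework name, then `_Version` and `_Provider` when non-empty, lowercased with dashes turned into underscores. A minimal sketch of that normalization (the helper name is ours, not Prowler's):

```python
def compliance_output_key(framework, version="", provider=""):
    """Normalize a framework/version/provider triple into the output-mode key."""
    key = framework
    if version != "":
        key += "_" + version
    if provider != "":
        key += "_" + provider
    return key.lower().replace("-", "_")

# compliance_output_key("ENS", "RD2022", "AWS") -> "ens_rd2022_aws"
```

This is why `--compliance ens_rd2022_aws` matches the ENS/RD2022/AWS framework metadata later in the file.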
def display_compliance_table(
findings: list,
bulk_checks_metadata: dict,
compliance_framework: str,
output_filename: str,
output_directory: str,
):
try:
if "ens_rd2022_aws" == compliance_framework:
marcos = {}
ens_compliance_table = {
"Proveedor": [],
"Marco/Categoria": [],
"Estado": [],
"Alto": [],
"Medio": [],
"Bajo": [],
"Opcional": [],
}
pass_count = fail_count = 0
for finding in findings:
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if (
compliance.Framework == "ENS"
and compliance.Provider == "AWS"
and compliance.Version == "RD2022"
):
compliance_version = compliance.Version
compliance_fm = compliance.Framework
compliance_provider = compliance.Provider
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
marco_categoria = (
f"{attribute.Marco}/{attribute.Categoria}"
)
# Check if Marco/Categoria exists
if marco_categoria not in marcos:
marcos[marco_categoria] = {
"Estado": f"{Fore.GREEN}CUMPLE{Style.RESET_ALL}",
"Opcional": 0,
"Alto": 0,
"Medio": 0,
"Bajo": 0,
}
if finding.status == "FAIL":
if attribute.Tipo != "recomendacion":
fail_count += 1
marcos[marco_categoria][
"Estado"
] = f"{Fore.RED}NO CUMPLE{Style.RESET_ALL}"
elif finding.status == "PASS":
pass_count += 1
if attribute.Nivel == "opcional":
marcos[marco_categoria]["Opcional"] += 1
elif attribute.Nivel == "alto":
marcos[marco_categoria]["Alto"] += 1
elif attribute.Nivel == "medio":
marcos[marco_categoria]["Medio"] += 1
elif attribute.Nivel == "bajo":
marcos[marco_categoria]["Bajo"] += 1
# Add results to table
for marco in sorted(marcos):
ens_compliance_table["Proveedor"].append(compliance.Provider)
ens_compliance_table["Marco/Categoria"].append(marco)
ens_compliance_table["Estado"].append(marcos[marco]["Estado"])
ens_compliance_table["Opcional"].append(
f"{Fore.BLUE}{marcos[marco]['Opcional']}{Style.RESET_ALL}"
)
ens_compliance_table["Alto"].append(
f"{Fore.LIGHTRED_EX}{marcos[marco]['Alto']}{Style.RESET_ALL}"
)
ens_compliance_table["Medio"].append(
f"{orange_color}{marcos[marco]['Medio']}{Style.RESET_ALL}"
)
ens_compliance_table["Bajo"].append(
f"{Fore.YELLOW}{marcos[marco]['Bajo']}{Style.RESET_ALL}"
)
if fail_count + pass_count < 1:
print(
f"\n {Style.BRIGHT}There are no resources for {Fore.YELLOW}{compliance_fm} {compliance_version} - {compliance_provider}{Style.RESET_ALL}.\n"
)
else:
print(
f"\nEstado de Cumplimiento de {Fore.YELLOW}{compliance_fm} {compliance_version} - {compliance_provider}{Style.RESET_ALL}:"
)
overview_table = [
[
f"{Fore.RED}{round(fail_count/(fail_count+pass_count)*100, 2)}% ({fail_count}) NO CUMPLE{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count/(fail_count+pass_count)*100, 2)}% ({pass_count}) CUMPLE{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
print(
f"\nResultados de {Fore.YELLOW}{compliance_fm} {compliance_version} - {compliance_provider}{Style.RESET_ALL}:"
)
print(
tabulate(
ens_compliance_table, headers="keys", tablefmt="rounded_grid"
)
)
print(
f"{Style.BRIGHT}* Solo aparece el Marco/Categoria que contiene resultados.{Style.RESET_ALL}"
)
print(f"\nResultados detallados de {compliance_fm} en:")
print(
f" - CSV: {output_directory}/{output_filename}_{compliance_framework}.csv\n"
)
elif "cis_" in compliance_framework:
sections = {}
cis_compliance_table = {
"Provider": [],
"Section": [],
"Level 1": [],
"Level 2": [],
}
pass_count = fail_count = 0
for finding in findings:
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if (
compliance.Framework == "CIS"
and compliance.Version in compliance_framework
):
compliance_version = compliance.Version
compliance_fm = compliance.Framework
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
section = attribute.Section
# Check if Section exists
if section not in sections:
sections[section] = {
"Status": f"{Fore.GREEN}PASS{Style.RESET_ALL}",
"Level 1": {"FAIL": 0, "PASS": 0},
"Level 2": {"FAIL": 0, "PASS": 0},
}
if finding.status == "FAIL":
fail_count += 1
elif finding.status == "PASS":
pass_count += 1
if attribute.Profile == "Level 1":
if finding.status == "FAIL":
sections[section]["Level 1"]["FAIL"] += 1
else:
sections[section]["Level 1"]["PASS"] += 1
elif attribute.Profile == "Level 2":
if finding.status == "FAIL":
sections[section]["Level 2"]["FAIL"] += 1
else:
sections[section]["Level 2"]["PASS"] += 1
# Add results to table
sections = dict(sorted(sections.items()))
for section in sections:
cis_compliance_table["Provider"].append(compliance.Provider)
cis_compliance_table["Section"].append(section)
if sections[section]["Level 1"]["FAIL"] > 0:
cis_compliance_table["Level 1"].append(
f"{Fore.RED}FAIL({sections[section]['Level 1']['FAIL']}){Style.RESET_ALL}"
)
else:
cis_compliance_table["Level 1"].append(
f"{Fore.GREEN}PASS({sections[section]['Level 1']['PASS']}){Style.RESET_ALL}"
)
if sections[section]["Level 2"]["FAIL"] > 0:
cis_compliance_table["Level 2"].append(
f"{Fore.RED}FAIL({sections[section]['Level 2']['FAIL']}){Style.RESET_ALL}"
)
else:
cis_compliance_table["Level 2"].append(
f"{Fore.GREEN}PASS({sections[section]['Level 2']['PASS']}){Style.RESET_ALL}"
)
if fail_count + pass_count < 1:
print(
f"\n {Style.BRIGHT}There are no resources for {Fore.YELLOW}{compliance_fm}-{compliance_version}{Style.RESET_ALL}.\n"
)
else:
print(
f"\nCompliance Status of {Fore.YELLOW}{compliance_fm}-{compliance_version}{Style.RESET_ALL} Framework:"
)
overview_table = [
[
f"{Fore.RED}{round(fail_count/(fail_count+pass_count)*100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count/(fail_count+pass_count)*100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
print(
f"\nFramework {Fore.YELLOW}{compliance_fm}-{compliance_version}{Style.RESET_ALL} Results:"
)
print(
tabulate(
cis_compliance_table, headers="keys", tablefmt="rounded_grid"
)
)
print(
f"{Style.BRIGHT}* Only sections containing results appear.{Style.RESET_ALL}"
)
print(f"\nDetailed results of {compliance_fm} are in:")
print(
f" - CSV: {output_directory}/{output_filename}_{compliance_framework}.csv\n"
)
elif "mitre_attack" in compliance_framework:
tactics = {}
mitre_compliance_table = {
"Provider": [],
"Tactic": [],
"Status": [],
}
pass_count = fail_count = 0
for finding in findings:
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if (
"MITRE-ATTACK" in compliance.Framework
and compliance.Version in compliance_framework
):
compliance_fm = compliance.Framework
for requirement in compliance.Requirements:
for tactic in requirement.Tactics:
if tactic not in tactics:
tactics[tactic] = {"FAIL": 0, "PASS": 0}
if finding.status == "FAIL":
fail_count += 1
tactics[tactic]["FAIL"] += 1
elif finding.status == "PASS":
pass_count += 1
tactics[tactic]["PASS"] += 1
# Add results to table
tactics = dict(sorted(tactics.items()))
for tactic in tactics:
mitre_compliance_table["Provider"].append(compliance.Provider)
mitre_compliance_table["Tactic"].append(tactic)
if tactics[tactic]["FAIL"] > 0:
mitre_compliance_table["Status"].append(
f"{Fore.RED}FAIL({tactics[tactic]['FAIL']}){Style.RESET_ALL}"
)
else:
mitre_compliance_table["Status"].append(
f"{Fore.GREEN}PASS({tactics[tactic]['PASS']}){Style.RESET_ALL}"
)
if fail_count + pass_count < 1:
print(
f"\n {Style.BRIGHT}There are no resources for {Fore.YELLOW}{compliance_fm}{Style.RESET_ALL}.\n"
)
else:
print(
f"\nCompliance Status of {Fore.YELLOW}{compliance_fm}{Style.RESET_ALL} Framework:"
)
overview_table = [
[
f"{Fore.RED}{round(fail_count/(fail_count+pass_count)*100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count/(fail_count+pass_count)*100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
print(
f"\nFramework {Fore.YELLOW}{compliance_fm}{Style.RESET_ALL} Results:"
)
print(
tabulate(
mitre_compliance_table, headers="keys", tablefmt="rounded_grid"
)
)
print(
f"{Style.BRIGHT}* Only sections containing results appear.{Style.RESET_ALL}"
)
print(f"\nDetailed results of {compliance_fm} are in:")
print(
f" - CSV: {output_directory}/{output_filename}_{compliance_framework}.csv\n"
)
else:
print(f"\nDetailed results of {compliance_framework.upper()} are in:")
print(
f" - CSV: {output_directory}/{output_filename}_{compliance_framework}.csv\n"
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"
)
sys.exit(1)
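The ENS branch above groups findings per `Marco/Categoria` and flips a group to `NO CUMPLE` as soon as any FAIL arrives whose `Tipo` is not `recomendacion`. The grouping rule can be sketched standalone (tuples stand in for Prowler's finding and attribute objects; the function name is ours):

```python
def estado_por_marco(findings):
    """Group ENS results by 'Marco/Categoria'; one non-recommendation FAIL marks NO CUMPLE.

    `findings` is a list of (marco_categoria, status, tipo) tuples.
    """
    marcos = {}
    for marco, status, tipo in findings:
        # Every group starts compliant ("CUMPLE")
        marcos.setdefault(marco, "CUMPLE")
        # A FAIL only breaks compliance when it is not a recommendation
        if status == "FAIL" and tipo != "recomendacion":
            marcos[marco] = "NO CUMPLE"
    return marcos
```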


@@ -0,0 +1,485 @@
import os
import pathlib
from datetime import timedelta
from time import time
from rich.align import Align
from rich.console import Console, Group
from rich.layout import Layout
from rich.live import Live
from rich.padding import Padding
from rich.panel import Panel
from rich.progress import (
BarColumn,
MofNCompleteColumn,
Progress,
TextColumn,
TimeElapsedColumn,
TimeRemainingColumn,
)
from rich.rule import Rule
from rich.table import Table
from rich.text import Text
from rich.theme import Theme
from prowler.config.config import prowler_version, timestamp
from prowler.providers.aws.models import AWSIdentityInfo, AWSAssumeRole
# Defines a subclass of Live for creating and managing the live display in the CLI
class LiveDisplay(Live):
def __init__(self, *args, **kwargs):
# Load a theme for the console display from a file
theme = self.load_theme_from_file()
super().__init__(renderable=None, console=Console(theme=theme), *args, **kwargs)
self.sections = {} # Stores different sections of the layout
self.enabled = False # Flag to enable or disable the live display
# Sets up the layout of the live display
def make_layout(self):
"""
Defines the layout.
Sections start invisible so the default Layout metadata doesn't show before content is added.
Text(" ") stops the layout metadata from rendering before the layout is updated with real content.
client_and_service handles client init (when importing clients) and service check execution
"""
self.layout = Layout(name="root")
# Split layout into intro, overall progress, and main sections
self.layout.split(
Layout(name="intro", ratio=3, minimum_size=9),
Layout(Text(" "), name="overall_progress", minimum_size=5),
Layout(name="main", ratio=10),
)
# Further split intro layout into body and creds sections
self.layout["intro"].split_row(
Layout(name="body", ratio=3),
Layout(name="creds", ratio=2, visible=False),
)
# Split main layout into client_and_service and results sections
self.layout["main"].split_row(
Layout(
Text(" "), name="client_and_service", ratio=3
), # For client_init and service
Layout(name="results", ratio=2, visible=False),
)
# Loads a theme from a YAML file located in the same directory as this file
def load_theme_from_file(self):
# Loads theme.yaml from the same folder as this file
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
with open(f"{actual_directory}/theme.yaml") as f:
theme = Theme.from_file(f)
return theme
# Initializes the layout and sections based on CLI arguments
def initialize(self, args):
# A workaround for passing args to LiveDisplay after it has been initialized
# This lets the live_display object be initialized in this file and imported into other parts of prowler
self.cli_args = args
self.enabled = not args.only_logs
if self.enabled:
# Initialize layout
self.make_layout()
# Apply layout
self.update(self.layout)
# Add Intro section
intro_layout = self.layout["intro"]
intro_section = IntroSection(args, intro_layout)
self.sections["intro"] = intro_section
# Start live display
self.start()
# Adds AWS credentials to the display
def print_aws_credentials(self, aws_identity_info: AWSIdentityInfo, assumed_role_info: AWSAssumeRole):
# Adds the AWS credentials to the display - will need to extend to gcp and azure
# Create a new function for gcp and azure in this class, that will call a function in the intro_section class
intro_section = self.sections["intro"]
intro_section.add_aws_credentials(aws_identity_info, assumed_role_info)
# Adds and manages the overall progress section
def add_overall_progress_section(self, total_checks_dict):
overall_progress_section = OverallProgressSection(total_checks_dict)
overall_progress_layout = self.layout["overall_progress"]
overall_progress_layout.update(overall_progress_section)
overall_progress_layout.visible = True
self.sections["overall_progress"] = overall_progress_section
# Add results section
self.add_results_section()
# Wrapper function to increment the overall progress
def increment_overall_check_progress(self):
# Called by ExecutionManager
if self.enabled:
section = self.sections["overall_progress"]
section.increment_check_progress()
# Wrapper function to increment the progress for the current service
def increment_overall_service_progress(self):
# Called by ExecutionManager
if self.enabled:
section = self.sections["overall_progress"]
section.increment_service_progress()
# Adds and manages the results section
def add_results_section(self):
# Initializes the results section
results_layout = self.layout["results"]
results_section = ResultsSection()
results_layout.update(results_section)
results_layout.visible = True
self.sections["results"] = results_section
def add_results_for_service(self, service_name, service_findings):
# Adds rows to the Service Check Results table
if self.enabled:
results_section = self.sections["results"]
results_section.add_results_for_service(service_name, service_findings)
# Client Init Section
def add_client_init_section(self, service_name):
# Used to track progress of client init process
if self.enabled:
client_init_section = ClientInitSection(service_name)
self.sections["client_and_service"] = client_init_section
self.layout["client_and_service"].update(client_init_section)
self.layout["client_and_service"].visible = True
# Service Section
def add_service_section(self, service_name, total_checks):
# Used to create the ServiceSection when checks start to execute (after clients have been imported)
if self.enabled:
service_section = ServiceSection(service_name, total_checks)
self.sections["client_and_service"] = service_section
self.layout["client_and_service"].update(service_section)
def increment_check_progress(self):
if self.enabled:
service_section = self.sections["client_and_service"]
service_section.increment_check_progress()
# Misc
def get_service_section(self):
# Used by Check
if self.enabled:
return self.sections["client_and_service"]
def get_client_init_section(self):
# Used by AWSService
if self.enabled:
return self.sections["client_and_service"]
def hide_service_section(self):
# To hide the last service after execution has completed
self.layout["client_and_service"].visible = False
def print_message(self, message):
# Not used yet
self.console.print(message)
# The following classes (ServiceSection, ClientInitSection, IntroSection, OverallProgressSection, ResultsSection)
# are used to define different sections of the live display, each with its own layout, progress bars,
class ServiceSection:
def __init__(self, service_name, total_checks) -> None:
self.service_name = service_name
self.total_checks = total_checks
self.renderables = self.create_service_section()
self.start_check_progress()
def __rich__(self):
return Padding(self.renderables, (2, 2))
def create_service_section(self):
# Create the progress components
self.check_progress = Progress(
TextColumn("[bold]{task.description}"),
BarColumn(bar_width=None),
MofNCompleteColumn(),
transient=False, # Optional: set True if you want the progress bar to disappear after completion
)
# Used to add titles that don't need progress bars
self.title_bar = Progress(
TextColumn("[progress.description]{task.description}"), transient=True
)
# Progress Bar for Service Init and Checks
self.task_progress = Progress(
TextColumn("[progress.description]{task.description}"),
BarColumn(bar_width=None),
MofNCompleteColumn(),
TimeElapsedColumn(),
TimeRemainingColumn(),
transient=True,
)
return Group(
Panel(
Group(
self.check_progress,
Rule(style="bold blue"),
self.title_bar,
Rule(style="bold blue"),
self.task_progress,
),
title=f"Service: {self.service_name}",
),
)
def start_check_progress(self):
self.check_progress_task_id = self.check_progress.add_task(
"Checks executed", total=self.total_checks
)
def increment_check_progress(self):
self.check_progress.update(self.check_progress_task_id, advance=1)
class ClientInitSection:
def __init__(self, client_name) -> None:
self.client_name = client_name
self.renderables = self.create_client_init_section()
def __rich__(self):
return Padding(self.renderables, (2, 2))
def create_client_init_section(self):
# Progress Bar for Checks
self.task_progress_bar = Progress(
TextColumn("[progress.description]{task.description}"),
BarColumn(bar_width=None),
MofNCompleteColumn(),
TimeElapsedColumn(),
TimeRemainingColumn(),
transient=True,
)
return Group(
Panel(
Group(
self.task_progress_bar,
),
title=f"Initializing {self.client_name.replace('_', ' ')}",
),
)
class IntroSection:
def __init__(self, args, layout: Layout) -> None:
self.body_layout = layout["body"]
self.creds_layout = layout["creds"]
self.renderables = []
self.title = f"Prowler v{prowler_version}"
if not args.no_banner:
self.create_banner(args)
def __rich__(self):
return Group(*self.renderables)
def create_banner(self, args):
banner_text = f"""[banner_color] _
_ __ _ __ _____ _| | ___ _ __
| '_ \| '__/ _ \ \ /\ / / |/ _ \ '__|
| |_) | | | (_) \ V V /| | __/ |
| .__/|_| \___/ \_/\_/ |_|\___|_|v{prowler_version}
|_|[/banner_color][banner_blue]the handy cloud security tool[/banner_blue]
[info]Date: {timestamp.strftime('%Y-%m-%d %H:%M:%S')}[/info]
"""
if args.verbose:
banner_text += """
Color code for results:
- [info]INFO (Information)[/info]
- [pass]PASS (Recommended value)[/pass]
- [orange_color]WARNING (Ignored by mutelist)[/orange_color]
- [fail]FAIL (Fix required)[/fail]
"""
self.renderables.append(banner_text)
self.body_layout.update(Group(*self.renderables))
self.body_layout.visible = True
def add_aws_credentials(self, aws_identity_info: AWSIdentityInfo, assumed_role_info: AWSAssumeRole):
# Beautify audited regions, and set to "all" if there is no filter region
regions = (
", ".join(aws_identity_info.audited_regions)
if aws_identity_info.audited_regions is not None
else "all"
)
# Beautify audited profile, and set to "default" if there is no profile set
profile = aws_identity_info.profile if aws_identity_info.profile is not None else "default"
content = Text()
content.append(
"This report is being generated using credentials below:\n\n", style="bold"
)
content.append("AWS-CLI Profile: ", style="bold")
content.append(f"[{profile}]\n", style="info")
content.append("AWS Filter Region: ", style="bold")
content.append(f"[{regions}]\n", style="info")
content.append("AWS Account: ", style="bold")
content.append(f"[{aws_identity_info.account}]\n", style="info")
content.append("UserId: ", style="bold")
content.append(f"[{aws_identity_info.user_id}]\n", style="info")
content.append("Caller Identity ARN: ", style="bold")
content.append(f"[{aws_identity_info.identity_arn}]\n", style="info")
# If a role has been assumed, print the Assumed Role ARN
if assumed_role_info.role_arn is not None:
content.append("Assumed Role ARN: ", style="bold")
content.append(f"[{assumed_role_info.role_arn}]\n", style="info")
self.creds_layout.update(content)
self.creds_layout.visible = True
class OverallProgressSection:
def __init__(self, total_checks_dict: dict) -> None:
self.start_time = time() # Start the timer
self.renderables = self.create_renderable(total_checks_dict)
def __rich__(self):
elapsed_time = self.total_time_taken()
return Group(*self.renderables, f"Total time taken: {elapsed_time}")
def total_time_taken(self):
elapsed_seconds = int(time() - self.start_time)
elapsed_time = timedelta(seconds=elapsed_seconds)
return elapsed_time
def create_renderable(self, total_checks_dict):
services_num = len(total_checks_dict) # number of keys == number of services
checks_num = sum(total_checks_dict.values())
plural_string = "checks"
singular_string = "check"
check_noun = plural_string if checks_num > 1 else singular_string
# Create the progress bar
self.overall_progress_bar = Progress(
TextColumn("[bold]{task.description}"),
BarColumn(bar_width=None),
MofNCompleteColumn(),
transient=False, # Optional: set True if you want the progress bar to disappear after completion
)
# Create the Services Completed task, to track the number of services completed
self.service_progress_task_id = self.overall_progress_bar.add_task(
"Services completed", total=services_num
)
# Create the Checks Completed task, to track the number of checks completed across all services
self.check_progress_task_id = self.overall_progress_bar.add_task(
"Checks executed", total=checks_num
)
content = Text()
content.append(
f"Executing {checks_num} {check_noun} across {services_num} services, please wait...\n",
style="bold",
)
return [content, self.overall_progress_bar]
def increment_check_progress(self):
self.overall_progress_bar.update(self.check_progress_task_id, advance=1)
def increment_service_progress(self):
self.overall_progress_bar.update(self.service_progress_task_id, advance=1)
class ResultsSection:
def __init__(self, verbose=True):
self.verbose = verbose
self.table = Table(title="Service Check Results")
self.table.add_column("Service", justify="left")
if self.verbose:
self.severities = ["critical", "high", "medium", "low"]
# Add columns for each severity level when verbose, reporting the count of fails per severity per service
for severity in self.severities:
styled_header = (
f"[{severity.lower()}]{severity.capitalize()}[/{severity.lower()}]"
)
self.table.add_column(styled_header, justify="center")
else:
# Dynamically track the statuses, reporting the status counts for each service
self.status_columns = {"PASS", "FAIL"}
self.service_findings = {}  # Dictionary to store findings for each service
# Dictionary to map plain statuses to their stylized forms
self.status_headers = {
"FAIL": "[fail]Fail[/fail]",
"PASS": "[pass]Pass[/pass]",
}
# Add the initial columns with styling
for status, header in self.status_headers.items():
self.table.add_column(header, justify="center")
def add_results_for_service(self, service_name, service_findings):
if self.verbose:
# Count fails per severity
severity_counts = {severity: 0 for severity in self.severities}
for finding in service_findings:
if finding.status == "FAIL":
severity_counts[finding.check_metadata.Severity] += 1
# Add row with severity counts
row = [service_name] + [
str(severity_counts[severity]) for severity in self.severities
]
self.table.add_row(*row)
else:
# Update the dictionary with the new findings
status_counts = {report.status: 0 for report in service_findings}
for report in service_findings:
status_counts[report.status] += 1
self.service_findings[service_name] = status_counts
# Update status_columns and table columns
self.status_columns.update(status_counts.keys())
for status in self.status_columns:
if status not in self.status_headers:
# [{status.lower()}] is for the styling (defined in theme.yaml)
# If new status, add it to status_headers and table
styled_header = (
f"[{status.lower()}]{status.capitalize()}[/{status.lower()}]"
)
self.status_headers[status] = styled_header
self.table.add_column(styled_header, justify="center")
# Update the table with findings for all services
self._update_table()
def _update_table(self):
# Used for when verbose = false
# Clear existing rows
self.table.rows.clear()
# Add updated rows for all services
for service, counts in self.service_findings.items():
row = [service]
total = sum(counts.values())
for status in self.status_columns:
count = counts.get(status, 0)
percentage = f"{(count / total * 100):.2f}%" if total else "0%"
row.append(f"{count} ({percentage})")
self.table.add_row(*row)
def __rich__(self):
# This method allows the ResultsSection to be directly rendered by Rich
if not self.table.rows:
return Text("")
return Padding(Align.center(self.table), (0, 2))
# Create an instance of LiveDisplay to import elsewhere (ExecutionManager, the checks, the services)
live_display = LiveDisplay(vertical_overflow="visible")

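In the non-verbose path, `ResultsSection` reduces a service's findings to per-status counts and renders each cell as `count (percentage)`. The same aggregation, sketched without rich (the function name and list inputs are ours, not Prowler's):

```python
from collections import Counter

def status_row(service, statuses, columns):
    """Build one results-table row: service name plus 'count (pct%)' per status column."""
    counts = Counter(statuses)
    total = sum(counts.values())
    row = [service]
    for status in columns:
        count = counts.get(status, 0)
        # Guard against a service with no findings to avoid dividing by zero
        pct = f"{count / total * 100:.2f}%" if total else "0%"
        row.append(f"{count} ({pct})")
    return row

# status_row("s3", ["PASS", "PASS", "FAIL"], ["PASS", "FAIL"])
# -> ["s3", "2 (66.67%)", "1 (33.33%)"]
```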
prowler/lib/ui/theme.yaml Normal file

@@ -0,0 +1,16 @@
[styles]
info = yellow1
warning = dark_orange
fail = bold red
pass = bold green
banner_blue = dodger_blue3 bold
banner_color = bold green
orange_color = dark_orange
critical = bold bright_red
high = bold red
medium = bold dark_orange
low = bold yellow1
# style names must be lower case, start with a letter, and only contain letters or the characters ".", "-", "_".
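Despite the `.yaml` extension, rich's `Theme.from_file` reads this file with an INI-style parser, so the `[styles]` section above is configparser syntax. The naming rule in the trailing comment can be checked with the standard library alone (a sketch; the validator and the miniature theme are ours, not part of Prowler or rich):

```python
import configparser
import re

# A miniature of the [styles] section above (values are rich style strings).
THEME = """\
[styles]
info = yellow1
fail = bold red
pass = bold green
"""

# The rule from the theme file's comment: lower case, starts with a letter,
# only letters or ".", "-", "_" afterwards.
NAME_RE = re.compile(r"^[a-z][a-z.\-_]*$")

def invalid_style_names(text):
    """Return the style names that break the naming rule."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return [name for name in parser["styles"] if not NAME_RE.match(name)]
```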


@@ -152,23 +152,31 @@ def input_role_mfa_token_and_code() -> tuple[str]:
def generate_regional_clients(
service: str, audit_info: AWS_Audit_Info, global_service: bool = False
service: str,
audit_info: AWS_Audit_Info,
) -> dict:
"""generate_regional_clients returns a dict with the following format for the given service:
Example:
{"eu-west-1": boto3_service_client}
"""
try:
regional_clients = {}
service_regions = get_available_aws_service_regions(service, audit_info)
# Check if it is global service to gather only one region
if global_service:
if service_regions:
if audit_info.profile_region in service_regions:
service_regions = [audit_info.profile_region]
service_regions = service_regions[:1]
for region in service_regions:
# Get the regions enabled for the account and get the intersection with the service available regions
if audit_info.enabled_regions:
enabled_regions = service_regions.intersection(audit_info.enabled_regions)
else:
enabled_regions = service_regions
for region in enabled_regions:
regional_client = audit_info.audit_session.client(
service, region_name=region, config=audit_info.session_config
)
regional_client.region = region
regional_clients[region] = regional_client
return regional_clients
except Exception as error:
logger.error(
@@ -176,6 +184,22 @@ def generate_regional_clients(
)
def get_aws_enabled_regions(audit_info: AWS_Audit_Info) -> set:
"""get_aws_enabled_regions returns a set of enabled AWS regions"""
# EC2 Client to check enabled regions
service = "ec2"
default_region = get_default_region(service, audit_info)
ec2_client = audit_info.audit_session.client(service, region_name=default_region)
enabled_regions = set()
# With AllRegions=False we only get the enabled regions for the account
for region in ec2_client.describe_regions(AllRegions=False).get("Regions", []):
enabled_regions.add(region.get("RegionName"))
return enabled_regions
def get_aws_available_regions():
try:
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
@@ -216,6 +240,8 @@ def get_checks_from_input_arn(audit_resources: list, provider: str) -> set:
service = "efs"
elif service == "logs":
service = "cloudwatch"
elif service == "cognito":
service = "cognito-idp"
# Check if Prowler has checks in service
try:
list_modules(provider, service)
@@ -267,17 +293,18 @@ def get_regions_from_audit_resources(audit_resources: list) -> set:
return audited_regions
def get_available_aws_service_regions(service: str, audit_info: AWS_Audit_Info) -> list:
def get_available_aws_service_regions(service: str, audit_info: AWS_Audit_Info) -> set:
# Get json locally
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
with open_file(f"{actual_directory}/{aws_services_json_file}") as f:
data = parse_json_file(f)
# Check if it is a subservice
json_regions = data["services"][service]["regions"][audit_info.audited_partition]
if audit_info.audited_regions: # Check for input aws audit_info.audited_regions
regions = list(
set(json_regions).intersection(audit_info.audited_regions)
) # Get common regions between input and json
json_regions = set(
data["services"][service]["regions"][audit_info.audited_partition]
)
# Check for input aws audit_info.audited_regions
if audit_info.audited_regions:
# Get common regions between input and json
regions = json_regions.intersection(audit_info.audited_regions)
else: # Get all regions from json of the service and partition
regions = json_regions
return regions
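The region filtering above reduces to a set intersection; with hypothetical data (the region values are illustrative, not from the bundled JSON):

```python
# Hypothetical data illustrating the filtering above: the service's regions
# from the bundled JSON intersected with the regions requested on input.
json_regions = {"eu-west-1", "us-east-1", "us-west-2"}
audited_regions = ["us-east-1", "ap-south-1"]
if audited_regions:
    # Get common regions between input and json
    regions = json_regions.intersection(audited_regions)
else:
    # Get all regions from json of the service and partition
    regions = json_regions
print(sorted(regions))  # ['us-east-1']
```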

View File

@@ -2,8 +2,6 @@ import os
import pathlib
import sys
from argparse import Namespace
from dataclasses import dataclass
from datetime import datetime
from typing import Any, Optional
from boto3 import client, session
@@ -14,12 +12,21 @@ from colorama import Fore, Style
from prowler.config.config import aws_services_json_file
from prowler.lib.check.check import list_modules, recover_checks_from_service
from prowler.lib.ui.live_display import live_display
from prowler.lib.logger import logger
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.aws.config import (
AWS_STS_GLOBAL_ENDPOINT_REGION,
BOTO3_USER_AGENT_EXTRA,
)
from prowler.providers.aws.models import (
AWSOrganizationsInfo,
AWSCredentials,
AWSAssumeRole,
AWSAssumeRoleConfiguration,
AWSIdentityInfo,
AWSSession,
)
from prowler.providers.aws.lib.arn.arn import parse_iam_credentials_arn
from prowler.providers.aws.lib.credentials.credentials import (
create_sts_session,
@@ -30,57 +37,6 @@ from prowler.providers.aws.lib.organizations.organizations import (
)
from prowler.providers.common.provider import Provider
@dataclass
class AWSOrganizationsInfo:
account_details_email: str
account_details_name: str
account_details_arn: str
account_details_org: str
account_details_tags: str
@dataclass
class AWSCredentials:
aws_access_key_id: str
aws_session_token: str
aws_secret_access_key: str
expiration: datetime
@dataclass
class AWSAssumeRole:
role_arn: str
session_duration: int
external_id: str
mfa_enabled: bool
@dataclass
class AWSAssumeRoleConfiguration:
assumed_role_info: AWSAssumeRole
assumed_role_credentials: AWSCredentials
@dataclass
class AWSIdentityInfo:
account: str
account_arn: str
user_id: str
partition: str
identity_arn: str
profile: str
profile_region: str
audited_regions: list
@dataclass
class AWSSession:
session: session.Session
session_config: Config
original_session: None
class AwsProvider(Provider):
session: AWSSession = AWSSession(
session=None, session_config=None, original_session=None
@@ -328,45 +284,7 @@ class AwsProvider(Provider):
# This method is called by adding "()" to the name, so it cannot accept arguments
# https://github.com/boto/botocore/blob/098cc255f81a25b852e1ecdeb7adebd94c7b1b73/botocore/credentials.py#L570
def refresh_credentials(self):
logger.info("Refreshing assumed credentials...")
response = self.__assume_role__(self.aws_session, self.role_info)
refreshed_credentials = dict(
# Keys of the dict has to be the same as those that are being searched in the parent class
# https://github.com/boto/botocore/blob/098cc255f81a25b852e1ecdeb7adebd94c7b1b73/botocore/credentials.py#L609
access_key=response["Credentials"]["AccessKeyId"],
secret_key=response["Credentials"]["SecretAccessKey"],
token=response["Credentials"]["SessionToken"],
expiry_time=response["Credentials"]["Expiration"].isoformat(),
)
logger.info("Refreshed Credentials:")
logger.info(refreshed_credentials)
return refreshed_credentials
def print_credentials(self):
# Beautify audited regions, set "all" if there is no filter region
regions = (
", ".join(self.identity.audited_regions)
if self.identity.audited_regions is not None
else "all"
)
# Beautify audited profile, set "default" if there is no profile set
profile = (
self.identity.profile if self.identity.profile is not None else "default"
)
report = f"""
This report is being generated using credentials below:
AWS-CLI Profile: {Fore.YELLOW}[{profile}]{Style.RESET_ALL} AWS Filter Region: {Fore.YELLOW}[{regions}]{Style.RESET_ALL}
AWS Account: {Fore.YELLOW}[{self.identity.account}]{Style.RESET_ALL} UserId: {Fore.YELLOW}[{self.identity.user_id}]{Style.RESET_ALL}
Caller Identity ARN: {Fore.YELLOW}[{ self.identity.identity_arn}]{Style.RESET_ALL}
"""
# If -A is set, print Assumed Role ARN
if self.assumed_role.assumed_role_info.role_arn is not None:
report += f"""Assumed Role ARN: {Fore.YELLOW}[{self.assumed_role.assumed_role_info.role_arn}]{Style.RESET_ALL}
"""
print(report)
live_display.print_aws_credentials(self.identity, self.assumed_role.assumed_role_info)
def generate_regional_clients(
self, service: str, global_service: bool = False

View File

@@ -1061,6 +1061,17 @@
]
}
},
"b2bi": {
"regions": {
"aws": [
"us-east-1",
"us-east-2",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"backup": {
"regions": {
"aws": [
@@ -1481,6 +1492,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -1707,6 +1719,7 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
@@ -2183,15 +2196,49 @@
"aws-us-gov": []
}
},
"cognito-identity": {
"cognito": {
"regions": {
"aws": [
"af-south-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-south-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": [
"us-gov-west-1"
]
}
},
"cognito-identity": {
"regions": {
"aws": [
"af-south-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
@@ -2218,12 +2265,14 @@
"cognito-idp": {
"regions": {
"aws": [
"af-south-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-north-1",
@@ -2513,6 +2562,15 @@
]
}
},
"cost-optimization-hub": {
"regions": {
"aws": [
"us-east-1"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"costexplorer": {
"regions": {
"aws": [
@@ -2876,6 +2934,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
@@ -3000,7 +3059,10 @@
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
},
"ds": {
@@ -3391,6 +3453,42 @@
]
}
},
"eks-auth": {
"regions": {
"aws": [
"af-south-1",
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"elastic-inference": {
"regions": {
"aws": [
@@ -3682,6 +3780,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -4699,6 +4798,7 @@
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-central-2",
@@ -4708,6 +4808,7 @@
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
"us-east-1",
@@ -4806,6 +4907,40 @@
]
}
},
"inspector-scan": {
"regions": {
"aws": [
"af-south-1",
"ap-east-1",
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-3",
"ca-central-1",
"eu-central-1",
"eu-central-2",
"eu-north-1",
"eu-south-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-south-1",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
},
"inspector2": {
"regions": {
"aws": [
@@ -6270,14 +6405,17 @@
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-south-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-southeast-4",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"me-central-1",
"sa-east-1",
"us-east-1",
"us-east-2",
@@ -6708,6 +6846,7 @@
"eu-west-1",
"eu-west-2",
"eu-west-3",
"il-central-1",
"me-central-1",
"me-south-1",
"sa-east-1",
@@ -7428,7 +7567,10 @@
"us-west-1",
"us-west-2"
],
"aws-cn": [],
"aws-cn": [
"cn-north-1",
"cn-northwest-1"
],
"aws-us-gov": []
}
},
@@ -7853,6 +7995,20 @@
]
}
},
"redshift-serverless": {
"regions": {
"aws": [
"ap-south-1",
"ca-central-1",
"eu-west-3",
"us-west-1"
],
"aws-cn": [
"cn-north-1"
],
"aws-us-gov": []
}
},
"rekognition": {
"regions": {
"aws": [
@@ -7877,6 +8033,16 @@
]
}
},
"repostspace": {
"regions": {
"aws": [
"eu-central-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"resiliencehub": {
"regions": {
"aws": [
@@ -8181,7 +8347,10 @@
"cn-north-1",
"cn-northwest-1"
],
"aws-us-gov": []
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
]
}
},
"route53-recovery-readiness": {
@@ -9745,6 +9914,21 @@
]
}
},
"thinclient": {
"regions": {
"aws": [
"ap-south-1",
"ca-central-1",
"eu-central-1",
"eu-west-1",
"eu-west-2",
"us-east-1",
"us-west-2"
],
"aws-cn": [],
"aws-us-gov": []
}
},
"timestream": {
"regions": {
"aws": [
@@ -9782,10 +9966,14 @@
"tnb": {
"regions": {
"aws": [
"ap-northeast-2",
"ap-southeast-2",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-south-2",
"eu-west-3",
"sa-east-1",
"us-east-1",
"us-west-2"
],

View File

@@ -1,4 +1,5 @@
from argparse import ArgumentTypeError, Namespace
from re import search
from prowler.providers.aws.aws_provider import get_aws_available_regions
from prowler.providers.aws.lib.arn.arn import arn_type
@@ -78,6 +79,11 @@ def init_parser(self):
action="store_true",
help="Skip updating previous findings of Prowler in Security Hub",
)
aws_security_hub_subparser.add_argument(
"--send-sh-only-fails",
action="store_true",
help="Send only Prowler failed findings to SecurityHub",
)
# AWS Quick Inventory
aws_quick_inventory_subparser = aws_parser.add_argument_group("Quick Inventory")
aws_quick_inventory_subparser.add_argument(
@@ -93,6 +99,7 @@ def init_parser(self):
"-B",
"--output-bucket",
nargs="?",
type=validate_bucket,
default=None,
help="Custom output bucket, requires -M <mode> and it can work also with -o flag.",
)
@@ -100,6 +107,7 @@ def init_parser(self):
"-D",
"--output-bucket-no-assume",
nargs="?",
type=validate_bucket,
default=None,
help="Same as -B but do not use the assumed role credentials to put objects to the bucket, instead uses the initial credentials.",
)
@@ -179,3 +187,13 @@ def validate_arguments(arguments: Namespace) -> tuple[bool, str]:
return (False, "To use -I/-T options -R option is needed")
return (True, "")
def validate_bucket(bucket_name):
"""validate_bucket validates that the input bucket_name is valid"""
if search("(?!(^xn--|.+-s3alias$))^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$", bucket_name):
return bucket_name
else:
raise ArgumentTypeError(
"Bucket name must be valid (https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html)"
)
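For reference, a self-contained sketch of the new bucket-name validation (the regex mirrors the one added above; the bucket names and the shortened error message are illustrative):

```python
from argparse import ArgumentTypeError
from re import search

def validate_bucket(bucket_name):
    """validate_bucket validates that the input bucket_name is valid"""
    # Rejects xn-- prefixes, -s3alias suffixes, uppercase, and invalid lengths
    if search("(?!(^xn--|.+-s3alias$))^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$", bucket_name):
        return bucket_name
    raise ArgumentTypeError("Bucket name must be valid")

print(validate_bucket("my-prowler-output"))  # accepted and returned unchanged
try:
    validate_bucket("My_Bucket")  # uppercase and underscores are rejected
except ArgumentTypeError as error:
    print(error)
```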

View File

@@ -38,4 +38,5 @@ current_audit_info = AWS_Audit_Info(
audit_metadata=None,
audit_config=None,
ignore_unused_services=False,
enabled_regions=set(),
)

View File

@@ -1,4 +1,4 @@
from dataclasses import dataclass
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Optional
@@ -53,3 +53,4 @@ class AWS_Audit_Info:
audit_metadata: Optional[Any]
audit_config: Optional[dict] = None
ignore_unused_services: bool = False
enabled_regions: set = field(default_factory=set)

View File

@@ -1,8 +1,11 @@
def is_account_only_allowed_in_condition(
condition_statement: dict, source_account: str
def is_condition_block_restrictive(
condition_statement: dict, source_account: str, is_cross_account_allowed=False
):
"""
is_account_only_allowed_in_condition parses the IAM Condition policy block and returns True if the source_account passed as argument is within, False if not.
is_condition_block_restrictive parses the IAM Condition policy block and, by default, returns True if the source_account passed as argument is within it, False if not.
If the argument is_cross_account_allowed is True it tests if the Condition block includes any of the mutelisted operators, returning True if it does, False if not.
@param condition_statement: dict with an IAM Condition block, e.g.:
{
@@ -54,13 +57,19 @@ def is_account_only_allowed_in_condition(
condition_statement[condition_operator][value],
list,
):
# if there is an arn/account without the source account -> we do not consider it safe
# here by default we assume it is true and look for false entries
is_condition_key_restrictive = True
for item in condition_statement[condition_operator][value]:
if source_account not in item:
is_condition_key_restrictive = False
break
# if cross account is not allowed check for each condition block looking for accounts
# different than default
if not is_cross_account_allowed:
# if there is an arn/account without the source account -> we do not consider it safe
# here by default we assume it is true and look for false entries
for item in condition_statement[condition_operator][value]:
if source_account not in item:
is_condition_key_restrictive = False
break
if is_condition_key_restrictive:
is_condition_valid = True
if is_condition_key_restrictive:
is_condition_valid = True
@@ -70,10 +79,13 @@ def is_account_only_allowed_in_condition(
condition_statement[condition_operator][value],
str,
):
if (
source_account
in condition_statement[condition_operator][value]
):
if is_cross_account_allowed:
is_condition_valid = True
else:
if (
source_account
in condition_statement[condition_operator][value]
):
is_condition_valid = True
return is_condition_valid
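The behavior described in the docstring can be condensed into a simplified, hypothetical sketch (it flattens the operator/key iteration above and is not the full implementation):

```python
# Simplified sketch: a Condition block is considered restrictive when every
# referenced value contains source_account, or, when cross-account access is
# allowed, whenever any account-scoped values exist at all.
def condition_block_restrictive(condition_statement, source_account, is_cross_account_allowed=False):
    is_condition_valid = False
    for operator_values in condition_statement.values():
        for value in operator_values.values():
            values = value if isinstance(value, list) else [value]
            if is_cross_account_allowed:
                is_condition_valid = True
            elif all(source_account in item for item in values):
                is_condition_valid = True
    return is_condition_valid

block = {"StringEquals": {"aws:SourceAccount": ["111122223333"]}}
print(condition_block_restrictive(block, "111122223333"))  # True
print(condition_block_restrictive(block, "999988887777"))  # False
print(condition_block_restrictive(block, "999988887777", is_cross_account_allowed=True))  # True
```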

View File

@@ -1,5 +1,3 @@
import sys
from prowler.config.config import (
csv_file_suffix,
html_file_suffix,
@@ -41,10 +39,9 @@ def send_to_s3_bucket(
s3_client.upload_file(file_name, output_bucket_name, object_name)
except Exception as error:
logger.critical(
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def get_s3_object_path(output_directory: str) -> str:

View File

@@ -1,13 +1,24 @@
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed
from functools import wraps
from prowler.lib.logger import logger
from prowler.lib.ui.live_display import live_display
from prowler.providers.aws.aws_provider import (
generate_regional_clients,
get_default_region,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.aws_provider_new import AwsProvider
MAX_WORKERS = 10
class AWSService:
"""The AWSService class offers a parent class for each AWS Service to generate:
- AWS Regional Clients
- Shared information like the account ID and ARN, the AWS partition and the checks audited
- AWS Session
- Thread pool for the __threading_call__
- Also handles whether the AWS Service is Global
"""
@@ -40,14 +51,95 @@ class AWSService:
self.region = provider.get_default_region(self.service)
self.client = self.session.client(self.service, self.region)
# Thread pool for __threading_call__
self.thread_pool = ThreadPoolExecutor(max_workers=MAX_WORKERS)
self.live_display_enabled = False
# Progress bar to add tasks to
service_init_section = live_display.get_client_init_section()
if service_init_section:
# Only if Flags is not set to True
self.task_progress_bar = service_init_section.task_progress_bar
self.progress_tasks = []
# For use in other functions
self.live_display_enabled = True
def __get_session__(self):
return self.session
def __threading_call__(self, call):
threads = []
for regional_client in self.regional_clients.values():
threads.append(threading.Thread(target=call, args=(regional_client,)))
for t in threads:
t.start()
for t in threads:
t.join()
def __threading_call__(self, call, iterator=None, *args, **kwargs):
# Use the provided iterator, or default to self.regional_clients
items = iterator if iterator is not None else self.regional_clients.values()
# Determine the total count for logging
item_count = len(items)
# Trim leading and trailing underscores from the call's name
call_name = call.__name__.strip("_")
# Add Capitalization
call_name = " ".join([x.capitalize() for x in call_name.split("_")])
# Print a message based on the call's name, and whether it is regional or processing a list of items
if iterator is None:
logger.info(
f"{self.service.upper()} - Starting threads for '{call_name}' function across {item_count} regions..."
)
else:
logger.info(
f"{self.service.upper()} - Starting threads for '{call_name}' function to process {item_count} items..."
)
if self.live_display_enabled:
# Setup the progress bar
task_id = self.task_progress_bar.add_task(
f"- {call_name}...", total=item_count, task_type="Service"
)
self.progress_tasks.append(task_id)
# Submit tasks to the thread pool
futures = [
self.thread_pool.submit(call, item, *args, **kwargs) for item in items
]
# Wait for all tasks to complete
for future in as_completed(futures):
try:
future.result() # Raises exceptions from the thread, if any
if self.live_display_enabled:
# Update the progress bar
self.task_progress_bar.update(task_id, advance=1)
except Exception:
# Handle exceptions if necessary
pass # Replace 'pass' with any additional exception handling logic; exceptions are currently handled within the called function
# Make the task disappear once completed
# self.progress.remove_task(task_id)
@staticmethod
def progress_decorator(func):
"""
Decorator to update the progress bar before and after a function call.
To be used for methods within global services, which do not make use of the __threading_call__ function
"""
@wraps(func)
def wrapper(self, *args, **kwargs):
# Trim leading and trailing underscores from the call's name
func_name = func.__name__.strip("_")
# Add Capitalization
func_name = " ".join([x.capitalize() for x in func_name.split("_")])
if self.live_display_enabled:
task_id = self.task_progress_bar.add_task(
f"- {func_name}...", total=1, task_type="Service"
)
self.progress_tasks.append(task_id)
result = func(self, *args, **kwargs) # Execute the function
if self.live_display_enabled:
self.task_progress_bar.update(task_id, advance=1)
# self.task_progress_bar.remove_task(task_id) # Uncomment if you want to remove the task on completion
return result
return wrapper
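Stripped of the progress-bar plumbing, the fan-out pattern behind `__threading_call__` can be sketched as follows (the worker function and items are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Minimal sketch of the fan-out pattern used by __threading_call__:
# submit one task per item and drain them with as_completed.
def fan_out(call, items, max_workers=10):
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(call, item) for item in items]
        for future in as_completed(futures):
            try:
                # result() re-raises any exception from the worker thread
                results.append(future.result())
            except Exception:
                pass  # in the service class, errors are logged inside the worker
    return results

print(sorted(fan_out(str.upper, ["eu-west-1", "us-east-1"])))  # ['EU-WEST-1', 'US-EAST-1']
```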

View File

@@ -0,0 +1,54 @@
from dataclasses import dataclass
from datetime import datetime
from boto3 import session
from botocore.config import Config
@dataclass
class AWSOrganizationsInfo:
account_details_email: str
account_details_name: str
account_details_arn: str
account_details_org: str
account_details_tags: str
@dataclass
class AWSCredentials:
aws_access_key_id: str
aws_session_token: str
aws_secret_access_key: str
expiration: datetime
@dataclass
class AWSAssumeRole:
role_arn: str
session_duration: int
external_id: str
mfa_enabled: bool
@dataclass
class AWSAssumeRoleConfiguration:
assumed_role_info: AWSAssumeRole
assumed_role_credentials: AWSCredentials
@dataclass
class AWSIdentityInfo:
account: str
account_arn: str
user_id: str
partition: str
identity_arn: str
profile: str
profile_region: str
audited_regions: list
@dataclass
class AWSSession:
session: session.Session
session_config: Config
original_session: None

View File

@@ -85,21 +85,36 @@ class AccessAnalyzer(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# TODO: We need to include ListFindingsV2
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/accessanalyzer/client/list_findings_v2.html
def __list_findings__(self):
logger.info("AccessAnalyzer - Listing Findings per Analyzer...")
try:
for analyzer in self.analyzers:
if analyzer.status == "ACTIVE":
regional_client = self.regional_clients[analyzer.region]
list_findings_paginator = regional_client.get_paginator(
"list_findings"
try:
if analyzer.status == "ACTIVE":
regional_client = self.regional_clients[analyzer.region]
list_findings_paginator = regional_client.get_paginator(
"list_findings"
)
for page in list_findings_paginator.paginate(
analyzerArn=analyzer.arn
):
for finding in page["findings"]:
analyzer.findings.append(Finding(id=finding["id"]))
except ClientError as error:
if error.response["Error"]["Code"] == "ValidationException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
for page in list_findings_paginator.paginate(
analyzerArn=analyzer.arn
):
for finding in page["findings"]:
analyzer.findings.append(Finding(id=finding["id"]))
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -15,8 +15,8 @@ class ACM(AWSService):
super().__init__(__class__.__name__, provider)
self.certificates = []
self.__threading_call__(self.__list_certificates__)
self.__describe_certificates__()
self.__list_tags_for_certificate__()
self.__threading_call__(self.__describe_certificates__, self.certificates)
self.__threading_call__(self.__list_tags_for_certificate__, self.certificates)
def __list_certificates__(self, regional_client):
logger.info("ACM - Listing Certificates...")
@@ -59,33 +59,29 @@ class ACM(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_certificates__(self):
logger.info("ACM - Describing Certificates...")
def __describe_certificates__(self, certificate):
try:
for certificate in self.certificates:
regional_client = self.regional_clients[certificate.region]
response = regional_client.describe_certificate(
CertificateArn=certificate.arn
)["Certificate"]
if (
response["Options"]["CertificateTransparencyLoggingPreference"]
== "ENABLED"
):
certificate.transparency_logging = True
regional_client = self.regional_clients[certificate.region]
response = regional_client.describe_certificate(
CertificateArn=certificate.arn
)["Certificate"]
if (
response["Options"]["CertificateTransparencyLoggingPreference"]
== "ENABLED"
):
certificate.transparency_logging = True
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __list_tags_for_certificate__(self):
logger.info("ACM - List Tags...")
def __list_tags_for_certificate__(self, certificate):
try:
for certificate in self.certificates:
regional_client = self.regional_clients[certificate.region]
response = regional_client.list_tags_for_certificate(
CertificateArn=certificate.arn
)["Tags"]
certificate.tags = response
regional_client = self.regional_clients[certificate.region]
response = regional_client.list_tags_for_certificate(
CertificateArn=certificate.arn
)["Tags"]
certificate.tags = response
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -1,7 +1,7 @@
{
"Provider": "aws",
"CheckID": "apigateway_restapi_authorizers_enabled",
"CheckTitle": "Check if API Gateway has configured authorizers.",
"CheckTitle": "Check if API Gateway has configured authorizers at api or method level.",
"CheckAliases": [
"apigateway_authorizers_enabled"
],
@@ -13,7 +13,7 @@
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"Severity": "medium",
"ResourceType": "AwsApiGatewayRestApi",
"Description": "Check if API Gateway has configured authorizers.",
"Description": "Check if API Gateway has configured authorizers at api or method level.",
"Risk": "If no authorizer is enabled anyone can use the service.",
"RelatedUrl": "",
"Remediation": {

View File

@@ -13,12 +13,41 @@ class apigateway_restapi_authorizers_enabled(Check):
report.resource_id = rest_api.name
report.resource_arn = rest_api.arn
report.resource_tags = rest_api.tags
# if there are no authorizers at api level and resources have no methods (default case) ->
report.status = "FAIL"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have an authorizer configured at api level."
if rest_api.authorizer:
report.status = "PASS"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has an authorizer configured."
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has an authorizer configured at api level."
else:
report.status = "FAIL"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have an authorizer configured."
# we want to know if the api has no authorizers and all the resources have no methods configured
resources_have_methods = False
all_methods_authorized = True
resource_paths_with_unauthorized_methods = []
for resource in rest_api.resources:
# if the resource has methods, test if they all have a configured authorizer
if resource.resource_methods:
resources_have_methods = True
for (
http_method,
authorization_method,
) in resource.resource_methods.items():
if authorization_method == "NONE":
all_methods_authorized = False
unauthorized_method = (
resource.path + " -> " + http_method
)
resource_paths_with_unauthorized_methods.append(
unauthorized_method
)
# if there are methods in at least one resource and all are authorized
if all_methods_authorized and resources_have_methods:
report.status = "PASS"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has all methods authorized."
# if there are methods in at least one resource but some of them are not authorized -> list them
elif not all_methods_authorized:
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have authorizers at api level and the following paths and methods are unauthorized: {'; '.join(resource_paths_with_unauthorized_methods)}."
findings.append(report)
return findings
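The per-method scan in this check reduces to collecting every "path -> METHOD" pair whose authorization type is NONE; with hypothetical resource data:

```python
# Hypothetical resource data illustrating the per-method scan above.
resources = [
    {"path": "/items", "resource_methods": {"GET": "NONE", "POST": "AWS_IAM"}},
    {"path": "/health", "resource_methods": {}},
]
# Collect every "path -> METHOD" pair whose authorizationType is NONE
unauthorized = [
    f"{resource['path']} -> {http_method}"
    for resource in resources
    for http_method, auth in resource["resource_methods"].items()
    if auth == "NONE"
]
print("; ".join(unauthorized))  # /items -> GET
```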

View File

@@ -13,10 +13,11 @@ class APIGateway(AWSService):
# Call AWSService's __init__
super().__init__(__class__.__name__, provider)
self.rest_apis = []
self.__threading_call__(self.__get_rest_apis__)
self.__get_authorizers__()
self.__get_rest_api__()
self.__get_stages__()
self.__threading_call__(self.__get_rest_apis__, self.rest_apis)
self.__threading_call__(self.__get_authorizers__, self.rest_apis)
self.__threading_call__(self.__get_rest_api__, self.rest_apis)
self.__threading_call__(self.__get_stages__, self.rest_apis)
self.__threading_call__(self.__get_resources__, self.rest_apis)
def __get_rest_apis__(self, regional_client):
logger.info("APIGateway - Getting Rest APIs...")
@@ -42,60 +43,93 @@ class APIGateway(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_authorizers__(self):
logger.info("APIGateway - Getting Rest APIs authorizer...")
def __get_authorizers__(self, rest_api):
try:
for rest_api in self.rest_apis:
regional_client = self.regional_clients[rest_api.region]
authorizers = regional_client.get_authorizers(restApiId=rest_api.id)[
"items"
]
if authorizers:
rest_api.authorizer = True
regional_client = self.regional_clients[rest_api.region]
authorizers = regional_client.get_authorizers(restApiId=rest_api.id)[
"items"
]
if authorizers:
rest_api.authorizer = True
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_rest_api__(self):
logger.info("APIGateway - Describing Rest API...")
def __get_rest_api__(self, rest_api):
try:
for rest_api in self.rest_apis:
regional_client = self.regional_clients[rest_api.region]
rest_api_info = regional_client.get_rest_api(restApiId=rest_api.id)
if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
rest_api.public_endpoint = False
regional_client = self.regional_clients[rest_api.region]
rest_api_info = regional_client.get_rest_api(restApiId=rest_api.id)
if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
rest_api.public_endpoint = False
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_stages__(self):
logger.info("APIGateway - Getting stages for Rest APIs...")
def __get_stages__(self, rest_api):
try:
for rest_api in self.rest_apis:
regional_client = self.regional_clients[rest_api.region]
stages = regional_client.get_stages(restApiId=rest_api.id)
for stage in stages["item"]:
waf = None
logging = False
client_certificate = False
if "webAclArn" in stage:
waf = stage["webAclArn"]
if "methodSettings" in stage:
if stage["methodSettings"]:
logging = True
if "clientCertificateId" in stage:
client_certificate = True
arn = f"arn:{self.audited_partition}:apigateway:{regional_client.region}::/restapis/{rest_api.id}/stages/{stage['stageName']}"
rest_api.stages.append(
Stage(
name=stage["stageName"],
arn=arn,
logging=logging,
client_certificate=client_certificate,
waf=waf,
tags=[stage.get("tags")],
regional_client = self.regional_clients[rest_api.region]
stages = regional_client.get_stages(restApiId=rest_api.id)
for stage in stages["item"]:
waf = None
logging = False
client_certificate = False
if "webAclArn" in stage:
waf = stage["webAclArn"]
if "methodSettings" in stage:
if stage["methodSettings"]:
logging = True
if "clientCertificateId" in stage:
client_certificate = True
arn = f"arn:{self.audited_partition}:apigateway:{regional_client.region}::/restapis/{rest_api.id}/stages/{stage['stageName']}"
rest_api.stages.append(
Stage(
name=stage["stageName"],
arn=arn,
logging=logging,
client_certificate=client_certificate,
waf=waf,
tags=[stage.get("tags")],
)
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_resources__(self, rest_api):
try:
regional_client = self.regional_clients[rest_api.region]
get_resources_paginator = regional_client.get_paginator("get_resources")
for page in get_resources_paginator.paginate(restApiId=rest_api.id):
for resource in page["items"]:
id = resource["id"]
resource_methods = []
methods_auth = {}
for resource_method in resource.get("resourceMethods", {}).keys():
resource_methods.append(resource_method)
for resource_method in resource_methods:
if resource_method != "OPTIONS":
method_config = regional_client.get_method(
restApiId=rest_api.id,
resourceId=id,
httpMethod=resource_method,
)
auth_type = method_config["authorizationType"]
methods_auth.update({resource_method: auth_type})
rest_api.resources.append(
PathResourceMethods(
path=resource["path"], resource_methods=methods_auth
)
)
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
class Stage(BaseModel):
@@ -107,6 +141,11 @@ class Stage(BaseModel):
tags: Optional[list] = []
class PathResourceMethods(BaseModel):
path: str
resource_methods: dict
class RestAPI(BaseModel):
id: str
arn: str
@@ -116,3 +155,4 @@ class RestAPI(BaseModel):
public_endpoint: bool = True
stages: list[Stage] = []
tags: Optional[list] = []
resources: list[PathResourceMethods] = []

View File

@@ -13,9 +13,9 @@ class ApiGatewayV2(AWSService):
# Call AWSService's __init__
super().__init__(__class__.__name__, provider)
self.apis = []
self.__threading_call__(self.__get_apis__)
self.__get_authorizers__()
self.__get_stages__()
self.__threading_call__(self.__get_apis__, self.apis)
self.__threading_call__(self.__get_authorizers__, self.apis)
self.__threading_call__(self.__get_stages__, self.apis)
def __get_apis__(self, regional_client):
logger.info("APIGatewayv2 - Getting APIs...")
@@ -41,36 +41,32 @@ class ApiGatewayV2(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_authorizers__(self):
logger.info("APIGatewayv2 - Getting APIs authorizer...")
def __get_authorizers__(self, api):
try:
for api in self.apis:
regional_client = self.regional_clients[api.region]
authorizers = regional_client.get_authorizers(ApiId=api.id)["Items"]
if authorizers:
api.authorizer = True
regional_client = self.regional_clients[api.region]
authorizers = regional_client.get_authorizers(ApiId=api.id)["Items"]
if authorizers:
api.authorizer = True
except Exception as error:
logger.error(
f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"
)
def __get_stages__(self):
logger.info("APIGatewayv2 - Getting stages for APIs...")
def __get_stages__(self, api):
try:
for api in self.apis:
regional_client = self.regional_clients[api.region]
stages = regional_client.get_stages(ApiId=api.id)
for stage in stages["Items"]:
logging = False
if "AccessLogSettings" in stage:
logging = True
api.stages.append(
Stage(
name=stage["StageName"],
logging=logging,
tags=[stage.get("Tags")],
)
regional_client = self.regional_clients[api.region]
stages = regional_client.get_stages(ApiId=api.id)
for stage in stages["Items"]:
logging = False
if "AccessLogSettings" in stage:
logging = True
api.stages.append(
Stage(
name=stage["StageName"],
logging=logging,
tags=[stage.get("Tags")],
)
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"

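The refactor above repeatedly swaps a per-region loop (`for api in self.apis:`) for a per-resource threaded call, `self.__threading_call__(method, iterator)`. The diff never shows that helper itself, so the following is only a sketch of what such a helper could look like, assuming a plain `ThreadPoolExecutor` fan-out; the name matches the diff but the body is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_WORKERS = 10  # assumed pool size; not taken from the diff


def threading_call(call, iterator):
    """Run call(item) for every item in iterator using a thread pool.

    Each worker receives a single resource (an API, stage, fleet, ...),
    so the called method no longer loops over the whole collection itself.
    """
    items = list(iterator)
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
        # Submit one task per resource, then wait for all of them to finish.
        futures = [executor.submit(call, item) for item in items]
        for future in futures:
            future.result()


# Usage sketch: append every item from a worker thread.
results = []
threading_call(results.append, range(5))
```

This keeps the per-resource methods simple: each one takes exactly one resource and looks up its own regional client, which is the shape the new `__get_authorizers__(self, api)` and `__get_stages__(self, api)` signatures follow.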

@@ -14,7 +14,7 @@ class AppStream(AWSService):
super().__init__(__class__.__name__, provider)
self.fleets = []
self.__threading_call__(self.__describe_fleets__)
self.__list_tags_for_resource__()
self.__threading_call__(self.__list_tags_for_resource__, self.fleets)
def __describe_fleets__(self, regional_client):
logger.info("AppStream - Describing Fleets...")
@@ -50,15 +50,13 @@ class AppStream(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __list_tags_for_resource__(self):
logger.info("AppStream - List Tags...")
def __list_tags_for_resource__(self, fleet):
try:
for fleet in self.fleets:
regional_client = self.regional_clients[fleet.region]
response = regional_client.list_tags_for_resource(
ResourceArn=fleet.arn
)["Tags"]
fleet.tags = [response]
regional_client = self.regional_clients[fleet.region]
response = regional_client.list_tags_for_resource(ResourceArn=fleet.arn)[
"Tags"
]
fleet.tags = [response]
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"


@@ -14,9 +14,13 @@ class Athena(AWSService):
super().__init__(__class__.__name__, provider)
self.workgroups = {}
self.__threading_call__(self.__list_workgroups__)
self.__get_workgroups__()
self.__list_query_executions__()
self.__list_tags_for_resource__()
self.__threading_call__(self.__get_workgroups__, self.workgroups.values())
self.__threading_call__(
self.__list_query_executions__, self.workgroups.values()
)
self.__threading_call__(
self.__list_tags_for_resource__, self.workgroups.values()
)
def __list_workgroups__(self, regional_client):
logger.info("Athena - Listing WorkGroups...")
@@ -44,86 +48,65 @@ class Athena(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_workgroups__(self):
logger.info("Athena - Getting WorkGroups...")
def __get_workgroups__(self, workgroup):
try:
for workgroup in self.workgroups.values():
try:
wg = self.regional_clients[workgroup.region].get_work_group(
WorkGroup=workgroup.name
)
wg = self.regional_clients[workgroup.region].get_work_group(
WorkGroup=workgroup.name
)
wg_configuration = wg.get("WorkGroup").get("Configuration")
self.workgroups[
workgroup.arn
].enforce_workgroup_configuration = wg_configuration.get(
"EnforceWorkGroupConfiguration", False
)
wg_configuration = wg.get("WorkGroup").get("Configuration")
self.workgroups[
workgroup.arn
].enforce_workgroup_configuration = wg_configuration.get(
"EnforceWorkGroupConfiguration", False
)
# We include an empty EncryptionConfiguration to handle if the workgroup does not have encryption configured
encryption = (
wg_configuration.get(
"ResultConfiguration",
{"EncryptionConfiguration": {}},
)
.get(
"EncryptionConfiguration",
{"EncryptionOption": ""},
)
.get("EncryptionOption")
)
# We include an empty EncryptionConfiguration to handle if the workgroup does not have encryption configured
encryption = (
wg_configuration.get(
"ResultConfiguration",
{"EncryptionConfiguration": {}},
)
.get(
"EncryptionConfiguration",
{"EncryptionOption": ""},
)
.get("EncryptionOption")
)
if encryption in ["SSE_S3", "SSE_KMS", "CSE_KMS"]:
encryption_configuration = EncryptionConfiguration(
encryption_option=encryption, encrypted=True
)
self.workgroups[
workgroup.arn
].encryption_configuration = encryption_configuration
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
if encryption in ["SSE_S3", "SSE_KMS", "CSE_KMS"]:
encryption_configuration = EncryptionConfiguration(
encryption_option=encryption, encrypted=True
)
self.workgroups[
workgroup.arn
].encryption_configuration = encryption_configuration
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
f"{workgroup.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __list_query_executions__(self):
logger.info("Athena - Listing Queries...")
def __list_query_executions__(self, workgroup):
try:
for workgroup in self.workgroups.values():
try:
queries = (
self.regional_clients[workgroup.region]
.list_query_executions(WorkGroup=workgroup.name)
.get("QueryExecutionIds", [])
)
if queries:
workgroup.queries = True
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
queries = (
self.regional_clients[workgroup.region]
.list_query_executions(WorkGroup=workgroup.name)
.get("QueryExecutionIds", [])
)
if queries:
workgroup.queries = True
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
f"{workgroup.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __list_tags_for_resource__(self):
logger.info("Athena - Listing Tags...")
def __list_tags_for_resource__(self, workgroup):
try:
for workgroup in self.workgroups.values():
try:
regional_client = self.regional_clients[workgroup.region]
workgroup.tags = regional_client.list_tags_for_resource(
ResourceARN=workgroup.arn
)["Tags"]
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
regional_client = self.regional_clients[workgroup.region]
workgroup.tags = regional_client.list_tags_for_resource(
ResourceARN=workgroup.arn
)["Tags"]
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

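The Athena code above extracts the encryption option through chained `.get()` calls whose defaults are shaped like the next nesting level, so a workgroup with no result configuration yields an empty string instead of raising `KeyError`. That pattern can be sketched in isolation; the sample payloads below are invented for illustration:

```python
def extract_encryption_option(wg_configuration: dict) -> str:
    """Return the workgroup's EncryptionOption, or "" when not configured.

    Each .get() default mirrors the shape of the next level, so a missing
    ResultConfiguration or EncryptionConfiguration never raises.
    """
    return (
        wg_configuration.get("ResultConfiguration", {"EncryptionConfiguration": {}})
        .get("EncryptionConfiguration", {"EncryptionOption": ""})
        .get("EncryptionOption", "")
    )


# Fabricated sample payloads, shaped like Athena's GetWorkGroup Configuration.
encrypted = {
    "ResultConfiguration": {
        "EncryptionConfiguration": {"EncryptionOption": "SSE_KMS"}
    }
}
unencrypted = {}  # no ResultConfiguration at all
```

Compared with a `try`/`except KeyError`, this keeps the happy path and the absent-config path on the same line of code, which is why the diff folds the old inner `try` block away.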

@@ -8,7 +8,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
class awslambda_function_invoke_api_operations_cloudtrail_logging_enabled(Check):
def execute(self):
findings = []
for function in awslambda_client.functions.values():
functions = awslambda_client.functions.values()
self.start_task("Processing functions...", len(functions))
for function in functions:
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
@@ -49,5 +51,7 @@ class awslambda_function_invoke_api_operations_cloudtrail_logging_enabled(Check)
report.status_extended = f"Lambda function {function.name} is recorded by CloudTrail trail {trail.name}."
break
findings.append(report)
self.increment_task_progress()
self.update_title_with_findings(findings)
return findings

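The `start_task` / `increment_task_progress` / `update_title_with_findings` calls added to each check suggest a shared progress API on the base `Check` class. The diff does not show that implementation, so the following is only a minimal counter-based sketch under that assumption; the attribute names are hypothetical:

```python
class ProgressMixin:
    """Hypothetical progress tracker mirroring the calls used in the checks."""

    def start_task(self, description: str, total: int) -> None:
        # Remember what we are doing and how many items there are.
        self.task_description = description
        self.task_total = total
        self.task_done = 0

    def increment_task_progress(self) -> None:
        # Called once per processed resource inside the check's loop.
        self.task_done += 1

    def update_title_with_findings(self, findings: list) -> None:
        # In a real UI this would refresh a progress-bar title.
        self.title = (
            f"{self.task_description} "
            f"{self.task_done}/{self.task_total} "
            f"({len(findings)} findings)"
        )


tracker = ProgressMixin()
tracker.start_task("Processing functions...", 3)
for _ in range(3):
    tracker.increment_task_progress()
tracker.update_title_with_findings(["finding"])
```

Whatever the real implementation, the call pattern in the checks is consistent: `start_task` with the resource count before the loop, one `increment_task_progress` per resource, and `update_title_with_findings` just before returning.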

@@ -11,57 +11,92 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
class awslambda_function_no_secrets_in_code(Check):
def execute(self):
findings = []
for function in awslambda_client.functions.values():
if function.code:
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
report.resource_arn = function.arn
report.resource_tags = function.tags
if awslambda_client.functions:
functions = awslambda_client.functions.values()
self.start_task("Processing functions...", len(functions))
for function, function_code in awslambda_client.__get_function_code__():
if function_code:
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
report.resource_arn = function.arn
report.resource_tags = function.tags
report.status = "PASS"
report.status_extended = (
f"No secrets found in Lambda function {function.name} code."
)
with tempfile.TemporaryDirectory() as tmp_dir_name:
function.code.code_zip.extractall(tmp_dir_name)
# List all files
files_in_zip = next(os.walk(tmp_dir_name))[2]
secrets_findings = []
for file in files_in_zip:
secrets = SecretsCollection()
with default_settings():
secrets.scan_file(f"{tmp_dir_name}/{file}")
detect_secrets_output = secrets.json()
if detect_secrets_output:
for (
file_name
) in (
detect_secrets_output.keys()
): # Appears that only 1 file is being scanned at a time, so could rework this
output_file_name = file_name.replace(
f"{tmp_dir_name}/", ""
)
secrets_string = ", ".join(
[
f"{secret['type']} on line {secret['line_number']}"
for secret in detect_secrets_output[file_name]
]
)
secrets_findings.append(
f"{output_file_name}: {secrets_string}"
)
report.status = "PASS"
report.status_extended = (
f"No secrets found in Lambda function {function.name} code."
)
with tempfile.TemporaryDirectory() as tmp_dir_name:
function_code.code_zip.extractall(tmp_dir_name)
# List all files
files_in_zip = next(os.walk(tmp_dir_name))[2]
secrets_findings = []
for file in files_in_zip:
secrets = SecretsCollection()
with default_settings():
secrets.scan_file(f"{tmp_dir_name}/{file}")
detect_secrets_output = secrets.json()
if detect_secrets_output:
for (
file_name
) in (
detect_secrets_output.keys()
): # Appears that only 1 file is being scanned at a time, so could rework this
output_file_name = file_name.replace(
f"{tmp_dir_name}/", ""
)
secrets_string = ", ".join(
[
f"{secret['type']} on line {secret['line_number']}"
for secret in detect_secrets_output[
file_name
]
]
)
secrets_findings.append(
f"{output_file_name}: {secrets_string}"
)
if secrets_findings:
final_output_string = "; ".join(secrets_findings)
report.status = "FAIL"
# report.status_extended = f"Potential {'secrets' if len(secrets_findings)>1 else 'secret'} found in Lambda function {function.name} code. {final_output_string}."
if len(secrets_findings) > 1:
report.status_extended = f"Potential secrets found in Lambda function {function.name} code -> {final_output_string}."
else:
report.status_extended = f"Potential secret found in Lambda function {function.name} code -> {final_output_string}."
# Do not break here, as there may be additional findings
findings.append(report)
if secrets_findings:
final_output_string = "; ".join(secrets_findings)
report.status = "FAIL"
report.status_extended = f"Potential {'secrets' if len(secrets_findings) > 1 else 'secret'} found in Lambda function {function.name} code -> {final_output_string}."
findings.append(report)
self.increment_task_progress()
self.update_title_with_findings(findings)
return findings

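The check above flattens detect_secrets' JSON output (a mapping of file name to a list of entries with `type` and `line_number`) into a single status string, one `"file: type on line N"` clause per file, joined with `"; "`. That formatting step can be sketched in isolation; the sample output below is fabricated, since real detect_secrets results depend on its configured plugins:

```python
def format_secret_findings(detect_secrets_output: dict, tmp_dir_name: str) -> str:
    """Join per-file secret findings into one 'file: type on line N' string."""
    secrets_findings = []
    for file_name, secrets in detect_secrets_output.items():
        # Strip the temporary extraction directory from the reported path.
        output_file_name = file_name.replace(f"{tmp_dir_name}/", "")
        secrets_string = ", ".join(
            f"{secret['type']} on line {secret['line_number']}"
            for secret in secrets
        )
        secrets_findings.append(f"{output_file_name}: {secrets_string}")
    return "; ".join(secrets_findings)


# Fabricated sample shaped like SecretsCollection().json() output.
sample = {
    "/tmp/scan/app.py": [
        {"type": "AWS Access Key", "line_number": 3},
        {"type": "Secret Keyword", "line_number": 9},
    ]
}
status = format_secret_findings(sample, "/tmp/scan")
```

Keeping this as a pure function of the scanner output would also make the `status_extended` wording testable without unzipping any Lambda code.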

@@ -12,6 +12,8 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
class awslambda_function_no_secrets_in_variables(Check):
def execute(self):
findings = []
functions = awslambda_client.functions.values()
self.start_task("Processing functions...", len(functions))
for function in functions:
report = Check_Report_AWS(self.metadata())
report.region = function.region
@@ -52,5 +54,6 @@ class awslambda_function_no_secrets_in_variables(Check):
os.remove(temp_env_data_file.name)
findings.append(report)
self.increment_task_progress()
self.update_title_with_findings(findings)
return findings


@@ -5,7 +5,9 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
class awslambda_function_not_publicly_accessible(Check):
def execute(self):
findings = []
for function in awslambda_client.functions.values():
functions = awslambda_client.functions.values()
self.start_task("Processing functions...", len(functions))
for function in functions:
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
@@ -39,5 +41,6 @@ class awslambda_function_not_publicly_accessible(Check):
report.status_extended = f"Lambda function {function.name} has a policy resource-based policy with public access."
findings.append(report)
self.increment_task_progress()
self.update_title_with_findings(findings)
return findings


@@ -5,6 +5,8 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
class awslambda_function_url_cors_policy(Check):
def execute(self):
findings = []
functions = awslambda_client.functions.values()
self.start_task("Processing functions...", len(functions))
for function in functions:
report = Check_Report_AWS(self.metadata())
report.region = function.region
@@ -20,5 +22,6 @@ class awslambda_function_url_cors_policy(Check):
report.status_extended = f"Lambda function {function.name} does not have a wide CORS configuration."
findings.append(report)
self.increment_task_progress()
self.update_title_with_findings(findings)
return findings


@@ -6,6 +6,8 @@ from prowler.providers.aws.services.awslambda.awslambda_service import AuthType
class awslambda_function_url_public(Check):
def execute(self):
findings = []
functions = awslambda_client.functions.values()
self.start_task("Processing functions...", len(functions))
for function in functions:
report = Check_Report_AWS(self.metadata())
report.region = function.region
@@ -21,5 +23,6 @@ class awslambda_function_url_public(Check):
report.status_extended = f"Lambda function {function.name} has a publicly accessible function URL."
findings.append(report)
self.increment_task_progress()
self.update_title_with_findings(findings)
return findings


@@ -5,6 +5,8 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
class awslambda_function_using_supported_runtimes(Check):
def execute(self):
findings = []
functions = awslambda_client.functions.values()
self.start_task("Processing functions...", len(functions))
for function in functions:
if function.runtime:
report = Check_Report_AWS(self.metadata())
@@ -23,5 +25,7 @@ class awslambda_function_using_supported_runtimes(Check):
report.status_extended = f"Lambda function {function.name} is using {function.runtime} which is supported."
findings.append(report)
self.increment_task_progress()
self.update_title_with_findings(findings)
return findings


@@ -1,6 +1,7 @@
import io
import json
import zipfile
from concurrent.futures import as_completed
from enum import Enum
from typing import Any, Optional
@@ -20,7 +21,9 @@ class Lambda(AWSService):
super().__init__(__class__.__name__, provider)
self.functions = {}
self.__threading_call__(self.__list_functions__)
self.__list_tags_for_resource__()
self.__threading_call__(
self.__list_tags_for_resource__, self.functions.values()
)
# We only want to retrieve the Lambda code if the
# awslambda_function_no_secrets_in_code check is set
@@ -28,13 +31,12 @@ class Lambda(AWSService):
"awslambda_function_no_secrets_in_code"
in provider.audit_metadata.expected_checks
):
self.__threading_call__(self.__get_function__)
self.__threading_call__(self.__get_function_code__, self.functions.values())
self.__threading_call__(self.__get_policy__)
self.__threading_call__(self.__get_function_url_config__)
self.__threading_call__(self.__get_policy__, self.functions.values())
self.__threading_call__(self.__get_function_url_config__, self.functions.values())
def __list_functions__(self, regional_client):
logger.info("Lambda - Listing Functions...")
try:
list_functions_paginator = regional_client.get_paginator("list_functions")
for page in list_functions_paginator.paginate():
@@ -62,7 +64,6 @@ class Lambda(AWSService):
"Variables"
)
self.functions[lambda_arn].environment = lambda_environment
except Exception as error:
logger.error(
f"{regional_client.region} --"
@@ -70,22 +71,56 @@ class Lambda(AWSService):
f" {error}"
)
def __get_function__(self, regional_client):
logger.info("Lambda - Getting Function...")
try:
for function in self.functions.values():
if function.region == regional_client.region:
function_information = regional_client.get_function(
FunctionName=function.name
)
if "Location" in function_information["Code"]:
code_location_uri = function_information["Code"]["Location"]
raw_code_zip = requests.get(code_location_uri).content
self.functions[function.arn].code = LambdaCode(
location=code_location_uri,
code_zip=zipfile.ZipFile(io.BytesIO(raw_code_zip)),
)
def __get_function_code__(self):
logger.info("Lambda - Getting Function Code...")
# Use a thread pool to handle the queueing and execution of the __fetch_function_code__ tasks, up to max_workers tasks concurrently.
lambda_functions_to_fetch = {
self.thread_pool.submit(
self.__fetch_function_code__, function.name, function.region
): function
for function in self.functions.values()
}
for fetched_lambda_code in as_completed(lambda_functions_to_fetch):
function = lambda_functions_to_fetch[fetched_lambda_code]
try:
function_code = fetched_lambda_code.result()
if function_code:
yield function, function_code
except Exception as error:
logger.error(
f"{function.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __fetch_function_code__(self, function_name, function_region):
try:
regional_client = self.regional_clients[function_region]
function_information = regional_client.get_function(
FunctionName=function_name
)
if "Location" in function_information["Code"]:
code_location_uri = function_information["Code"]["Location"]
raw_code_zip = requests.get(code_location_uri).content
return LambdaCode(
location=code_location_uri,
code_zip=zipfile.ZipFile(io.BytesIO(raw_code_zip)),
)
except Exception as error:
logger.error(
f"{regional_client.region} --"
f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
f" {error}"
)
raise
def __get_policy__(self, function):
try:
regional_client = self.regional_clients[function.region]
function_policy = regional_client.get_policy(FunctionName=function.name)
self.functions[function.arn].policy = json.loads(function_policy["Policy"])
except ClientError as e:
if e.response["Error"]["Code"] == "ResourceNotFoundException":
self.functions[function.arn].policy = {}
except Exception as error:
logger.error(
f"{regional_client.region} --"
@@ -93,22 +128,24 @@ class Lambda(AWSService):
f" {error}"
)
def __get_policy__(self, regional_client):
logger.info("Lambda - Getting Policy...")
def __get_function_url_config__(self, function):
try:
for function in self.functions.values():
if function.region == regional_client.region:
try:
function_policy = regional_client.get_policy(
FunctionName=function.name
)
self.functions[function.arn].policy = json.loads(
function_policy["Policy"]
)
except ClientError as e:
if e.response["Error"]["Code"] == "ResourceNotFoundException":
self.functions[function.arn].policy = {}
regional_client = self.regional_clients[function.region]
function_url_config = regional_client.get_function_url_config(
FunctionName=function.name
)
if "Cors" in function_url_config:
allow_origins = function_url_config["Cors"]["AllowOrigins"]
else:
allow_origins = []
self.functions[function.arn].url_config = URLConfig(
auth_type=function_url_config["AuthType"],
url=function_url_config["FunctionUrl"],
cors_config=URLConfigCORS(allow_origins=allow_origins),
)
except ClientError as e:
if e.response["Error"]["Code"] == "ResourceNotFoundException":
self.functions[function.arn].url_config = None
except Exception as error:
logger.error(
f"{regional_client.region} --"
@@ -116,47 +153,14 @@ class Lambda(AWSService):
f" {error}"
)
def __get_function_url_config__(self, regional_client):
logger.info("Lambda - Getting Function URL Config...")
def __list_tags_for_resource__(self, function):
try:
for function in self.functions.values():
if function.region == regional_client.region:
try:
function_url_config = regional_client.get_function_url_config(
FunctionName=function.name
)
if "Cors" in function_url_config:
allow_origins = function_url_config["Cors"]["AllowOrigins"]
else:
allow_origins = []
self.functions[function.arn].url_config = URLConfig(
auth_type=function_url_config["AuthType"],
url=function_url_config["FunctionUrl"],
cors_config=URLConfigCORS(allow_origins=allow_origins),
)
except ClientError as e:
if e.response["Error"]["Code"] == "ResourceNotFoundException":
self.functions[function.arn].url_config = None
except Exception as error:
logger.error(
f"{regional_client.region} --"
f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
f" {error}"
)
def __list_tags_for_resource__(self):
logger.info("Lambda - List Tags...")
try:
for function in self.functions.values():
try:
regional_client = self.regional_clients[function.region]
response = regional_client.list_tags(Resource=function.arn)["Tags"]
function.tags = [response]
except ClientError as e:
if e.response["Error"]["Code"] == "ResourceNotFoundException":
function.tags = []
regional_client = self.regional_clients[function.region]
response = regional_client.list_tags(Resource=function.arn)["Tags"]
function.tags = [response]
except ClientError as e:
if e.response["Error"]["Code"] == "ResourceNotFoundException":
function.tags = []
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

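`__get_function_code__` above turns the code download into a generator: it submits one fetch per function to a thread pool, then yields `(function, code)` pairs via `as_completed` so the secrets check can start scanning before every download has finished, and a single failed download skips only that function. A stdlib-only sketch of that submit-then-`as_completed` shape; `fetch_code` here is a stand-in, not the real `get_function` call:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def fetch_code(name: str) -> str:
    # Stand-in for downloading one function's deployment package.
    return f"code-for-{name}"


def iter_function_code(function_names, executor):
    """Yield (name, code) pairs as each download future completes."""
    # Map each future back to the function it belongs to.
    futures = {executor.submit(fetch_code, name): name for name in function_names}
    for future in as_completed(futures):
        name = futures[future]
        try:
            yield name, future.result()
        except Exception:
            # A failed download skips that function instead of aborting the scan.
            continue


with ThreadPoolExecutor(max_workers=4) as executor:
    results = dict(iter_function_code(["a", "b", "c"], executor))
```

Yielding from `as_completed` rather than collecting into a list is what lets the consuming check overlap network downloads with CPU-bound secret scanning.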

@@ -15,7 +15,7 @@ class CloudFormation(AWSService):
super().__init__(__class__.__name__, provider)
self.stacks = []
self.__threading_call__(self.__describe_stacks__)
self.__describe_stack__()
self.__threading_call__(self.__describe_stack__, self.stacks)
def __describe_stacks__(self, regional_client):
"""Get ALL CloudFormation Stacks"""
@@ -47,33 +47,30 @@ class CloudFormation(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_stack__(self):
def __describe_stack__(self, stack):
"""Get Details for a CloudFormation Stack"""
logger.info("CloudFormation - Describing Stack to get specific details...")
for stack in self.stacks:
try:
stack_details = self.regional_clients[stack.region].describe_stacks(
StackName=stack.name
)
# Termination Protection
stack.enable_termination_protection = stack_details["Stacks"][0][
"EnableTerminationProtection"
]
# Nested Stack
if "RootId" in stack_details["Stacks"][0]:
stack.root_nested_stack = stack_details["Stacks"][0]["RootId"]
stack.is_nested_stack = True if stack.root_nested_stack != "" else False
try:
stack_details = self.regional_clients[stack.region].describe_stacks(
StackName=stack.name
)
# Termination Protection
stack.enable_termination_protection = stack_details["Stacks"][0][
"EnableTerminationProtection"
]
# Nested Stack
if "RootId" in stack_details["Stacks"][0]:
stack.root_nested_stack = stack_details["Stacks"][0]["RootId"]
stack.is_nested_stack = True if stack.root_nested_stack != "" else False
except ClientError as error:
if error.response["Error"]["Code"] == "ValidationError":
logger.warning(
f"{stack.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
continue
except Exception as error:
logger.error(
except ClientError as error:
if error.response["Error"]["Code"] == "ValidationError":
logger.warning(
f"{stack.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{stack.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
class Stack(BaseModel):


@@ -15,9 +15,18 @@ class CloudFront(AWSService):
super().__init__(__class__.__name__, provider, global_service=True)
self.distributions = {}
self.__list_distributions__(self.client, self.region)
self.__get_distribution_config__(self.client, self.distributions, self.region)
self.__list_tags_for_resource__(self.client, self.distributions, self.region)
self.__threading_call__(
self.__get_distribution_config__,
iterator=self.distributions,
args=(self.client, self.region),
)
self.__threading_call__(
self.__list_tags_for_resource__,
iterator=self.distributions,
args=(self.client, self.region),
)
@AWSService.progress_decorator
def __list_distributions__(self, client, region) -> dict:
logger.info("CloudFront - Listing Distributions...")
try:
@@ -44,57 +53,52 @@ class CloudFront(AWSService):
f"{region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_distribution_config__(self, client, distributions, region) -> dict:
logger.info("CloudFront - Getting Distributions...")
def __get_distribution_config__(self, distribution_id, client, region) -> dict:
try:
for distribution_id in distributions.keys():
distribution_config = client.get_distribution_config(Id=distribution_id)
# Global Config
distributions[distribution_id].logging_enabled = distribution_config[
"DistributionConfig"
]["Logging"]["Enabled"]
distributions[
distribution_id
].geo_restriction_type = GeoRestrictionType(
distribution_config["DistributionConfig"]["Restrictions"][
"GeoRestriction"
]["RestrictionType"]
)
distributions[distribution_id].web_acl_id = distribution_config[
"DistributionConfig"
]["WebACLId"]
distribution_config = client.get_distribution_config(Id=distribution_id)
# Global Config
self.distributions[distribution_id].logging_enabled = distribution_config[
"DistributionConfig"
]["Logging"]["Enabled"]
self.distributions[
distribution_id
].geo_restriction_type = GeoRestrictionType(
distribution_config["DistributionConfig"]["Restrictions"][
"GeoRestriction"
]["RestrictionType"]
)
self.distributions[distribution_id].web_acl_id = distribution_config[
"DistributionConfig"
]["WebACLId"]
# Default Cache Config
default_cache_config = DefaultCacheConfigBehaviour(
realtime_log_config_arn=distribution_config["DistributionConfig"][
# Default Cache Config
default_cache_config = DefaultCacheConfigBehaviour(
realtime_log_config_arn=distribution_config["DistributionConfig"][
"DefaultCacheBehavior"
].get("RealtimeLogConfigArn"),
viewer_protocol_policy=ViewerProtocolPolicy(
distribution_config["DistributionConfig"][
"DefaultCacheBehavior"
].get("RealtimeLogConfigArn"),
viewer_protocol_policy=ViewerProtocolPolicy(
distribution_config["DistributionConfig"][
"DefaultCacheBehavior"
].get("ViewerProtocolPolicy")
),
field_level_encryption_id=distribution_config["DistributionConfig"][
"DefaultCacheBehavior"
].get("FieldLevelEncryptionId"),
)
distributions[
distribution_id
].default_cache_config = default_cache_config
].get("ViewerProtocolPolicy")
),
field_level_encryption_id=distribution_config["DistributionConfig"][
"DefaultCacheBehavior"
].get("FieldLevelEncryptionId"),
)
self.distributions[
distribution_id
].default_cache_config = default_cache_config
except Exception as error:
logger.error(
f"{region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __list_tags_for_resource__(self, client, distributions, region):
def __list_tags_for_resource__(self, distribution, client, region):
logger.info("CloudFront - List Tags...")
try:
for distribution in distributions.values():
response = client.list_tags_for_resource(Resource=distribution.arn)[
"Tags"
]
distribution.tags = response.get("Items")
response = client.list_tags_for_resource(Resource=distribution.arn)["Tags"]
distribution.tags = response.get("Items")
except Exception as error:
logger.error(
f"{region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"


@@ -16,10 +16,10 @@ class Cloudtrail(AWSService):
super().__init__(__class__.__name__, provider)
self.trails = []
self.__threading_call__(self.__get_trails__)
self.__get_trail_status__()
self.__get_insight_selectors__()
self.__get_event_selectors__()
self.__list_tags_for_resource__()
self.__threading_call__(self.__get_trail_status__, self.trails)
self.__threading_call__(self.__get_insight_selectors__, self.trails)
self.__threading_call__(self.__get_event_selectors__, self.trails)
self.__threading_call__(self.__list_tags_for_resource__, self.trails)
def __get_trails__(self, regional_client):
logger.info("Cloudtrail - Getting trails...")
@@ -68,112 +68,102 @@ class Cloudtrail(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_trail_status__(self):
logger.info("Cloudtrail - Getting trail status")
def __get_trail_status__(self, trail):
try:
for trail in self.trails:
for region, client in self.regional_clients.items():
if trail.region == region and trail.name:
status = client.get_trail_status(Name=trail.arn)
trail.is_logging = status["IsLogging"]
if "LatestCloudWatchLogsDeliveryTime" in status:
trail.latest_cloudwatch_delivery_time = status[
"LatestCloudWatchLogsDeliveryTime"
]
regional_client = self.regional_clients[trail.region]
if trail.region and trail.name:
status = regional_client.get_trail_status(Name=trail.arn)
trail.is_logging = status["IsLogging"]
if "LatestCloudWatchLogsDeliveryTime" in status:
trail.latest_cloudwatch_delivery_time = status[
"LatestCloudWatchLogsDeliveryTime"
]
except Exception as error:
logger.error(
f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_event_selectors__(self):
logger.info("Cloudtrail - Getting event selector")
def __get_event_selectors__(self, trail):
try:
for trail in self.trails:
for region, client in self.regional_clients.items():
if trail.region == region and trail.name:
data_events = client.get_event_selectors(TrailName=trail.arn)
# EventSelectors
if (
"EventSelectors" in data_events
and data_events["EventSelectors"]
):
for event in data_events["EventSelectors"]:
event_selector = Event_Selector(
is_advanced=False, event_selector=event
)
trail.data_events.append(event_selector)
# AdvancedEventSelectors
elif (
"AdvancedEventSelectors" in data_events
and data_events["AdvancedEventSelectors"]
):
for event in data_events["AdvancedEventSelectors"]:
event_selector = Event_Selector(
is_advanced=True, event_selector=event
)
trail.data_events.append(event_selector)
regional_client = self.regional_clients[trail.region]
if trail.region and trail.name:
data_events = regional_client.get_event_selectors(TrailName=trail.arn)
# EventSelectors
if "EventSelectors" in data_events and data_events["EventSelectors"]:
for event in data_events["EventSelectors"]:
event_selector = Event_Selector(
is_advanced=False, event_selector=event
)
trail.data_events.append(event_selector)
# AdvancedEventSelectors
elif (
"AdvancedEventSelectors" in data_events
and data_events["AdvancedEventSelectors"]
):
for event in data_events["AdvancedEventSelectors"]:
event_selector = Event_Selector(
is_advanced=True, event_selector=event
)
trail.data_events.append(event_selector)
except Exception as error:
logger.error(
f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_insight_selectors__(self):
logger.info("Cloudtrail - Getting trail insight selectors...")
def __get_insight_selectors__(self, trail):
try:
for trail in self.trails:
for region, client in self.regional_clients.items():
if trail.region == region and trail.name:
insight_selectors = None
trail.has_insight_selectors = None
try:
client_insight_selectors = client.get_insight_selectors(
TrailName=trail.arn
)
insight_selectors = client_insight_selectors.get(
"InsightSelectors"
)
except ClientError as error:
if (
error.response["Error"]["Code"]
== "InsightNotEnabledException"
):
continue
else:
logger.error(
f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
continue
if insight_selectors:
trail.has_insight_selectors = insight_selectors[0].get(
"InsightType"
)
regional_client = self.regional_clients[trail.region]
if trail.region and trail.name:
insight_selectors = None
trail.has_insight_selectors = None
try:
client_insight_selectors = regional_client.get_insight_selectors(
TrailName=trail.arn
)
insight_selectors = client_insight_selectors.get("InsightSelectors")
except ClientError as error:
if error.response["Error"]["Code"] == "InsightNotEnabledException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
elif (
error.response["Error"]["Code"]
== "UnsupportedOperationException"
):
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
raise
if insight_selectors:
trail.has_insight_selectors = insight_selectors[0].get(
"InsightType"
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
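The insight-selector diff above downgrades two expected error codes from errors to warnings, since trails without CloudTrail Insights enabled (or trail types that do not support them) routinely raise them. A minimal sketch of that dispatch, using a simplified stand-in for `botocore.exceptions.ClientError` rather than the real class:

```python
class ClientError(Exception):
    """Simplified stand-in for botocore.exceptions.ClientError:
    carries the same response["Error"]["Code"] shape."""
    def __init__(self, code, message=""):
        super().__init__(message)
        self.response = {"Error": {"Code": code, "Message": message}}


def classify_insight_error(error):
    """Map a get_insight_selectors ClientError to a log level.

    InsightNotEnabledException and UnsupportedOperationException are
    expected for trails without Insights, so they are logged as
    warnings; anything else is a genuine error.
    """
    code = error.response["Error"]["Code"]
    if code in ("InsightNotEnabledException", "UnsupportedOperationException"):
        return "warning"
    return "error"
```

With this split, only unexpected codes surface through `logger.error`, keeping scan output quiet for the common case of trails that simply have no insight selectors.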
def __list_tags_for_resource__(self):
def __list_tags_for_resource__(self, trail):
logger.info("CloudTrail - List Tags...")
try:
for trail in self.trails:
# Check if trails are in this account and region
if (
trail.region == trail.home_region
and self.audited_account in trail.arn
):
regional_client = self.regional_clients[trail.region]
response = regional_client.list_tags(ResourceIdList=[trail.arn])[
"ResourceTagList"
][0]
trail.tags = response.get("TagsList")
# Check if trails are in this account and region
if trail.region == trail.home_region and self.audited_account in trail.arn:
regional_client = self.regional_clients[trail.region]
response = regional_client.list_tags(ResourceIdList=[trail.arn])[
"ResourceTagList"
][0]
trail.tags = response.get("TagsList")
except Exception as error:
    logger.error(
        f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
    )


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_changes_to_network_acls_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings
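The three-step block removed from each check above is centralized in `check_cloudwatch_log_metric_filter`. A self-contained sketch of such a helper, using simplified dataclass stand-ins for Prowler's Trail, MetricFilter, MetricAlarm, and report models (the real models carry more fields):

```python
import re
from dataclasses import dataclass


@dataclass
class Trail:
    log_group_arn: str = None

@dataclass
class MetricFilter:
    log_group: str
    name: str
    pattern: str
    arn: str
    region: str
    metric: str

@dataclass
class MetricAlarm:
    metric: str

@dataclass
class Report:
    status: str = "FAIL"
    status_extended: str = ""
    resource_id: str = ""
    resource_arn: str = ""
    region: str = ""


def check_cloudwatch_log_metric_filter(pattern, trails, metric_filters, metric_alarms, report):
    # 1. Collect the CloudWatch log groups that CloudTrail trails deliver to
    log_groups = [t.log_group_arn.split(":")[6] for t in trails if t.log_group_arn]
    # 2. Look for a metric filter on those log groups matching the pattern
    for mf in metric_filters:
        if mf.log_group in log_groups and re.search(pattern, mf.pattern, flags=re.DOTALL):
            report.resource_id = mf.log_group
            report.resource_arn = mf.arn
            report.region = mf.region
            report.status = "FAIL"
            report.status_extended = (
                f"CloudWatch log group {mf.log_group} found with metric filter "
                f"{mf.name} but no alarms associated."
            )
            # 3. PASS only if an alarm is wired to the filter's metric
            for alarm in metric_alarms:
                if alarm.metric == mf.metric:
                    report.status = "PASS"
                    report.status_extended = (
                        f"CloudWatch log group {mf.log_group} found with metric filter "
                        f"{mf.name} and alarms set."
                    )
                    break
    return report
```

Factoring this into one helper is what lets each of the thirteen checks in this diff shrink to a single `check_cloudwatch_log_metric_filter(...)` call, so a future fix to the matching logic lands in one place.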


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_changes_to_network_gateways_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_changes_to_network_route_tables_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_changes_to_vpcs_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -24,26 +25,13 @@ class cloudwatch_log_metric_filter_and_alarm_for_aws_config_configuration_change
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -24,26 +25,13 @@ class cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_change
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_log_metric_filter_authentication_failures(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_log_metric_filter_aws_organizations_changes(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_log_metric_filter_disable_or_scheduled_deletion_of_kms_cmk(Chec
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,14 @@ class cloudwatch_log_metric_filter_for_s3_bucket_policy_changes(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_log_metric_filter_policy_changes(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_log_metric_filter_root_usage(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_log_metric_filter_security_group_changes(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_log_metric_filter_sign_in_without_mfa(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings


@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_log_metric_filter_unauthorized_api_calls(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings

View File

@@ -0,0 +1,34 @@
import re
from prowler.lib.check.models import Check_Report_AWS
def check_cloudwatch_log_metric_filter(
metric_filter_pattern: str,
trails: list,
metric_filters: list,
metric_alarms: list,
report: Check_Report_AWS,
):
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in metric_filters:
if metric_filter.log_group in log_groups:
if re.search(metric_filter_pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
return report
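The extracted helper above consolidates the flow previously duplicated in the sign-in-without-MFA and unauthorized-API-calls checks. A minimal, self-contained sketch of the same three steps, using plain stand-in objects instead of Prowler's service models (the sample ARN and names here are illustrative assumptions):

```python
import re
from types import SimpleNamespace

def check_log_metric_filter(pattern, trails, metric_filters, metric_alarms, report):
    # 1. Collect CloudWatch log groups referenced by CloudTrail trails
    log_groups = [t.log_group_arn.split(":")[6] for t in trails if t.log_group_arn]
    # 2. Look for a metric filter on those log groups matching the pattern
    for mf in metric_filters:
        if mf.log_group in log_groups and re.search(pattern, mf.pattern, re.DOTALL):
            report.status = "FAIL"  # filter exists but no alarm wired up yet
            # 3. PASS only if some alarm watches the filter's metric
            if any(alarm.metric == mf.metric for alarm in metric_alarms):
                report.status = "PASS"
    return report

trail = SimpleNamespace(
    log_group_arn="arn:aws:logs:eu-west-1:123456789012:log-group:trail-lg"
)
mf = SimpleNamespace(log_group="trail-lg", pattern="ConsoleLogin", metric="login-metric")
alarm = SimpleNamespace(metric="login-metric")
report = SimpleNamespace(status="FAIL")
print(check_log_metric_filter("ConsoleLogin", [trail], [mf], [alarm], report).status)
```

With the alarm present the report flips to PASS; remove the alarm from the list and the matched filter alone leaves it at FAIL.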

View File

@@ -0,0 +1,4 @@
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.cognito.cognito_service import CognitoIDP
cognito_idp_client = CognitoIDP(current_audit_info)

View File

@@ -0,0 +1,122 @@
from datetime import datetime
from typing import Optional
from pydantic import BaseModel
from prowler.lib.logger import logger
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
from prowler.providers.aws.lib.service.service import AWSService
################## CognitoIDP
class CognitoIDP(AWSService):
def __init__(self, audit_info):
super().__init__("cognito-idp", audit_info)
self.user_pools = {}
self.__threading_call__(self.__list_user_pools__)
self.__describe_user_pools__()
self.__get_user_pool_mfa_config__()
def __list_user_pools__(self, regional_client):
logger.info("Cognito - Listing User Pools...")
try:
user_pools_paginator = regional_client.get_paginator("list_user_pools")
for page in user_pools_paginator.paginate(MaxResults=60):
for user_pool in page["UserPools"]:
arn = f"arn:{self.audited_partition}:cognito-idp:{regional_client.region}:{self.audited_account}:userpool/{user_pool['Id']}"
if not self.audit_resources or (
is_resource_filtered(arn, self.audit_resources)
):
try:
self.user_pools[arn] = UserPool(
id=user_pool["Id"],
arn=arn,
name=user_pool["Name"],
region=regional_client.region,
last_modified=user_pool["LastModifiedDate"],
creation_date=user_pool["CreationDate"],
status=user_pool.get("Status", "Disabled"),
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_user_pools__(self):
logger.info("Cognito - Describing User Pools...")
try:
for user_pool in self.user_pools.values():
try:
user_pool_details = self.regional_clients[
user_pool.region
].describe_user_pool(UserPoolId=user_pool.id)["UserPool"]
user_pool.password_policy = user_pool_details.get(
"Policies", {}
).get("PasswordPolicy", {})
user_pool.deletion_protection = user_pool_details.get(
"DeletionProtection", "INACTIVE"
)
user_pool.advanced_security_mode = user_pool_details.get(
"UserPoolAddOns", {}
).get("AdvancedSecurityMode", "OFF")
user_pool.tags = [user_pool_details.get("UserPoolTags", "")]
except Exception as error:
logger.error(
f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_user_pool_mfa_config__(self):
logger.info("Cognito - Getting User Pool MFA Configuration...")
try:
for user_pool in self.user_pools.values():
try:
mfa_config = self.regional_clients[
user_pool.region
].get_user_pool_mfa_config(UserPoolId=user_pool.id)
if mfa_config["MfaConfiguration"] != "OFF":
user_pool.mfa_config = MFAConfig(
sms_authentication=mfa_config.get(
"SmsMfaConfiguration", {}
),
software_token_mfa_authentication=mfa_config.get(
"SoftwareTokenMfaConfiguration", {}
),
status=mfa_config["MfaConfiguration"],
)
except Exception as error:
logger.error(
f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{user_pool.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
class MFAConfig(BaseModel):
sms_authentication: Optional[dict]
software_token_mfa_authentication: Optional[dict]
status: str
class UserPool(BaseModel):
id: str
arn: str
name: str
region: str
advanced_security_mode: str = "OFF"
deletion_protection: str = "INACTIVE"
last_modified: datetime
creation_date: datetime
status: str
password_policy: Optional[dict]
mfa_config: Optional[MFAConfig]
tags: Optional[list] = []

View File

@@ -17,7 +17,8 @@ class EC2(AWSService):
super().__init__(__class__.__name__, provider)
self.instances = []
self.__threading_call__(self.__describe_instances__)
self.__get_instance_user_data__()
self.__threading_call__(self.__get_instance_user_data__, self.instances)
self.security_groups = []
self.regions_with_sgs = []
self.__threading_call__(self.__describe_security_groups__)
@@ -27,7 +28,7 @@ class EC2(AWSService):
self.volumes_with_snapshots = {}
self.regions_with_snapshots = {}
self.__threading_call__(self.__describe_snapshots__)
self.__get_snapshot_public__()
self.__threading_call__(self.__get_snapshot_public__, self.snapshots)
self.network_interfaces = []
self.__threading_call__(self.__describe_public_network_interfaces__)
self.__threading_call__(self.__describe_sg_network_interfaces__)
@@ -36,12 +37,11 @@ class EC2(AWSService):
self.volumes = []
self.__threading_call__(self.__describe_volumes__)
self.ebs_encryption_by_default = []
self.__threading_call__(self.__get_ebs_encryption_by_default__)
self.__threading_call__(self.__get_ebs_encryption_settings__)
self.elastic_ips = []
self.__threading_call__(self.__describe_addresses__)
self.__threading_call__(self.__describe_ec2_addresses__)
def __describe_instances__(self, regional_client):
logger.info("EC2 - Describing EC2 Instances...")
try:
describe_instances_paginator = regional_client.get_paginator(
"describe_instances"
@@ -106,7 +106,6 @@ class EC2(AWSService):
)
def __describe_security_groups__(self, regional_client):
logger.info("EC2 - Describing Security Groups...")
try:
describe_security_groups_paginator = regional_client.get_paginator(
"describe_security_groups"
@@ -155,7 +154,6 @@ class EC2(AWSService):
)
def __describe_network_acls__(self, regional_client):
logger.info("EC2 - Describing Network ACLs...")
try:
describe_network_acls_paginator = regional_client.get_paginator(
"describe_network_acls"
@@ -186,7 +184,6 @@ class EC2(AWSService):
)
def __describe_snapshots__(self, regional_client):
logger.info("EC2 - Describing Snapshots...")
try:
snapshots_in_region = False
describe_snapshots_paginator = regional_client.get_paginator(
@@ -219,35 +216,31 @@ class EC2(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_snapshot_public__(self):
def __get_snapshot_public__(self, snapshot):
logger.info("EC2 - Getting snapshot volume attribute permissions...")
for snapshot in self.snapshots:
try:
regional_client = self.regional_clients[snapshot.region]
snapshot_public = regional_client.describe_snapshot_attribute(
Attribute="createVolumePermission", SnapshotId=snapshot.id
)
for permission in snapshot_public["CreateVolumePermissions"]:
if "Group" in permission:
if permission["Group"] == "all":
snapshot.public = True
try:
regional_client = self.regional_clients[snapshot.region]
snapshot_public = regional_client.describe_snapshot_attribute(
Attribute="createVolumePermission", SnapshotId=snapshot.id
)
for permission in snapshot_public["CreateVolumePermissions"]:
if "Group" in permission:
if permission["Group"] == "all":
snapshot.public = True
except ClientError as error:
if error.response["Error"]["Code"] == "InvalidSnapshot.NotFound":
logger.warning(
f"{snapshot.region} --"
f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
f" {error}"
)
continue
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
except ClientError as error:
if error.response["Error"]["Code"] == "InvalidSnapshot.NotFound":
logger.warning(
f"{snapshot.region} --"
f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
f" {error}"
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_public_network_interfaces__(self, regional_client):
logger.info("EC2 - Describing Network Interfaces...")
try:
# Get Network Interfaces with Public IPs
describe_network_interfaces_paginator = regional_client.get_paginator(
@@ -274,7 +267,6 @@ class EC2(AWSService):
)
def __describe_sg_network_interfaces__(self, regional_client):
logger.info("EC2 - Describing Network Interfaces...")
try:
# Get Network Interfaces for Security Groups
for sg in self.security_groups:
@@ -299,30 +291,26 @@ class EC2(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_instance_user_data__(self):
def __get_instance_user_data__(self, instance):
logger.info("EC2 - Getting instance user data...")
for instance in self.instances:
try:
regional_client = self.regional_clients[instance.region]
user_data = regional_client.describe_instance_attribute(
Attribute="userData", InstanceId=instance.id
)["UserData"]
if "Value" in user_data:
instance.user_data = user_data["Value"]
except ClientError as error:
if error.response["Error"]["Code"] == "InvalidInstanceID.NotFound":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
continue
except Exception as error:
logger.error(
try:
regional_client = self.regional_clients[instance.region]
user_data = regional_client.describe_instance_attribute(
Attribute="userData", InstanceId=instance.id
)["UserData"]
if "Value" in user_data:
instance.user_data = user_data["Value"]
except ClientError as error:
if error.response["Error"]["Code"] == "InvalidInstanceID.NotFound":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_images__(self, regional_client):
logger.info("EC2 - Describing Images...")
try:
for image in regional_client.describe_images(Owners=["self"])["Images"]:
arn = f"arn:{self.audited_partition}:ec2:{regional_client.region}:{self.audited_account}:image/{image['ImageId']}"
@@ -345,7 +333,6 @@ class EC2(AWSService):
)
def __describe_volumes__(self, regional_client):
logger.info("EC2 - Describing Volumes...")
try:
describe_volumes_paginator = regional_client.get_paginator(
"describe_volumes"
@@ -370,8 +357,7 @@ class EC2(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_addresses__(self, regional_client):
logger.info("EC2 - Describing Elastic IPs...")
def __describe_ec2_addresses__(self, regional_client):
try:
for address in regional_client.describe_addresses()["Addresses"]:
public_ip = None
@@ -402,8 +388,7 @@ class EC2(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_ebs_encryption_by_default__(self, regional_client):
logger.info("EC2 - Get EBS Encryption By Default...")
def __get_ebs_encryption_settings__(self, regional_client):
try:
volumes_in_region = False
for volume in self.volumes:

View File
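The EC2 diff above converts serial per-resource loops (`__get_instance_user_data__`, `__get_snapshot_public__`) into `__threading_call__` invocations that take an iterator of resources, so each instance or snapshot is fetched on its own thread. A rough sketch of that pattern under stated assumptions (the helper name and the stand-in fetch function are illustrative, not Prowler's exact implementation):

```python
import threading

def threading_call(call, iterator):
    # One thread per item: per instance/snapshot here, per region elsewhere
    threads = [threading.Thread(target=call, args=(item,)) for item in iterator]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

results = []
lock = threading.Lock()

def fetch_user_data(instance_id):
    # Stand-in for describe_instance_attribute(Attribute="userData", ...)
    with lock:
        results.append(f"user-data-for-{instance_id}")

threading_call(fetch_user_data, ["i-aaa", "i-bbb", "i-ccc"])
print(sorted(results))
```

Because `join()` runs before the results are read, the caller sees every per-item fetch completed, just as the service constructor does before checks consume the attributes.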

@@ -1,5 +1,6 @@
from typing import Optional
from botocore.exceptions import ClientError
from pydantic import BaseModel
from prowler.lib.logger import logger
@@ -73,7 +74,15 @@ class ElastiCache(AWSService):
cluster.tags = regional_client.list_tags_for_resource(
ResourceName=cluster.arn
)["TagList"]
except ClientError as error:
if error.response["Error"]["Code"] == "CacheClusterNotFound":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File
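The ElastiCache hunk above adds a `ClientError` branch so that an expected error code (a cluster deleted mid-scan) is logged as a warning while anything else stays an error. A sketch of that triage, with a stand-in exception class mirroring the `.response` shape of `botocore.exceptions.ClientError` to keep the example dependency-free:

```python
class ClientError(Exception):
    """Stand-in mirroring botocore.exceptions.ClientError's .response shape."""
    def __init__(self, code, message=""):
        super().__init__(message or code)
        self.response = {"Error": {"Code": code, "Message": message}}

def classify(error):
    # Expected, transient conditions are warnings; everything else is an error
    if isinstance(error, ClientError):
        if error.response["Error"]["Code"] == "CacheClusterNotFound":
            return "warning"
        return "error"
    return "error"

print(classify(ClientError("CacheClusterNotFound")))
print(classify(ClientError("AccessDenied")))
```

The same structure recurs in the IAM (`NoSuchEntity`) and S3 (`NoSuchBucket`) hunks below, each with its own expected code.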

@@ -33,7 +33,7 @@ class elbv2_insecure_ssl_ciphers(Check):
and listener.ssl_policy not in secure_ssl_policies
):
report.status = "FAIL"
report.status_extended = f"ELBv2 {lb.name} has listeners with insecure SSL protocols or ciphers."
report.status_extended = f"ELBv2 {lb.name} has listeners with insecure SSL protocols or ciphers ({listener.ssl_policy})."
findings.append(report)

View File

@@ -5,8 +5,6 @@ from prowler.lib.logger import logger
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
from prowler.providers.aws.lib.service.service import AWSService
# from prowler.providers.aws.aws_provider import generate_regional_clients
################## FMS
class FMS(AWSService):

View File

@@ -1,6 +1,6 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
is_account_only_allowed_in_condition,
is_condition_block_restrictive,
)
from prowler.providers.aws.services.iam.iam_client import iam_client
@@ -30,7 +30,7 @@ class iam_role_cross_service_confused_deputy_prevention(Check):
and "Service" in statement["Principal"]
# Check to see if the appropriate condition statements have been implemented
and "Condition" in statement
and is_account_only_allowed_in_condition(
and is_condition_block_restrictive(
statement["Condition"], iam_client.audited_account
)
):

View File
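Several hunks in this range rename `is_account_only_allowed_in_condition` to `is_condition_block_restrictive`. A hypothetical, much-simplified sketch of what such a predicate might check (the real Prowler parser handles many more operators and condition keys; the key set and the ignored third parameter here are assumptions for illustration):

```python
def is_condition_block_restrictive(condition, audited_account, is_cross_account_allowed=False):
    # is_cross_account_allowed is accepted to mirror the call sites but ignored in this sketch
    restrictive_keys = {"aws:sourceaccount", "aws:sourceowner", "aws:sourcearn"}
    for operator, key_values in condition.items():
        if not operator.lower().startswith("string"):
            continue  # only String* operators considered in this simplification
        for key, values in key_values.items():
            if key.lower() not in restrictive_keys:
                continue
            values = values if isinstance(values, list) else [values]
            # Restrictive only if every allowed value pins the audited account
            if all(audited_account in str(v) for v in values):
                return True
    return False

cond = {"StringEquals": {"aws:SourceAccount": "123456789012"}}
print(is_condition_block_restrictive(cond, "123456789012"))
print(is_condition_block_restrictive(cond, "999999999999"))
```

A condition pinning the audited account reads as restrictive; one pinning a different account does not, which is what lets the confused-deputy and public-access checks downgrade a finding.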

@@ -494,11 +494,30 @@ class IAM(AWSService):
document=inline_group_policy_doc,
)
)
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchEntity":
logger.warning(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
group.inline_policies = inline_group_policies
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchEntity":
logger.warning(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(

View File

@@ -1,5 +1,4 @@
import json
import threading
from typing import Optional
from botocore.client import ClientError
@@ -17,25 +16,26 @@ class S3(AWSService):
super().__init__(__class__.__name__, provider)
self.regions_with_buckets = []
self.buckets = self.__list_buckets__(provider)
self.__threading_call__(self.__get_bucket_versioning__)
self.__threading_call__(self.__get_bucket_logging__)
self.__threading_call__(self.__get_bucket_policy__)
self.__threading_call__(self.__get_bucket_acl__)
self.__threading_call__(self.__get_public_access_block__)
self.__threading_call__(self.__get_bucket_encryption__)
self.__threading_call__(self.__get_bucket_ownership_controls__)
self.__threading_call__(self.__get_object_lock_configuration__)
self.__threading_call__(self.__get_bucket_tagging__)
self.__threading_call__(self.__get_bucket_versioning__, self.buckets)
self.__threading_call__(self.__get_bucket_logging__, self.buckets)
self.__threading_call__(self.__get_bucket_policy__, self.buckets)
self.__threading_call__(self.__get_bucket_acl__, self.buckets)
self.__threading_call__(self.__get_public_access_block__, self.buckets)
self.__threading_call__(self.__get_bucket_encryption__, self.buckets)
self.__threading_call__(self.__get_bucket_ownership_controls__, self.buckets)
self.__threading_call__(self.__get_object_lock_configuration__, self.buckets)
self.__threading_call__(self.__get_bucket_tagging__, self.buckets)
# In the S3 service we override the "__threading_call__" method because we spawn a thread per bucket instead of per region
def __threading_call__(self, call):
threads = []
for bucket in self.buckets:
threads.append(threading.Thread(target=call, args=(bucket,)))
for t in threads:
t.start()
for t in threads:
t.join()
# TODO: Replace the above function with the service __threading_call__ using the buckets as the iterator
# def __threading_call__(self, call):
# threads = []
# for bucket in self.buckets:
# threads.append(threading.Thread(target=call, args=(bucket,)))
# for t in threads:
# t.start()
# for t in threads:
# t.join()
def __list_buckets__(self, provider):
logger.info("S3 - Listing buckets...")
@@ -101,6 +101,15 @@ class S3(AWSService):
if "MFADelete" in bucket_versioning:
if "Enabled" == bucket_versioning["MFADelete"]:
bucket.mfa_delete = True
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchBucket":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
if bucket.region:
logger.error(
@@ -153,6 +162,15 @@ class S3(AWSService):
bucket.logging_target_bucket = bucket_logging["LoggingEnabled"][
"TargetBucket"
]
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchBucket":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
if regional_client:
logger.error(
@@ -224,6 +242,15 @@ class S3(AWSService):
grantee.permission = grant["Permission"]
grantees.append(grantee)
bucket.acl_grantees = grantees
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchBucket":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
if regional_client:
logger.error(
@@ -241,6 +268,15 @@ class S3(AWSService):
bucket.policy = json.loads(
regional_client.get_bucket_policy(Bucket=bucket.name)["Policy"]
)
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchBucket":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
if "NoSuchBucketPolicy" in str(error):
bucket.policy = {}

View File

@@ -1,6 +1,6 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
is_account_only_allowed_in_condition,
is_condition_block_restrictive,
)
from prowler.providers.aws.services.sns.sns_client import sns_client
@@ -35,7 +35,7 @@ class sns_topics_not_publicly_accessible(Check):
):
if (
"Condition" in statement
and is_account_only_allowed_in_condition(
and is_condition_block_restrictive(
statement["Condition"], sns_client.audited_account
)
):

View File

@@ -1,6 +1,6 @@
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
is_account_only_allowed_in_condition,
is_condition_block_restrictive,
)
from prowler.providers.aws.services.sqs.sqs_client import sqs_client
@@ -32,8 +32,10 @@ class sqs_queues_not_publicly_accessible(Check):
)
):
if "Condition" in statement:
if is_account_only_allowed_in_condition(
statement["Condition"], sqs_client.audited_account
if is_condition_block_restrictive(
statement["Condition"],
sqs_client.audited_account,
True,
):
report.status_extended = f"SQS queue {queue.id} is not public because its policy only allows access from the same account."
else:

View File

@@ -34,9 +34,9 @@ class TrustedAdvisor(AWSService):
def __describe_trusted_advisor_checks__(self):
logger.info("TrustedAdvisor - Describing Checks...")
try:
for check in self.client.describe_trusted_advisor_checks(language="en")[
"checks"
]:
for check in self.client.describe_trusted_advisor_checks(language="en").get(
"checks", []
):
self.checks.append(
Check(
id=check["id"],

View File

@@ -5,22 +5,23 @@ from prowler.providers.aws.services.vpc.vpc_client import vpc_client
class vpc_different_regions(Check):
def execute(self):
findings = []
vpc_regions = set()
for vpc in vpc_client.vpcs.values():
if not vpc.default:
vpc_regions.add(vpc.region)
if len(vpc_client.vpcs) > 0:
vpc_regions = set()
for vpc in vpc_client.vpcs.values():
if not vpc.default:
vpc_regions.add(vpc.region)
report = Check_Report_AWS(self.metadata())
# This is a global check under the vpc service: region, resource_id and tags are not relevant here but we keep them for consistency
report.region = vpc_client.region
report.resource_id = vpc_client.audited_account
report.resource_arn = vpc_client.audited_account_arn
report.status = "FAIL"
report.status_extended = "VPCs found only in one region."
if len(vpc_regions) > 1:
report.status = "PASS"
report.status_extended = "VPCs found in more than one region."
report = Check_Report_AWS(self.metadata())
report.region = vpc_client.region
report.resource_id = vpc_client.audited_account
report.resource_arn = vpc_client.audited_account_arn
findings.append(report)
report.status = "FAIL"
report.status_extended = "VPCs found only in one region."
if len(vpc_regions) > 1:
report.status = "PASS"
report.status_extended = "VPCs found in more than one region."
findings.append(report)
return findings
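The reordered `vpc_different_regions` check above only emits a finding when at least one VPC exists, and counts only non-default VPCs toward the multi-region requirement. The core decision can be sketched like this (stand-in objects instead of Prowler's VPC model):

```python
from types import SimpleNamespace

def vpc_region_status(vpcs):
    # Only non-default VPCs count toward the multi-region requirement
    regions = {vpc.region for vpc in vpcs if not vpc.default}
    if not vpcs:
        return None  # no VPCs at all -> no finding emitted
    return "PASS" if len(regions) > 1 else "FAIL"

vpcs = [
    SimpleNamespace(region="eu-west-1", default=False),
    SimpleNamespace(region="us-east-1", default=False),
]
print(vpc_region_status(vpcs))
print(vpc_region_status([]))
```

Two non-default VPCs in distinct regions pass; an account with no VPCs now produces no finding rather than a spurious FAIL.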

View File

@@ -2,7 +2,7 @@ from re import compile
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.lib.policy_condition_parser.policy_condition_parser import (
is_account_only_allowed_in_condition,
is_condition_block_restrictive,
)
from prowler.providers.aws.services.vpc.vpc_client import vpc_client
@@ -35,7 +35,7 @@ class vpc_endpoint_connections_trust_boundaries(Check):
if "Condition" in statement:
for account_id in trusted_account_ids:
if is_account_only_allowed_in_condition(
if is_condition_block_restrictive(
statement["Condition"], account_id
):
access_from_trusted_accounts = True
@@ -70,7 +70,7 @@ class vpc_endpoint_connections_trust_boundaries(Check):
access_from_trusted_accounts = False
if "Condition" in statement:
for account_id in trusted_account_ids:
if is_account_only_allowed_in_condition(
if is_condition_block_restrictive(
statement["Condition"], account_id
):
access_from_trusted_accounts = True
@@ -102,7 +102,7 @@ class vpc_endpoint_connections_trust_boundaries(Check):
if "Condition" in statement:
for account_id in trusted_account_ids:
if is_account_only_allowed_in_condition(
if is_condition_block_restrictive(
statement["Condition"], account_id
):
access_from_trusted_accounts = True

View File

@@ -8,6 +8,7 @@ from prowler.lib.logger import logger
from prowler.providers.aws.aws_provider import (
AWS_Provider,
assume_role,
get_aws_enabled_regions,
get_checks_from_input_arn,
get_regions_from_audit_resources,
)
@@ -269,6 +270,9 @@ Azure Identity Type: {Fore.YELLOW}[{audit_info.identity.identity_type}]{Style.RE
if arguments.get("resource_arn"):
current_audit_info.audit_resources = arguments.get("resource_arn")
# Get Enabled Regions
current_audit_info.enabled_regions = get_aws_enabled_regions(current_audit_info)
return current_audit_info
def set_aws_execution_parameters(self, provider, audit_info) -> list[str]:

View File

@@ -154,6 +154,7 @@ class Aws_Output_Options(Provider_Output_Options):
# Security Hub Outputs
self.security_hub_enabled = arguments.security_hub
self.send_sh_only_fails = arguments.send_sh_only_fails
if arguments.security_hub:
if not self.output_modes:
self.output_modes = ["json-asff"]

View File

@@ -1,6 +1,7 @@
import os
import sys
from colorama import Fore, Style
from google import auth
from googleapiclient import discovery
@@ -89,4 +90,7 @@ class GCP_Provider:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
print(
f"\n{Fore.YELLOW}Cloud Resource Manager API {Style.RESET_ALL}has not been used before or it is disabled.\nEnable it by visiting https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/ then retry."
)
return []

View File

@@ -9,26 +9,29 @@ from googleapiclient.discovery import Resource
from prowler.lib.logger import logger
from prowler.providers.gcp.gcp_provider_new import GcpProvider
from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info
class GCPService:
def __init__(
self,
service: str,
provider: GcpProvider,
audit_info: GCP_Audit_Info,
region="global",
api_version="v1",
):
# We receive the service using __class__.__name__ or the service name in lowercase
# e.g.: APIKeys --> we need a lowercase string, so service.lower()
self.service = service.lower() if not service.islower() else service
self.credentials = provider.session
self.credentials = audit_info.credentials
self.api_version = api_version
self.default_project_id = provider.default_project_id
self.default_project_id = audit_info.default_project_id
self.region = region
self.client = self.__generate_client__(service, api_version, self.credentials)
self.client = self.__generate_client__(
self.service, api_version, audit_info.credentials
)
# Only project ids that have their API enabled will be scanned
self.project_ids = self.__is_api_active__(provider.project_ids)
self.project_ids = self.__is_api_active__(audit_info.project_ids)
def __get_client__(self):
return self.client
@@ -60,7 +63,7 @@ class GCPService:
project_ids.append(project_id)
else:
print(
f"\n{Fore.YELLOW}{self.service} API {Style.RESET_ALL}has not been used in project {project_id} before or it is disabled.\nEnable it by visiting https://console.developers.google.com/apis/api/dataproc.googleapis.com/overview?project={project_id} then retry."
f"\n{Fore.YELLOW}{self.service} API {Style.RESET_ALL}has not been used in project {project_id} before or it is disabled.\nEnable it by visiting https://console.developers.google.com/apis/api/{self.service}.googleapis.com/overview?project={project_id} then retry."
)
except Exception as error:
logger.error(

View File

@@ -38,39 +38,41 @@ boto3 = "1.26.165"
botocore = "1.29.165"
colorama = "0.4.6"
detect-secrets = "1.4.0"
google-api-python-client = "2.108.0"
google-auth-httplib2 = "^0.1.0"
jsonschema = "4.18.0"
kubernetes = "^28.1.0"
google-api-python-client = "2.110.0"
google-auth-httplib2 = ">=0.1,<0.3"
jsonschema = "4.20.0"
mkdocs = {version = "1.5.3", optional = true}
mkdocs-material = {version = "9.4.10", optional = true}
kubernetes = "^28.1.0"
mkdocs-material = {version = "9.5.2", optional = true}
msgraph-core = "0.2.2"
msrestazure = "^0.6.4"
pydantic = "1.10.13"
python = ">=3.9,<3.12"
rich = "^13.7.0"
schema = "0.7.5"
shodan = "1.30.1"
slack-sdk = "3.24.0"
slack-sdk = "3.26.1"
tabulate = "0.9.0"
[tool.poetry.extras]
docs = ["mkdocs", "mkdocs-material"]
[tool.poetry.group.dev.dependencies]
bandit = "1.7.5"
bandit = "1.7.6"
black = "22.12.0"
coverage = "7.3.2"
docker = "6.1.3"
coverage = "7.3.3"
docker = "7.0.0"
flake8 = "6.1.0"
freezegun = "1.2.2"
freezegun = "1.3.1"
mock = "5.1.0"
moto = {extras = ["all"], version = "4.2.9"}
moto = {extras = ["all"], version = "4.2.12"}
openapi-schema-validator = "0.6.2"
openapi-spec-validator = "0.7.1"
pylint = "3.0.2"
pylint = "3.0.3"
pytest = "7.4.3"
pytest-cov = "4.1.0"
pytest-randomly = "3.15.0"
pytest-xdist = "3.4.0"
pytest-xdist = "3.5.0"
safety = "2.3.5"
vulture = "2.10"

View File

@@ -0,0 +1,319 @@
from mock import patch
from prowler.lib.check.checks_loader import (
load_checks_to_execute,
update_checks_to_execute_with_aliases,
)
from prowler.lib.check.models import (
Check_Metadata_Model,
Code,
Recommendation,
Remediation,
)
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME = "s3_bucket_level_public_access_block"
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_CUSTOM_ALIAS = (
"s3_bucket_level_public_access_block"
)
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_SEVERITY = "medium"
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_SERVICE = "s3"
class TestCheckLoader:
provider = "aws"
def get_custom_check_metadata(self):
return Check_Metadata_Model(
Provider="aws",
CheckID=S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME,
CheckTitle="Check S3 Bucket Level Public Access Block.",
CheckType=["Data Protection"],
CheckAliases=[S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_CUSTOM_ALIAS],
ServiceName=S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_SERVICE,
SubServiceName="",
ResourceIdTemplate="arn:partition:s3:::bucket_name",
Severity=S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_SEVERITY,
ResourceType="AwsS3Bucket",
Description="Check S3 Bucket Level Public Access Block.",
Risk="Public access policies may be applied to sensitive data buckets.",
RelatedUrl="https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html",
Remediation=Remediation(
Code=Code(
NativeIaC="",
Terraform="https://docs.bridgecrew.io/docs/bc_aws_s3_20#terraform",
CLI="aws s3api put-public-access-block --region <REGION_NAME> --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true --bucket <BUCKET_NAME>",
Other="https://github.com/cloudmatos/matos/tree/master/remediations/aws/s3/s3/block-public-access",
),
Recommendation=Recommendation(
Text="You can enable Public Access Block at the bucket level to prevent the exposure of your data stored in S3.",
Url="https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html",
),
),
Categories=["internet-exposed"],
DependsOn=[],
RelatedTo=[],
Notes="",
Compliance=[],
)
def test_load_checks_to_execute(self):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = None
service_list = None
severities = None
compliance_frameworks = None
categories = None
with patch(
"prowler.lib.check.checks_loader.recover_checks_from_provider",
return_value=[
(
f"{S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME}",
"path/to/{S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME}",
)
],
):
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_check_list(self):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME]
service_list = None
severities = None
compliance_frameworks = None
categories = None
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_severities(self):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = None
severities = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_SEVERITY]
compliance_frameworks = None
categories = None
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_severities_and_services(self):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_SERVICE]
severities = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_SEVERITY]
compliance_frameworks = None
categories = None
with patch(
"prowler.lib.check.checks_loader.recover_checks_from_service",
return_value={S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME},
):
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_severities_and_services_not_within_severity(
self,
):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = ["ec2"]
severities = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_SEVERITY]
compliance_frameworks = None
categories = None
with patch(
"prowler.lib.check.checks_loader.recover_checks_from_service",
return_value={"ec2_ami_public"},
):
assert set() == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_checks_file(
self,
):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = "path/to/test_file"
check_list = []
service_list = []
severities = []
compliance_frameworks = None
categories = None
with patch(
"prowler.lib.check.checks_loader.parse_checks_from_file",
return_value={S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME},
):
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_service_list(
self,
):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = [S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME_SERVICE]
severities = []
compliance_frameworks = None
categories = None
with patch(
"prowler.lib.check.checks_loader.recover_checks_from_service",
return_value={S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME},
):
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_compliance_frameworks(
self,
):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = []
severities = []
compliance_frameworks = ["test-compliance-framework"]
categories = None
with patch(
"prowler.lib.check.checks_loader.parse_checks_from_compliance_framework",
return_value={S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME},
):
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_load_checks_to_execute_with_categories(
self,
):
bulk_checks_metatada = {
S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME: self.get_custom_check_metadata()
}
bulk_compliance_frameworks = None
checks_file = None
check_list = []
service_list = []
severities = []
compliance_frameworks = []
categories = {"internet-exposed"}
assert {S3_BUCKET_LEVEL_PUBLIC_ACCESS_BLOCK_NAME} == load_checks_to_execute(
bulk_checks_metatada,
bulk_compliance_frameworks,
checks_file,
check_list,
service_list,
severities,
compliance_frameworks,
categories,
self.provider,
)
def test_update_checks_to_execute_with_aliases(self):
checks_to_execute = {"renamed_check"}
check_aliases = {"renamed_check": "check_name"}
assert {"check_name"} == update_checks_to_execute_with_aliases(
checks_to_execute, check_aliases
)

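The last test above exercises alias resolution: a requested check name that is really an alias gets replaced by its canonical name. A minimal sketch of that logic (an illustrative re-implementation, not Prowler's actual `update_checks_to_execute_with_aliases`):

```python
def update_checks_with_aliases(checks_to_execute: set, check_aliases: dict) -> set:
    """Resolve aliases in a set of requested check names.

    check_aliases maps alias -> canonical check name; names that are not
    aliases pass through unchanged.
    """
    resolved = set()
    for check in checks_to_execute:
        # Fall back to the name itself when it is not a known alias
        resolved.add(check_aliases.get(check, check))
    return resolved
```

With the same inputs as the test, `{"renamed_check"}` resolves to `{"check_name"}`, while already-canonical names are left untouched.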

@@ -3,7 +3,7 @@ import pathlib
from importlib.machinery import FileFinder
from pkgutil import ModuleInfo
from boto3 import client, session
from boto3 import client
from fixtures.bulk_checks_metadata import test_bulk_checks_metadata
from mock import patch
from moto import mock_s3
@@ -27,8 +27,7 @@ from prowler.providers.aws.aws_provider import (
get_checks_from_input_arn,
get_regions_from_audit_resources,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
from tests.providers.aws.audit_info_utils import set_mocked_aws_audit_info
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_REGION = "us-east-1"
@@ -257,37 +256,11 @@ def mock_recover_checks_from_aws_provider_rds_service(*_):
]
class Test_Check:
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
def mock_recover_checks_from_aws_provider_cognito_service(*_):
return []
class Test_Check:
def test_load_check_metadata(self):
test_cases = [
{
@@ -363,7 +336,7 @@ class Test_Check:
provider = test["input"]["provider"]
assert (
parse_checks_from_folder(
self.set_mocked_audit_info(), check_folder, provider
set_mocked_aws_audit_info(), check_folder, provider
)
== test["expected"]
)
@@ -596,6 +569,19 @@ class Test_Check:
recovered_checks = get_checks_from_input_arn(audit_resources, provider)
assert recovered_checks == expected_checks
@patch(
"prowler.lib.check.check.recover_checks_from_provider",
new=mock_recover_checks_from_aws_provider_cognito_service,
)
def test_get_checks_from_input_arn_cognito(self):
audit_resources = [
f"arn:aws:cognito-idp:us-east-1:{AWS_ACCOUNT_NUMBER}:userpool/test"
]
provider = "aws"
expected_checks = []
recovered_checks = get_checks_from_input_arn(audit_resources, provider)
assert recovered_checks == expected_checks
@patch(
"prowler.lib.check.check.recover_checks_from_provider",
new=mock_recover_checks_from_aws_provider_ec2_service,


@@ -5,6 +5,7 @@ import pytest
from mock import patch
from prowler.lib.cli.parser import ProwlerArgumentParser
from prowler.providers.aws.lib.arguments.arguments import validate_bucket
from prowler.providers.azure.lib.arguments.arguments import validate_azure_region
prowler_command = "prowler"
@@ -915,6 +916,12 @@ class Test_Parser:
parsed = self.parser.parse(command)
assert parsed.skip_sh_update
def test_aws_parser_send_only_fail(self):
argument = "--send-sh-only-fails"
command = [prowler_command, argument]
parsed = self.parser.parse(command)
assert parsed.send_sh_only_fails
def test_aws_parser_quick_inventory_short(self):
argument = "-i"
command = [prowler_command, argument]
@@ -1174,3 +1181,28 @@ class Test_Parser:
match=f"Region {invalid_region} not allowed, allowed regions are {' '.join(expected_regions)}",
):
validate_azure_region(invalid_region)
def test_validate_bucket_invalid_bucket_names(self):
bad_bucket_names = [
"xn--bucket-name",
"mrryadfpcwlscicvnrchmtmyhwrvzkgfgdxnlnvaaummnywciixnzvycnzmhhpwb",
"192.168.5.4",
"bucket-name-s3alias",
"bucket-name-s3alias-",
"bucket-n$ame",
"bu",
]
for bucket_name in bad_bucket_names:
with pytest.raises(ArgumentTypeError) as argument_error:
validate_bucket(bucket_name)
assert argument_error.type == ArgumentTypeError
assert (
argument_error.value.args[0]
== "Bucket name must be valid (https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html)"
)
def test_validate_bucket_valid_bucket_names(self):
valid_bucket_names = ["bucket-name" "test" "test-test-test"]
for bucket_name in valid_bucket_names:
assert validate_bucket(bucket_name) == bucket_name

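The `validate_bucket` tests above cover a handful of the S3 bucket-naming rules. A sketch of a predicate covering just the cases those tests exercise (a hypothetical helper, not Prowler's actual implementation; the full rule set is in the AWS bucket-naming documentation linked in the error message):

```python
import re


def is_valid_bucket_name(name: str) -> bool:
    """Check a bucket name against a subset of the AWS S3 naming rules."""
    # Length must be between 3 and 63 characters
    if not 3 <= len(name) <= 63:
        return False
    # Only lowercase letters, digits, dots and hyphens,
    # starting and ending with a letter or digit
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    # Must not be formatted like an IP address
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    # Reserved prefix and suffix
    if name.startswith("xn--") or name.endswith("-s3alias"):
        return False
    return True
```

Under these rules every name in `bad_bucket_names` is rejected ("bucket-name-s3alias-" falls to the trailing-hyphen rule rather than the suffix rule) and every name in `valid_bucket_names` is accepted.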

@@ -1,48 +1,86 @@
from boto3 import session
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
AWS_REGION_US_EAST_1 = "us-east-1"
AWS_REGION_EU_WEST_1 = "eu-west-1"
AWS_REGION_EU_WEST_2 = "eu-west-2"
AWS_PARTITION = "aws"
# Root AWS Account
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_ACCOUNT_ARN = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
AWS_COMMERCIAL_PARTITION = "aws"
# Commercial Regions
AWS_REGION_US_EAST_1 = "us-east-1"
AWS_REGION_US_EAST_1_AZA = "us-east-1a"
AWS_REGION_US_EAST_1_AZB = "us-east-1b"
AWS_REGION_EU_WEST_1 = "eu-west-1"
AWS_REGION_EU_WEST_1_AZA = "eu-west-1a"
AWS_REGION_EU_WEST_1_AZB = "eu-west-1b"
AWS_REGION_EU_WEST_2 = "eu-west-2"
AWS_REGION_CN_NORTHWEST_1 = "cn-northwest-1"
AWS_REGION_CN_NORTH_1 = "cn-north-1"
AWS_REGION_EU_SOUTH_2 = "eu-south-2"
AWS_REGION_US_WEST_2 = "us-west-2"
AWS_REGION_US_EAST_2 = "us-east-2"
# China Regions
AWS_REGION_CHINA_NORHT_1 = "cn-north-1"
# Gov Cloud Regions
AWS_REGION_GOV_CLOUD_US_EAST_1 = "us-gov-east-1"
# Iso Regions
AWS_REGION_ISO_GLOBAL = "aws-iso-global"
# AWS Partitions
AWS_COMMERCIAL_PARTITION = "aws"
AWS_GOV_CLOUD_PARTITION = "aws-us-gov"
AWS_CHINA_PARTITION = "aws-cn"
AWS_ISO_PARTITION = "aws-iso"
# Mocked AWS Audit Info
def set_mocked_aws_audit_info(
audited_regions: [str] = [],
audited_account: str = AWS_ACCOUNT_NUMBER,
audited_account_arn: str = AWS_ACCOUNT_ARN,
audited_partition: str = AWS_COMMERCIAL_PARTITION,
expected_checks: [str] = [],
profile_region: str = None,
audit_config: dict = {},
ignore_unused_services: bool = False,
assumed_role_info: AWS_Assume_Role = None,
audit_session: session.Session = session.Session(
profile_name=None,
botocore_session=None,
),
original_session: session.Session = None,
enabled_regions: set = None,
):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
original_session=original_session,
audit_session=audit_session,
audited_account=audited_account,
audited_account_arn=audited_account_arn,
audited_user_id=None,
audited_partition=AWS_PARTITION,
audited_partition=audited_partition,
audited_identity_arn=None,
profile=None,
profile_region=None,
profile_region=profile_region,
credentials=None,
assumed_role_info=None,
assumed_role_info=assumed_role_info,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
audit_resources=[],
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
expected_checks=expected_checks,
completed_checks=0,
audit_progress=0,
),
audit_config=audit_config,
ignore_unused_services=ignore_unused_services,
enabled_regions=enabled_regions if enabled_regions else set(audited_regions),
)
return audit_info

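The refactor above replaces per-test `AWS_Audit_Info(...)` literals with one parameterized factory whose defaults each test overrides selectively (note `enabled_regions` defaulting to the audited regions). A minimal sketch of that factory pattern, using illustrative names rather than Prowler's types, and `default_factory` to avoid the shared-mutable-default pitfall:

```python
from dataclasses import dataclass, field


@dataclass
class AuditInfo:
    # Stand-in for AWS_Audit_Info; field names are hypothetical
    audited_account: str = "123456789012"
    audited_partition: str = "aws"
    audited_regions: list = field(default_factory=list)
    enabled_regions: set = field(default_factory=set)


def make_audit_info(
    audited_regions=None,
    audited_partition="aws",
    enabled_regions=None,
):
    """Factory with sensible defaults so tests override only what they assert on."""
    regions = audited_regions if audited_regions is not None else []
    return AuditInfo(
        audited_partition=audited_partition,
        audited_regions=regions,
        # Mirror the helper above: enabled regions default to the audited ones
        enabled_regions=enabled_regions if enabled_regions is not None else set(regions),
    )
```

A test that only cares about the partition can then call `make_audit_info(audited_partition="aws-cn")` and ignore every other field.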

@@ -12,21 +12,29 @@ from prowler.providers.aws.aws_provider import (
get_default_region,
get_global_region,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
ACCOUNT_ID = 123456789012
AWS_REGION = "us-east-1"
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_NUMBER,
AWS_CHINA_PARTITION,
AWS_GOV_CLOUD_PARTITION,
AWS_ISO_PARTITION,
AWS_REGION_CHINA_NORHT_1,
AWS_REGION_EU_WEST_1,
AWS_REGION_GOV_CLOUD_US_EAST_1,
AWS_REGION_ISO_GLOBAL,
AWS_REGION_US_EAST_1,
AWS_REGION_US_EAST_2,
set_mocked_aws_audit_info,
)
class Test_AWS_Provider:
@mock_iam
@mock_sts
def test_aws_provider_user_without_mfa(self):
audited_regions = ["eu-west-1"]
# sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION)
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -38,44 +46,27 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION,
region_name=AWS_REGION_US_EAST_1,
)
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
assumed_role_info=AWS_Assume_Role(
role_arn=None,
session_duration=None,
external_id=None,
mfa_enabled=False,
),
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
original_session=session,
)
# Call assume_role
with patch(
"prowler.providers.aws.aws_provider.input_role_mfa_token_and_code",
return_value=(f"arn:aws:iam::{ACCOUNT_ID}:mfa/test-role-mfa", "111111"),
return_value=(
f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:mfa/test-role-mfa",
"111111",
),
):
aws_provider = AWS_Provider(audit_info)
assert aws_provider.aws_session.region_name is None
@@ -89,9 +80,8 @@ class Test_AWS_Provider:
@mock_iam
@mock_sts
def test_aws_provider_user_with_mfa(self):
audited_regions = "eu-west-1"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION)
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -103,38 +93,28 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION,
region_name=AWS_REGION_US_EAST_1,
)
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=AWS_REGION,
credentials=None,
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
assumed_role_info=AWS_Assume_Role(
role_arn=None,
session_duration=None,
external_id=None,
mfa_enabled=False,
),
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=True,
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
)
# # Call assume_role
# Call assume_role
with patch(
"prowler.providers.aws.aws_provider.input_role_mfa_token_and_code",
return_value=(f"arn:aws:iam::{ACCOUNT_ID}:mfa/test-role-mfa", "111111"),
return_value=(
f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:mfa/test-role-mfa",
"111111",
),
):
aws_provider = AWS_Provider(audit_info)
assert aws_provider.aws_session.region_name is None
@@ -150,12 +130,12 @@ class Test_AWS_Provider:
def test_aws_provider_assume_role_with_mfa(self):
# Variables
role_name = "test-role"
role_arn = f"arn:aws:iam::{ACCOUNT_ID}:role/{role_name}"
role_arn = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:role/{role_name}"
session_duration_seconds = 900
audited_regions = ["eu-west-1"]
sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION)
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -167,46 +147,29 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION,
region_name=AWS_REGION_US_EAST_1,
)
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
assumed_role_info=AWS_Assume_Role(
role_arn=role_arn,
session_duration=session_duration_seconds,
external_id=None,
mfa_enabled=True,
),
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
)
# Call assume_role
aws_provider = AWS_Provider(audit_info)
# Patch MFA
with patch(
"prowler.providers.aws.aws_provider.input_role_mfa_token_and_code",
return_value=(f"arn:aws:iam::{ACCOUNT_ID}:mfa/test-role-mfa", "111111"),
return_value=(
f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:mfa/test-role-mfa",
"111111",
),
):
assume_role_response = assume_role(
aws_provider.aws_session, aws_provider.role_info
@@ -225,7 +188,7 @@ class Test_AWS_Provider:
# Assumed Role
assert (
assume_role_response["AssumedRoleUser"]["Arn"]
== f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
== f"arn:aws:sts::{AWS_ACCOUNT_NUMBER}:assumed-role/{role_name}/{sessionName}"
)
# AssumedRoleUser
@@ -245,12 +208,12 @@ class Test_AWS_Provider:
def test_aws_provider_assume_role_without_mfa(self):
# Variables
role_name = "test-role"
role_arn = f"arn:aws:iam::{ACCOUNT_ID}:role/{role_name}"
role_arn = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:role/{role_name}"
session_duration_seconds = 900
audited_regions = "eu-west-1"
sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION)
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -262,41 +225,21 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION,
region_name=AWS_REGION_US_EAST_1,
)
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
assumed_role_info=AWS_Assume_Role(
role_arn=role_arn,
session_duration=session_duration_seconds,
external_id=None,
mfa_enabled=False,
),
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
)
# Call assume_role
aws_provider = AWS_Provider(audit_info)
assume_role_response = assume_role(
aws_provider.aws_session, aws_provider.role_info
@@ -315,7 +258,7 @@ class Test_AWS_Provider:
# Assumed Role
assert (
assume_role_response["AssumedRoleUser"]["Arn"]
== f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
== f"arn:aws:sts::{AWS_ACCOUNT_NUMBER}:assumed-role/{role_name}/{sessionName}"
)
# AssumedRoleUser
@@ -335,14 +278,14 @@ class Test_AWS_Provider:
def test_assume_role_with_sts_endpoint_region(self):
# Variables
role_name = "test-role"
role_arn = f"arn:aws:iam::{ACCOUNT_ID}:role/{role_name}"
role_arn = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:role/{role_name}"
session_duration_seconds = 900
aws_region = "eu-west-1"
sts_endpoint_region = aws_region
audited_regions = [aws_region]
AWS_REGION_US_EAST_1 = AWS_REGION_EU_WEST_1
sts_endpoint_region = AWS_REGION_US_EAST_1
sessionName = "ProwlerAsessmentSession"
# Boto 3 client to create our user
iam_client = boto3.client("iam", region_name=AWS_REGION)
iam_client = boto3.client("iam", region_name=AWS_REGION_US_EAST_1)
# IAM user
iam_user = iam_client.create_user(UserName="test-user")["User"]
access_key = iam_client.create_access_key(UserName=iam_user["UserName"])[
@@ -354,41 +297,21 @@ class Test_AWS_Provider:
session = boto3.session.Session(
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name=AWS_REGION,
region_name=AWS_REGION_US_EAST_1,
)
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=session,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition=None,
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
assumed_role_info=AWS_Assume_Role(
role_arn=role_arn,
session_duration=session_duration_seconds,
external_id=None,
mfa_enabled=False,
),
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
original_session=session,
profile_region=AWS_REGION_US_EAST_1,
)
# Call assume_role
aws_provider = AWS_Provider(audit_info)
assume_role_response = assume_role(
aws_provider.aws_session, aws_provider.role_info, sts_endpoint_region
@@ -407,7 +330,7 @@ class Test_AWS_Provider:
# Assumed Role
assert (
assume_role_response["AssumedRoleUser"]["Arn"]
== f"arn:aws:sts::{ACCOUNT_ID}:assumed-role/{role_name}/{sessionName}"
== f"arn:aws:sts::{AWS_ACCOUNT_NUMBER}:assumed-role/{role_name}/{sessionName}"
)
# AssumedRoleUser
@@ -423,368 +346,78 @@ class Test_AWS_Provider:
) == 21 + 1 + len(sessionName)
def test_generate_regional_clients(self):
# New Boto3 session with the previously create user
session = boto3.session.Session(
region_name=AWS_REGION,
)
audited_regions = ["eu-west-1", AWS_REGION]
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions = [AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
audit_info = set_mocked_aws_audit_info(
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
audit_session=boto3.session.Session(
region_name=AWS_REGION_US_EAST_1,
),
enabled_regions=audited_regions,
)
generate_regional_clients_response = generate_regional_clients(
"ec2", audit_info
)
assert set(generate_regional_clients_response.keys()) == set(audited_regions)
def test_generate_regional_clients_global_service(self):
# New Boto3 session with the previously create user
session = boto3.session.Session(
region_name=AWS_REGION,
)
audited_regions = ["eu-west-1", AWS_REGION]
profile_region = AWS_REGION
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=profile_region,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
generate_regional_clients_response = generate_regional_clients(
"route53", audit_info, global_service=True
)
assert list(generate_regional_clients_response.keys()) == [profile_region]
def test_generate_regional_clients_cn_partition(self):
# New Boto3 session with the previously create user
session = boto3.session.Session(
region_name=AWS_REGION,
)
audited_regions = ["cn-northwest-1", "cn-north-1"]
# Fulfil the input session object for Prowler
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session,
audited_account=None,
audited_account_arn=None,
audited_partition="aws-cn",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audit_info = set_mocked_aws_audit_info(
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
audit_session=boto3.session.Session(
region_name=AWS_REGION_US_EAST_1,
),
enabled_regions=audited_regions,
)
generate_regional_clients_response = generate_regional_clients(
"shield", audit_info, global_service=True
"shield", audit_info
)
# Shield does not exist in China
assert generate_regional_clients_response == {}
def test_get_default_region(self):
audited_regions = ["eu-west-1"]
profile_region = "eu-west-1"
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=profile_region,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
audit_info = set_mocked_aws_audit_info(
profile_region=AWS_REGION_EU_WEST_1,
audited_regions=[AWS_REGION_EU_WEST_1],
)
assert get_default_region("ec2", audit_info) == "eu-west-1"
assert get_default_region("ec2", audit_info) == AWS_REGION_EU_WEST_1
def test_get_default_region_profile_region_not_audited(self):
audited_regions = ["eu-west-1"]
profile_region = "us-east-2"
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=profile_region,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
audit_info = set_mocked_aws_audit_info(
profile_region=AWS_REGION_US_EAST_2,
audited_regions=[AWS_REGION_EU_WEST_1],
)
assert get_default_region("ec2", audit_info) == "eu-west-1"
assert get_default_region("ec2", audit_info) == AWS_REGION_EU_WEST_1
def test_get_default_region_non_profile_region(self):
audited_regions = ["eu-west-1"]
profile_region = None
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=profile_region,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1],
)
assert get_default_region("ec2", audit_info) == "eu-west-1"
assert get_default_region("ec2", audit_info) == AWS_REGION_EU_WEST_1
def test_get_default_region_non_profile_or_audited_region(self):
audited_regions = None
profile_region = None
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=profile_region,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_default_region("ec2", audit_info) == "us-east-1"
def test_aws_get_global_region(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_default_region("ec2", audit_info) == "us-east-1"
audit_info = set_mocked_aws_audit_info()
assert get_default_region("ec2", audit_info) == AWS_REGION_US_EAST_1
def test_aws_gov_get_global_region(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws-us-gov",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
audit_info = set_mocked_aws_audit_info(
audited_partition=AWS_GOV_CLOUD_PARTITION
)
assert get_global_region(audit_info) == "us-gov-east-1"
assert get_global_region(audit_info) == AWS_REGION_GOV_CLOUD_US_EAST_1
def test_aws_cn_get_global_region(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws-cn",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_global_region(audit_info) == "cn-north-1"
audit_info = set_mocked_aws_audit_info(audited_partition=AWS_CHINA_PARTITION)
assert get_global_region(audit_info) == AWS_REGION_CHINA_NORHT_1
def test_aws_iso_get_global_region(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws-iso",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
assert get_global_region(audit_info) == "aws-iso-global"
audit_info = set_mocked_aws_audit_info(audited_partition=AWS_ISO_PARTITION)
assert get_global_region(audit_info) == AWS_REGION_ISO_GLOBAL
def test_get_available_aws_service_regions_with_us_east_1_audited(self):
audited_regions = ["us-east-1"]
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=audited_regions,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
audit_info = set_mocked_aws_audit_info(audited_regions=[AWS_REGION_US_EAST_1])
with patch(
"prowler.providers.aws.aws_provider.parse_json_file",
return_value={
@@ -799,7 +432,7 @@ class Test_AWS_Provider:
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
AWS_REGION_EU_WEST_1,
"eu-west-2",
"eu-west-3",
"me-central-1",
@@ -815,33 +448,13 @@ class Test_AWS_Provider:
}
},
):
assert get_available_aws_service_regions("ec2", audit_info) == ["us-east-1"]
assert get_available_aws_service_regions("ec2", audit_info) == {
AWS_REGION_US_EAST_1
}
def test_get_available_aws_service_regions_with_all_regions_audited(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=None,
audited_account=None,
audited_account_arn=None,
audited_partition="aws",
audited_identity_arn=None,
audited_user_id=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
audit_info = set_mocked_aws_audit_info()
with patch(
"prowler.providers.aws.aws_provider.parse_json_file",
return_value={
@@ -856,7 +469,7 @@ class Test_AWS_Provider:
"eu-north-1",
"eu-south-1",
"eu-south-2",
"eu-west-1",
AWS_REGION_EU_WEST_1,
"eu-west-2",
"eu-west-3",
"me-central-1",

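The hunks above replace each test's verbose inline `AWS_Audit_Info(...)` construction with the shared `set_mocked_aws_audit_info` helper from `tests/providers/aws/audit_info_utils.py`. A minimal self-contained sketch of what such a fixture factory might look like, with a plain dataclass standing in for prowler's `AWS_Audit_Info` (the defaults and the positional `audited_regions` parameter are inferred from the call sites in the diff, not copied from the real helper):

```python
from dataclasses import dataclass
from typing import List, Optional

# Constants mirroring tests/providers/aws/audit_info_utils.py
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_COMMERCIAL_PARTITION = "aws"
AWS_REGION_US_EAST_1 = "us-east-1"


@dataclass
class MockAuditInfo:
    # Stand-in for prowler's AWS_Audit_Info; only the fields the tests
    # above actually assert on are modeled here.
    audited_account: str = AWS_ACCOUNT_NUMBER
    audited_partition: str = AWS_COMMERCIAL_PARTITION
    audited_regions: Optional[List[str]] = None
    profile_region: Optional[str] = None


def set_mocked_aws_audit_info(
    audited_regions: Optional[List[str]] = None,
    audited_partition: str = AWS_COMMERCIAL_PARTITION,
) -> MockAuditInfo:
    # One place to build a mocked audit_info with sensible defaults,
    # instead of repeating a ~20-field constructor in every test.
    return MockAuditInfo(
        audited_partition=audited_partition,
        audited_regions=audited_regions,
    )
```

This centralization is what lets the diff delete the repeated `set_mocked_audit_info` methods from every service test class below.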
View File

@@ -1,5 +1,5 @@
import yaml
from boto3 import resource, session
from boto3 import resource
from mock import MagicMock
from moto import mock_dynamodb, mock_s3
@@ -19,6 +19,7 @@ from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_NUMBER,
AWS_REGION_EU_WEST_1,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)

View File

@@ -21,6 +21,49 @@ from tests.providers.aws.audit_info_utils import (
set_mocked_aws_audit_info,
)
def get_security_hub_finding(status: str):
return {
"SchemaVersion": "2018-10-08",
"Id": f"prowler-iam_user_accesskey_unused-{AWS_ACCOUNT_NUMBER}-{AWS_REGION_EU_WEST_1}-ee26b0dd4",
"ProductArn": f"arn:aws:securityhub:{AWS_REGION_EU_WEST_1}::product/prowler/prowler",
"RecordState": "ACTIVE",
"ProductFields": {
"ProviderName": "Prowler",
"ProviderVersion": prowler_version,
"ProwlerResourceName": "test",
},
"GeneratorId": "prowler-iam_user_accesskey_unused",
"AwsAccountId": f"{AWS_ACCOUNT_NUMBER}",
"Types": ["Software and Configuration Checks"],
"FirstObservedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"UpdatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"CreatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"Severity": {"Label": "LOW"},
"Title": "Ensure Access Keys unused are disabled",
"Description": "test",
"Resources": [
{
"Type": "AwsIamAccessAnalyzer",
"Id": "test",
"Partition": "aws",
"Region": f"{AWS_REGION_EU_WEST_1}",
}
],
"Compliance": {
"Status": status,
"RelatedRequirements": [],
"AssociatedStandards": [],
},
"Remediation": {
"Recommendation": {
"Text": "Run sudo yum update and cross your fingers and toes.",
"Url": "https://myfp.com/recommendations/dangerous_things_and_how_to_fix_them.html",
}
},
}
# Mocking Security Hub Get Findings
make_api_call = botocore.client.BaseClient._make_api_call
@@ -64,10 +107,13 @@ class Test_SecurityHub:
return finding
def set_mocked_output_options(self, is_quiet):
def set_mocked_output_options(
self, is_quiet: bool = False, send_sh_only_fails: bool = False
):
output_options = MagicMock()
output_options.bulk_checks_metadata = {}
output_options.is_quiet = is_quiet
output_options.send_sh_only_fails = send_sh_only_fails
return output_options
@@ -98,47 +144,7 @@ class Test_SecurityHub:
output_options,
enabled_regions,
) == {
AWS_REGION_EU_WEST_1: [
{
"SchemaVersion": "2018-10-08",
"Id": f"prowler-iam_user_accesskey_unused-{AWS_ACCOUNT_NUMBER}-{AWS_REGION_EU_WEST_1}-ee26b0dd4",
"ProductArn": f"arn:aws:securityhub:{AWS_REGION_EU_WEST_1}::product/prowler/prowler",
"RecordState": "ACTIVE",
"ProductFields": {
"ProviderName": "Prowler",
"ProviderVersion": prowler_version,
"ProwlerResourceName": "test",
},
"GeneratorId": "prowler-iam_user_accesskey_unused",
"AwsAccountId": f"{AWS_ACCOUNT_NUMBER}",
"Types": ["Software and Configuration Checks"],
"FirstObservedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"UpdatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"CreatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"Severity": {"Label": "LOW"},
"Title": "Ensure Access Keys unused are disabled",
"Description": "test",
"Resources": [
{
"Type": "AwsIamAccessAnalyzer",
"Id": "test",
"Partition": "aws",
"Region": f"{AWS_REGION_EU_WEST_1}",
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [],
"AssociatedStandards": [],
},
"Remediation": {
"Recommendation": {
"Text": "Run sudo yum update and cross your fingers and toes.",
"Url": "https://myfp.com/recommendations/dangerous_things_and_how_to_fix_them.html",
}
},
}
],
AWS_REGION_EU_WEST_1: [get_security_hub_finding("PASSED")],
}
def test_prepare_security_hub_findings_quiet_MANUAL_finding(self):
@@ -171,7 +177,7 @@ class Test_SecurityHub:
enabled_regions,
) == {AWS_REGION_EU_WEST_1: []}
def test_prepare_security_hub_findings_quiet(self):
def test_prepare_security_hub_findings_quiet_PASS(self):
enabled_regions = [AWS_REGION_EU_WEST_1]
output_options = self.set_mocked_output_options(is_quiet=True)
findings = [self.generate_finding("PASS", AWS_REGION_EU_WEST_1)]
@@ -186,6 +192,51 @@ class Test_SecurityHub:
enabled_regions,
) == {AWS_REGION_EU_WEST_1: []}
def test_prepare_security_hub_findings_quiet_FAIL(self):
enabled_regions = [AWS_REGION_EU_WEST_1]
output_options = self.set_mocked_output_options(is_quiet=True)
findings = [self.generate_finding("FAIL", AWS_REGION_EU_WEST_1)]
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1, AWS_REGION_EU_WEST_2]
)
assert prepare_security_hub_findings(
findings,
audit_info,
output_options,
enabled_regions,
) == {AWS_REGION_EU_WEST_1: [get_security_hub_finding("FAILED")]}
def test_prepare_security_hub_findings_send_sh_only_fails_PASS(self):
enabled_regions = [AWS_REGION_EU_WEST_1]
output_options = self.set_mocked_output_options(send_sh_only_fails=True)
findings = [self.generate_finding("PASS", AWS_REGION_EU_WEST_1)]
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1, AWS_REGION_EU_WEST_2]
)
assert prepare_security_hub_findings(
findings,
audit_info,
output_options,
enabled_regions,
) == {AWS_REGION_EU_WEST_1: []}
def test_prepare_security_hub_findings_send_sh_only_fails_FAIL(self):
enabled_regions = [AWS_REGION_EU_WEST_1]
output_options = self.set_mocked_output_options(send_sh_only_fails=True)
findings = [self.generate_finding("FAIL", AWS_REGION_EU_WEST_1)]
audit_info = set_mocked_aws_audit_info(
audited_regions=[AWS_REGION_EU_WEST_1, AWS_REGION_EU_WEST_2]
)
assert prepare_security_hub_findings(
findings,
audit_info,
output_options,
enabled_regions,
) == {AWS_REGION_EU_WEST_1: [get_security_hub_finding("FAILED")]}
def test_prepare_security_hub_findings_no_audited_regions(self):
enabled_regions = [AWS_REGION_EU_WEST_1]
output_options = self.set_mocked_output_options(is_quiet=False)
@@ -198,47 +249,7 @@ class Test_SecurityHub:
output_options,
enabled_regions,
) == {
AWS_REGION_EU_WEST_1: [
{
"SchemaVersion": "2018-10-08",
"Id": f"prowler-iam_user_accesskey_unused-{AWS_ACCOUNT_NUMBER}-{AWS_REGION_EU_WEST_1}-ee26b0dd4",
"ProductArn": f"arn:aws:securityhub:{AWS_REGION_EU_WEST_1}::product/prowler/prowler",
"RecordState": "ACTIVE",
"ProductFields": {
"ProviderName": "Prowler",
"ProviderVersion": prowler_version,
"ProwlerResourceName": "test",
},
"GeneratorId": "prowler-iam_user_accesskey_unused",
"AwsAccountId": f"{AWS_ACCOUNT_NUMBER}",
"Types": ["Software and Configuration Checks"],
"FirstObservedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"UpdatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"CreatedAt": timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ"),
"Severity": {"Label": "LOW"},
"Title": "Ensure Access Keys unused are disabled",
"Description": "test",
"Resources": [
{
"Type": "AwsIamAccessAnalyzer",
"Id": "test",
"Partition": "aws",
"Region": f"{AWS_REGION_EU_WEST_1}",
}
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": [],
"AssociatedStandards": [],
},
"Remediation": {
"Recommendation": {
"Text": "Run sudo yum update and cross your fingers and toes.",
"Url": "https://myfp.com/recommendations/dangerous_things_and_how_to_fix_them.html",
}
},
}
],
AWS_REGION_EU_WEST_1: [get_security_hub_finding("PASSED")],
}
@patch("botocore.client.BaseClient._make_api_call", new=mock_make_api_call)

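The new `quiet_*` and `send_sh_only_fails_*` cases above all expect the same behavior: `MANUAL` findings are never forwarded, `PASS` findings are dropped when either flag is set, and `FAIL` findings always go through. A hedged sketch of the per-finding filter those assertions imply (the rule is reverse-engineered from the test expectations, not prowler's actual `prepare_security_hub_findings` implementation):

```python
from types import SimpleNamespace


def keep_finding(status: str, output_options) -> bool:
    # MANUAL findings are never forwarded to Security Hub
    # (see test_prepare_security_hub_findings_quiet_MANUAL_finding).
    if status == "MANUAL":
        return False
    # Both --quiet and --send-sh-only-fails restrict output to failures,
    # so PASS findings are dropped when either flag is set.
    only_fails = getattr(output_options, "is_quiet", False) or getattr(
        output_options, "send_sh_only_fails", False
    )
    return not (status == "PASS" and only_fails)


# Usage matching the test matrix above:
opts = SimpleNamespace(is_quiet=True, send_sh_only_fails=False)
```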
View File

@@ -1,20 +1,21 @@
from boto3 import session
from mock import patch
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.lib.service.service import AWSService
from prowler.providers.common.models import Audit_Metadata
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_ACCOUNT_ARN = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
AWS_PARTITION = "aws"
AWS_REGION = "us-east-1"
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_ARN,
AWS_ACCOUNT_NUMBER,
AWS_COMMERCIAL_PARTITION,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
def mock_generate_regional_clients(service, audit_info, _):
regional_client = audit_info.audit_session.client(service, region_name=AWS_REGION)
regional_client.region = AWS_REGION
return {AWS_REGION: regional_client}
def mock_generate_regional_clients(service, audit_info):
regional_client = audit_info.audit_session.client(
service, region_name=AWS_REGION_US_EAST_1
)
regional_client.region = AWS_REGION_US_EAST_1
return {AWS_REGION_US_EAST_1: regional_client}
@patch(
@@ -22,50 +23,40 @@ def mock_generate_regional_clients(service, audit_info, _):
new=mock_generate_regional_clients,
)
class Test_AWSService:
# Mocked Audit Info
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=AWS_ACCOUNT_ARN,
audited_user_id=None,
audited_partition=AWS_PARTITION,
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=[],
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
def test_AWSService_init(self):
audit_info = self.set_mocked_audit_info()
service = AWSService("s3", audit_info)
service_name = "s3"
audit_info = set_mocked_aws_audit_info()
service = AWSService(service_name, audit_info)
assert service.audit_info == audit_info
assert service.audited_account == AWS_ACCOUNT_NUMBER
assert service.audited_account_arn == AWS_ACCOUNT_ARN
assert service.audited_partition == AWS_PARTITION
assert service.audited_partition == AWS_COMMERCIAL_PARTITION
assert service.audit_resources == []
assert service.audited_checks == []
assert service.session == audit_info.audit_session
assert service.service == "s3"
assert service.service == service_name
assert len(service.regional_clients) == 1
assert service.regional_clients[AWS_REGION].__class__.__name__ == "S3"
assert service.region == AWS_REGION
assert service.client.__class__.__name__ == "S3"
assert (
service.regional_clients[AWS_REGION_US_EAST_1].__class__.__name__
== service_name.upper()
)
assert service.region == AWS_REGION_US_EAST_1
assert service.client.__class__.__name__ == service_name.upper()
def test_AWSService_init_global_service(self):
service_name = "cloudfront"
audit_info = set_mocked_aws_audit_info()
service = AWSService(service_name, audit_info, global_service=True)
assert service.audit_info == audit_info
assert service.audited_account == AWS_ACCOUNT_NUMBER
assert service.audited_account_arn == AWS_ACCOUNT_ARN
assert service.audited_partition == AWS_COMMERCIAL_PARTITION
assert service.audit_resources == []
assert service.audited_checks == []
assert service.session == audit_info.audit_session
assert service.service == service_name
assert not hasattr(service, "regional_clients")
assert service.region == AWS_REGION_US_EAST_1
assert service.client.__class__.__name__ == "CloudFront"

View File

@@ -1,19 +1,15 @@
from unittest.mock import patch
import botocore
from boto3 import session
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.services.accessanalyzer.accessanalyzer_service import (
AccessAnalyzer,
)
from prowler.providers.common.models import Audit_Metadata
# Mock Test Region
AWS_REGION = "eu-west-1"
AWS_ACCOUNT_NUMBER = "123456789012"
from tests.providers.aws.audit_info_utils import (
AWS_REGION_EU_WEST_1,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
# Mocking Access Analyzer Calls
make_api_call = botocore.client.BaseClient._make_api_call
@@ -58,10 +54,12 @@ def mock_make_api_call(self, operation_name, kwarg):
return make_api_call(self, operation_name, kwarg)
def mock_generate_regional_clients(service, audit_info, _):
regional_client = audit_info.audit_session.client(service, region_name=AWS_REGION)
regional_client.region = AWS_REGION
return {AWS_REGION: regional_client}
def mock_generate_regional_clients(service, audit_info):
regional_client = audit_info.audit_session.client(
service, region_name=AWS_REGION_EU_WEST_1
)
regional_client.region = AWS_REGION_EU_WEST_1
return {AWS_REGION_EU_WEST_1: regional_client}
# Patch every AWS call using Boto3 and generate_regional_clients to have 1 client
@@ -71,66 +69,46 @@ def mock_generate_regional_clients(service, audit_info, _):
new=mock_generate_regional_clients,
)
class Test_AccessAnalyzer_Service:
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=["us-east-1", "eu-west-1"],
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
# Test AccessAnalyzer Client
def test__get_client__(self):
access_analyzer = AccessAnalyzer(self.set_mocked_audit_info())
access_analyzer = AccessAnalyzer(
set_mocked_aws_audit_info([AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1])
)
assert (
access_analyzer.regional_clients[AWS_REGION].__class__.__name__
access_analyzer.regional_clients[AWS_REGION_EU_WEST_1].__class__.__name__
== "AccessAnalyzer"
)
# Test AccessAnalyzer Session
def test__get_session__(self):
access_analyzer = AccessAnalyzer(self.set_mocked_audit_info())
access_analyzer = AccessAnalyzer(
set_mocked_aws_audit_info([AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1])
)
assert access_analyzer.session.__class__.__name__ == "Session"
# Test AccessAnalyzer Service
def test__get_service__(self):
access_analyzer = AccessAnalyzer(self.set_mocked_audit_info())
access_analyzer = AccessAnalyzer(
set_mocked_aws_audit_info([AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1])
)
assert access_analyzer.service == "accessanalyzer"
def test__list_analyzers__(self):
access_analyzer = AccessAnalyzer(self.set_mocked_audit_info())
access_analyzer = AccessAnalyzer(
set_mocked_aws_audit_info([AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1])
)
assert len(access_analyzer.analyzers) == 1
assert access_analyzer.analyzers[0].arn == "ARN"
assert access_analyzer.analyzers[0].name == "Test Analyzer"
assert access_analyzer.analyzers[0].status == "ACTIVE"
assert access_analyzer.analyzers[0].tags == [{"test": "test"}]
assert access_analyzer.analyzers[0].type == "ACCOUNT"
assert access_analyzer.analyzers[0].region == AWS_REGION
assert access_analyzer.analyzers[0].region == AWS_REGION_EU_WEST_1
def test__list_findings__(self):
access_analyzer = AccessAnalyzer(self.set_mocked_audit_info())
access_analyzer = AccessAnalyzer(
set_mocked_aws_audit_info([AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1])
)
assert len(access_analyzer.analyzers) == 1
assert len(access_analyzer.analyzers[0].findings) == 1
assert access_analyzer.analyzers[0].findings[0].status == "ARCHIVED"

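Each service test file repeats the same regional-client mock, and the diff updates its signature from `(service, audit_info, _)` to `(service, audit_info)` to match the new `generate_regional_clients` API. A self-contained sketch of the pattern, with `MagicMock` standing in for the real boto3 client the tests obtain from `audit_info.audit_session`:

```python
from unittest.mock import MagicMock

AWS_REGION_EU_WEST_1 = "eu-west-1"


def mock_generate_regional_clients(service, audit_info):
    # One fake client for a single region; .region mirrors the attribute
    # the real helper attaches to each boto3 client so service code can
    # read regional_client.region.
    regional_client = MagicMock(name=f"{service}-client")
    regional_client.region = AWS_REGION_EU_WEST_1
    return {AWS_REGION_EU_WEST_1: regional_client}
```

Patched over `prowler.providers.aws.lib.service.service.generate_regional_clients` (as the `@patch` decorators above do), this pins every service under test to exactly one regional client.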
View File

@@ -1,14 +1,11 @@
import botocore
from boto3 import session
from mock import patch
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.services.account.account_service import Account, Contact
from prowler.providers.common.models import Audit_Metadata
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_ACCOUNT_ARN = f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
AWS_REGION = "us-east-1"
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_NUMBER,
set_mocked_aws_audit_info,
)
# Mocking Account Calls
make_api_call = botocore.client.BaseClient._make_api_call
@@ -56,65 +53,34 @@ def mock_make_api_call(self, operation_name, kwargs):
# Patch every AWS call using Boto3
@patch("botocore.client.BaseClient._make_api_call", new=mock_make_api_call)
class Test_Account_Service:
# Mocked Audit Info
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=AWS_ACCOUNT_ARN,
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
# Test Account Service
def test_service(self):
audit_info = self.set_mocked_audit_info()
audit_info = set_mocked_aws_audit_info()
account = Account(audit_info)
assert account.service == "account"
# Test Account Client
def test_client(self):
audit_info = self.set_mocked_audit_info()
audit_info = set_mocked_aws_audit_info()
account = Account(audit_info)
assert account.client.__class__.__name__ == "Account"
# Test Account Session
def test__get_session__(self):
audit_info = self.set_mocked_audit_info()
audit_info = set_mocked_aws_audit_info()
account = Account(audit_info)
assert account.session.__class__.__name__ == "Session"
# Test Account Audited Account
def test_audited_account(self):
audit_info = self.set_mocked_audit_info()
audit_info = set_mocked_aws_audit_info()
account = Account(audit_info)
assert account.audited_account == AWS_ACCOUNT_NUMBER
# Test Account Get Account Contacts
def test_get_account_contacts(self):
# Account client for this test class
audit_info = self.set_mocked_audit_info()
audit_info = set_mocked_aws_audit_info()
account = Account(audit_info)
assert account.number_of_contacts == 4
assert account.contact_base == Contact(

View File

@@ -2,26 +2,20 @@ import uuid
from datetime import datetime
import botocore
from boto3 import session
from freezegun import freeze_time
from mock import patch
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.services.acm.acm_service import ACM
from prowler.providers.common.models import Audit_Metadata
# from moto import mock_acm
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_REGION = "us-east-1"
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_NUMBER,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
# Mocking ACM Calls
make_api_call = botocore.client.BaseClient._make_api_call
certificate_arn = (
f"arn:aws:acm:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:certificate/{str(uuid.uuid4())}"
)
certificate_arn = f"arn:aws:acm:{AWS_REGION_US_EAST_1}:{AWS_ACCOUNT_NUMBER}:certificate/{str(uuid.uuid4())}"
certificate_name = "test-certificate.com"
certificate_type = "AMAZON_ISSUED"
@@ -80,10 +74,12 @@ def mock_make_api_call(self, operation_name, kwargs):
# Mock generate_regional_clients()
def mock_generate_regional_clients(service, audit_info, _):
regional_client = audit_info.audit_session.client(service, region_name=AWS_REGION)
regional_client.region = AWS_REGION
return {AWS_REGION: regional_client}
def mock_generate_regional_clients(service, audit_info):
regional_client = audit_info.audit_session.client(
service, region_name=AWS_REGION_US_EAST_1
)
regional_client.region = AWS_REGION_US_EAST_1
return {AWS_REGION_US_EAST_1: regional_client}
# Patch every AWS call using Boto3 and generate_regional_clients to have 1 client
@@ -96,42 +92,11 @@ def mock_generate_regional_clients(service, audit_info, _):
@freeze_time("2023-01-01")
# FIXME: Pending Moto PR to update ACM responses
class Test_ACM_Service:
# Mocked Audit Info
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=None,
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
# Test ACM Service
# @mock_acm
def test_service(self):
# ACM client for this test class
audit_info = self.set_mocked_audit_info()
audit_info = set_mocked_aws_audit_info()
acm = ACM(audit_info)
assert acm.service == "acm"
@@ -139,7 +104,7 @@ class Test_ACM_Service:
# @mock_acm
def test_client(self):
# ACM client for this test class
audit_info = self.set_mocked_audit_info()
audit_info = set_mocked_aws_audit_info()
acm = ACM(audit_info)
for regional_client in acm.regional_clients.values():
assert regional_client.__class__.__name__ == "ACM"
@@ -148,7 +113,7 @@ class Test_ACM_Service:
# @mock_acm
def test__get_session__(self):
# ACM client for this test class
audit_info = self.set_mocked_audit_info()
audit_info = set_mocked_aws_audit_info()
acm = ACM(audit_info)
assert acm.session.__class__.__name__ == "Session"
@@ -156,7 +121,7 @@ class Test_ACM_Service:
# @mock_acm
def test_audited_account(self):
# ACM client for this test class
audit_info = self.set_mocked_audit_info()
audit_info = set_mocked_aws_audit_info()
acm = ACM(audit_info)
assert acm.audited_account == AWS_ACCOUNT_NUMBER
@@ -171,7 +136,7 @@ class Test_ACM_Service:
# )
# ACM client for this test class
audit_info = self.set_mocked_audit_info()
audit_info = set_mocked_aws_audit_info()
acm = ACM(audit_info)
assert len(acm.certificates) == 1
assert acm.certificates[0].arn == certificate_arn
@@ -179,7 +144,7 @@ class Test_ACM_Service:
assert acm.certificates[0].type == certificate_type
assert acm.certificates[0].expiration_days == 365
assert acm.certificates[0].transparency_logging is False
assert acm.certificates[0].region == AWS_REGION
assert acm.certificates[0].region == AWS_REGION_US_EAST_1
# Test ACM List Tags
# @mock_acm
@@ -192,7 +157,7 @@ class Test_ACM_Service:
# )
# ACM client for this test class
audit_info = self.set_mocked_audit_info()
audit_info = set_mocked_aws_audit_info()
acm = ACM(audit_info)
assert len(acm.certificates) == 1
assert acm.certificates[0].tags == [

View File

@@ -1,55 +1,26 @@
from unittest import mock
from boto3 import client, session
from boto3 import client
from moto import mock_apigateway, mock_iam, mock_lambda
from moto.core import DEFAULT_ACCOUNT_ID as ACCOUNT_ID
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
AWS_REGION = "us-east-1"
AWS_ACCOUNT_NUMBER = "123456789012"
from tests.providers.aws.audit_info_utils import (
AWS_ACCOUNT_NUMBER,
AWS_REGION_EU_WEST_1,
AWS_REGION_US_EAST_1,
set_mocked_aws_audit_info,
)
class Test_apigateway_restapi_authorizers_enabled:
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=["us-east-1", "eu-west-1"],
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
@mock_apigateway
def test_apigateway_no_rest_apis(self):
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = self.set_mocked_audit_info()
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -73,8 +44,8 @@ class Test_apigateway_restapi_authorizers_enabled:
@mock_lambda
def test_apigateway_one_rest_api_with_lambda_authorizer(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION)
lambda_client = client("lambda", region_name=AWS_REGION)
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
lambda_client = client("lambda", region_name=AWS_REGION_US_EAST_1)
iam_client = client("iam")
# Create APIGateway Rest API
role_arn = iam_client.create_role(
@@ -97,13 +68,15 @@ class Test_apigateway_restapi_authorizers_enabled:
name="test",
restApiId=rest_api["id"],
type="TOKEN",
authorizerUri=f"arn:aws:apigateway:{apigateway_client.meta.region_name}:lambda:path/2015-03-31/functions/arn:aws:lambda:{apigateway_client.meta.region_name}:{ACCOUNT_ID}:function:{authorizer['FunctionName']}/invocations",
authorizerUri=f"arn:aws:apigateway:{apigateway_client.meta.region_name}:lambda:path/2015-03-31/functions/arn:aws:lambda:{apigateway_client.meta.region_name}:{AWS_ACCOUNT_NUMBER}:function:{authorizer['FunctionName']}/invocations",
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = self.set_mocked_audit_info()
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -124,20 +97,20 @@ class Test_apigateway_restapi_authorizers_enabled:
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} has an authorizer configured."
== f"API Gateway test-rest-api ID {rest_api['id']} has an authorizer configured at api level"
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}"
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
def test_apigateway_one_rest_api_without_lambda_authorizer(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION)
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
# Create APIGateway Rest API
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
@@ -146,7 +119,9 @@ class Test_apigateway_restapi_authorizers_enabled:
APIGateway,
)
current_audit_info = self.set_mocked_audit_info()
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -167,12 +142,342 @@ class Test_apigateway_restapi_authorizers_enabled:
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} does not have an authorizer configured."
== f"API Gateway test-rest-api ID {rest_api['id']} does not have an authorizer configured at api level."
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}"
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
@mock_iam
@mock_lambda
def test_apigateway_one_rest_api_without_api_or_methods_authorizer(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
)
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
"items"
][0]["id"]
api_resource = apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="GET",
authorizationType="NONE",
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
new=APIGateway(current_audit_info),
):
# Test Check
from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
apigateway_restapi_authorizers_enabled,
)
check = apigateway_restapi_authorizers_enabled()
result = check.execute()
assert result[0].status == "FAIL"
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} does not have authorizers at api level and the following paths and methods are unauthorized: /test -> GET."
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
@mock_iam
@mock_lambda
def test_apigateway_one_rest_api_without_api_auth_but_one_method_auth(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
)
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
"items"
][0]["id"]
api_resource = apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="GET",
authorizationType="AWS_IAM",
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
new=APIGateway(current_audit_info),
):
# Test Check
from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
apigateway_restapi_authorizers_enabled,
)
check = apigateway_restapi_authorizers_enabled()
result = check.execute()
assert result[0].status == "PASS"
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} has all methods authorized"
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
@mock_iam
@mock_lambda
def test_apigateway_one_rest_api_without_api_auth_but_methods_auth_and_not(self):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
)
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
"items"
][0]["id"]
api_resource = apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="POST",
authorizationType="AWS_IAM",
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="GET",
authorizationType="NONE",
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
new=APIGateway(current_audit_info),
):
# Test Check
from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
apigateway_restapi_authorizers_enabled,
)
check = apigateway_restapi_authorizers_enabled()
result = check.execute()
assert result[0].status == "FAIL"
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} does not have authorizers at api level and the following paths and methods are unauthorized: /test -> GET."
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
@mock_iam
@mock_lambda
def test_apigateway_one_rest_api_without_api_auth_but_methods_not_auth_and_auth(
self,
):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
)
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
"items"
][0]["id"]
api_resource = apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="GET",
authorizationType="NONE",
)
apigateway_client.put_method(
restApiId=rest_api["id"],
resourceId=api_resource["id"],
httpMethod="POST",
authorizationType="AWS_IAM",
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
new=APIGateway(current_audit_info),
):
# Test Check
from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
apigateway_restapi_authorizers_enabled,
)
check = apigateway_restapi_authorizers_enabled()
result = check.execute()
assert result[0].status == "FAIL"
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} does not have authorizers at api level and the following paths and methods are unauthorized: /test -> GET."
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
@mock_iam
@mock_lambda
def test_apigateway_one_rest_api_without_authorizers_with_various_resources_without_endpoints(
self,
):
# Create APIGateway Mocked Resources
apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
)
default_resource_id = apigateway_client.get_resources(restApiId=rest_api["id"])[
"items"
][0]["id"]
apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test"
)
apigateway_client.create_resource(
restApiId=rest_api["id"], parentId=default_resource_id, pathPart="test2"
)
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
current_audit_info = set_mocked_aws_audit_info(
[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=current_audit_info,
), mock.patch(
"prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled.apigateway_client",
new=APIGateway(current_audit_info),
):
# Test Check
from prowler.providers.aws.services.apigateway.apigateway_restapi_authorizers_enabled.apigateway_restapi_authorizers_enabled import (
apigateway_restapi_authorizers_enabled,
)
check = apigateway_restapi_authorizers_enabled()
result = check.execute()
assert result[0].status == "FAIL"
assert len(result) == 1
assert (
result[0].status_extended
== f"API Gateway test-rest-api ID {rest_api['id']} does not have an authorizer configured at api level."
)
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
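The PASS/FAIL expectations asserted in the tests above reduce to a single rule: a REST API passes if it has an authorizer configured at api level, or if every path/method pair uses a non-`NONE` `authorizationType`. The sketch below illustrates that rule only; `evaluate_rest_api` is a hypothetical helper written for illustration, not Prowler's actual check implementation, and the message fragments are modeled on the `status_extended` strings asserted above.

```python
# Hypothetical helper sketching the decision rule the tests exercise.
# methods: dict mapping resource path -> {http_method: authorizationType}.
def evaluate_rest_api(has_api_authorizer, methods):
    if has_api_authorizer:
        # An api-level authorizer covers every method.
        return "PASS", "has an authorizer configured at api level"
    if not methods:
        # No api-level authorizer and no methods to inspect.
        return "FAIL", "does not have an authorizer configured at api level."
    unauthorized = [
        f"{path} -> {verb}"
        for path, verbs in methods.items()
        for verb, auth in verbs.items()
        if auth == "NONE"
    ]
    if unauthorized:
        return (
            "FAIL",
            "does not have authorizers at api level and the following paths "
            "and methods are unauthorized: " + ", ".join(unauthorized) + ".",
        )
    return "PASS", "has all methods authorized"
```

Under this rule, a resource with `POST -> AWS_IAM` and `GET -> NONE` fails with only `/test -> GET` reported, matching the mixed-method tests above.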


@@ -1,52 +1,21 @@
from unittest import mock
-from boto3 import client, session
+from boto3 import client
from moto import mock_apigateway
-from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.aws.services.apigateway.apigateway_service import Stage
-from prowler.providers.common.models import Audit_Metadata
-AWS_REGION = "us-east-1"
-AWS_ACCOUNT_NUMBER = "123456789012"
+from tests.providers.aws.audit_info_utils import (
+AWS_REGION_EU_WEST_1,
+AWS_REGION_US_EAST_1,
+set_mocked_aws_audit_info,
+)
class Test_apigateway_restapi_client_certificate_enabled:
-def set_mocked_audit_info(self):
-audit_info = AWS_Audit_Info(
-session_config=None,
-original_session=None,
-audit_session=session.Session(
-profile_name=None,
-botocore_session=None,
-),
-audited_account=AWS_ACCOUNT_NUMBER,
-audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
-audited_user_id=None,
-audited_partition="aws",
-audited_identity_arn=None,
-profile=None,
-profile_region=None,
-credentials=None,
-assumed_role_info=None,
-audited_regions=["us-east-1", "eu-west-1"],
-organizations_metadata=None,
-audit_resources=None,
-mfa_enabled=False,
-audit_metadata=Audit_Metadata(
-services_scanned=0,
-expected_checks=[],
-completed_checks=0,
-audit_progress=0,
-),
-)
-return audit_info
@mock_apigateway
def test_apigateway_no_stages(self):
# Create APIGateway Mocked Resources
-apigateway_client = client("apigateway", region_name=AWS_REGION)
+apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
# Create APIGateway Rest API
apigateway_client.create_rest_api(
name="test-rest-api",
@@ -55,7 +24,9 @@ class Test_apigateway_restapi_client_certificate_enabled:
APIGateway,
)
-current_audit_info = self.set_mocked_audit_info()
+current_audit_info = set_mocked_aws_audit_info(
+[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
+)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -77,7 +48,7 @@ class Test_apigateway_restapi_client_certificate_enabled:
@mock_apigateway
def test_apigateway_one_stage_without_certificate(self):
# Create APIGateway Mocked Resources
-apigateway_client = client("apigateway", region_name=AWS_REGION)
+apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
# Create APIGateway Deployment Stage
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
@@ -113,7 +84,9 @@ class Test_apigateway_restapi_client_certificate_enabled:
APIGateway,
)
-current_audit_info = self.set_mocked_audit_info()
+current_audit_info = set_mocked_aws_audit_info(
+[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
+)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -139,15 +112,15 @@ class Test_apigateway_restapi_client_certificate_enabled:
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
-== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}/stages/test"
+== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}/stages/test"
)
-assert result[0].region == AWS_REGION
+assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [None]
@mock_apigateway
def test_apigateway_one_stage_with_certificate(self):
# Create APIGateway Mocked Resources
-apigateway_client = client("apigateway", region_name=AWS_REGION)
+apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
# Create APIGateway Deployment Stage
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
@@ -156,7 +129,9 @@ class Test_apigateway_restapi_client_certificate_enabled:
APIGateway,
)
-current_audit_info = self.set_mocked_audit_info()
+current_audit_info = set_mocked_aws_audit_info(
+[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
+)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -173,7 +148,7 @@ class Test_apigateway_restapi_client_certificate_enabled:
service_client.rest_apis[0].stages.append(
Stage(
name="test",
-arn=f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/test-rest-api/stages/test",
+arn=f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/test-rest-api/stages/test",
logging=True,
client_certificate=True,
waf=True,
@@ -192,7 +167,7 @@ class Test_apigateway_restapi_client_certificate_enabled:
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
-== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/test-rest-api/stages/test"
+== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/test-rest-api/stages/test"
)
-assert result[0].region == AWS_REGION
+assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == []
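Every ARN asserted in these tests follows the same layout: API Gateway REST API ARNs carry no account ID, which is why an empty field sits between the region and the resource path. A minimal sketch of that layout, using a hypothetical helper name (`apigateway_stage_arn` is illustrative, not a function from the codebase):

```python
# Hypothetical helper showing the stage ARN shape asserted by the tests.
# Note the empty account-ID field (the "::" before /restapis).
def apigateway_stage_arn(partition, region, rest_api_id, stage_name):
    return (
        f"arn:{partition}:apigateway:{region}"
        f"::/restapis/{rest_api_id}/stages/{stage_name}"
    )
```

Dropping the `/stages/{stage_name}` suffix yields the REST-API-level ARN used by the authorizer and public-API tests.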


@@ -1,54 +1,25 @@
from unittest import mock
-from boto3 import client, session
+from boto3 import client
from moto import mock_apigateway
-from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
-from prowler.providers.common.models import Audit_Metadata
-AWS_REGION = "us-east-1"
-AWS_ACCOUNT_NUMBER = "123456789012"
+from tests.providers.aws.audit_info_utils import (
+AWS_REGION_EU_WEST_1,
+AWS_REGION_US_EAST_1,
+set_mocked_aws_audit_info,
+)
class Test_apigateway_restapi_public:
-def set_mocked_audit_info(self):
-audit_info = AWS_Audit_Info(
-session_config=None,
-original_session=None,
-audit_session=session.Session(
-profile_name=None,
-botocore_session=None,
-),
-audited_account=AWS_ACCOUNT_NUMBER,
-audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
-audited_user_id=None,
-audited_partition="aws",
-audited_identity_arn=None,
-profile=None,
-profile_region=None,
-credentials=None,
-assumed_role_info=None,
-audited_regions=["us-east-1", "eu-west-1"],
-organizations_metadata=None,
-audit_resources=None,
-mfa_enabled=False,
-audit_metadata=Audit_Metadata(
-services_scanned=0,
-expected_checks=[],
-completed_checks=0,
-audit_progress=0,
-),
-)
-return audit_info
@mock_apigateway
def test_apigateway_no_rest_apis(self):
from prowler.providers.aws.services.apigateway.apigateway_service import (
APIGateway,
)
-current_audit_info = self.set_mocked_audit_info()
+current_audit_info = set_mocked_aws_audit_info(
+[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
+)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -70,7 +41,7 @@ class Test_apigateway_restapi_public:
@mock_apigateway
def test_apigateway_one_private_rest_api(self):
# Create APIGateway Mocked Resources
-apigateway_client = client("apigateway", region_name=AWS_REGION)
+apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
# Create APIGateway Deployment Stage
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
@@ -84,7 +55,9 @@ class Test_apigateway_restapi_public:
APIGateway,
)
-current_audit_info = self.set_mocked_audit_info()
+current_audit_info = set_mocked_aws_audit_info(
+[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
+)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -110,15 +83,15 @@ class Test_apigateway_restapi_public:
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
-== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}"
+== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
-assert result[0].region == AWS_REGION
+assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
@mock_apigateway
def test_apigateway_one_public_rest_api(self):
# Create APIGateway Mocked Resources
-apigateway_client = client("apigateway", region_name=AWS_REGION)
+apigateway_client = client("apigateway", region_name=AWS_REGION_US_EAST_1)
# Create APIGateway Deployment Stage
rest_api = apigateway_client.create_rest_api(
name="test-rest-api",
@@ -132,7 +105,9 @@ class Test_apigateway_restapi_public:
APIGateway,
)
-current_audit_info = self.set_mocked_audit_info()
+current_audit_info = set_mocked_aws_audit_info(
+[AWS_REGION_EU_WEST_1, AWS_REGION_US_EAST_1]
+)
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
@@ -158,7 +133,7 @@ class Test_apigateway_restapi_public:
assert result[0].resource_id == "test-rest-api"
assert (
result[0].resource_arn
-== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION}::/restapis/{rest_api['id']}"
+== f"arn:{current_audit_info.audited_partition}:apigateway:{AWS_REGION_US_EAST_1}::/restapis/{rest_api['id']}"
)
-assert result[0].region == AWS_REGION
+assert result[0].region == AWS_REGION_US_EAST_1
assert result[0].resource_tags == [{}]
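The private/public tests here distinguish REST APIs by their endpoint configuration (the configuration details themselves are elided by the hunks above). A hedged sketch of that classification, assuming the usual API Gateway endpoint types, where only `PRIVATE` endpoints are reachable solely through a VPC endpoint; `is_public_rest_api` is a hypothetical helper, not the check's actual code:

```python
# Hypothetical classification sketch: EDGE and REGIONAL endpoint types
# are internet-facing, while PRIVATE endpoints are only reachable from
# a VPC, so an API with only PRIVATE types is treated as non-public.
def is_public_rest_api(endpoint_types):
    return any(t in ("EDGE", "REGIONAL") for t in endpoint_types)
```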

Some files were not shown because too many files have changed in this diff.