Compare commits


406 Commits
3.1.4 ... 3.5.2

Author SHA1 Message Date
github-actions
fad5a1937c chore(release): 3.5.2 2023-05-18 14:47:37 +00:00
Sergio Garcia
635c257502 fix(ssm incidents): check if service available in aws partition (#2372) 2023-05-18 16:44:52 +02:00
Pepe Fagoaga
58a38c08d7 docs: format regions-and-partitions (#2371) 2023-05-18 16:35:54 +02:00
Pepe Fagoaga
8fbee7737b fix(resource_not_found): Handle error (#2370) 2023-05-18 16:26:08 +02:00
Pepe Fagoaga
e84f5f184e fix(sts): Use the right region to validate credentials (#2349)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-18 15:51:57 +02:00
Sergio Garcia
0bd26b19d7 chore(regions_update): Changes in regions for AWS services. (#2368)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-18 11:17:28 +02:00
Sergio Garcia
64f82d5d51 chore(regions_update): Changes in regions for AWS services. (#2366)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-17 11:52:16 +02:00
Sergio Garcia
f63ff994ce fix(action): solve pypi-release action creating the release branch (#2364) 2023-05-16 13:32:46 +02:00
Sergio Garcia
a10ee43271 release: 3.5.1 (#2363) 2023-05-16 11:42:08 +02:00
Sergio Garcia
54ed29e08d fix(route53): handle empty Records in Zones (#2351) 2023-05-16 10:51:43 +02:00
dependabot[bot]
cc097e7a3f build(deps-dev): bump docker from 6.1.1 to 6.1.2 (#2360)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 09:39:24 +02:00
dependabot[bot]
5de92ada43 build(deps): bump mkdocs-material from 9.1.8 to 9.1.12 (#2359)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 09:24:39 +02:00
dependabot[bot]
0c546211cf build(deps-dev): bump pytest-xdist from 3.2.1 to 3.3.0 (#2358)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 08:09:55 +02:00
dependabot[bot]
4dc5a3a67c build(deps): bump botocore from 1.29.125 to 1.29.134 (#2357)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 07:51:19 +02:00
dependabot[bot]
c51b226ceb build(deps): bump shodan from 1.28.0 to 1.29.0 (#2356)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 07:34:51 +02:00
dependabot[bot]
0a5ca6cf74 build(deps): bump pymdown-extensions from 9.11 to 10.0 (#2355)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 07:33:56 +02:00
Sergio Garcia
96957219e4 chore(regions_update): Changes in regions for AWS services. (#2353)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-16 07:32:41 +02:00
Sergio Garcia
32b7620db3 chore(regions_update): Changes in regions for AWS services. (#2350)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-12 11:37:53 +02:00
Sergio Garcia
347f65e089 chore(release): 3.5.0 (#2346) 2023-05-11 17:42:46 +02:00
Sergio Garcia
16628a427e fix(README): update Architecture image and PyPi links (#2345) 2023-05-11 17:29:17 +02:00
Sergio Garcia
ed16034a25 fix(README): order providers alphabetically (#2344) 2023-05-11 16:30:04 +02:00
Pepe Fagoaga
0c5f144e41 fix(poetry): Skip updates during pre-commit (#2342) 2023-05-11 12:17:21 +02:00
Sergio Garcia
acc7d6e7dc chore(regions_update): Changes in regions for AWS services. (#2341)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-11 11:41:39 +02:00
Sergio Garcia
84b4139052 chore(iam): add new permissions (#2339) 2023-05-11 11:35:32 +02:00
Sergio Garcia
9943643958 fix(s3): improve error handling (#2337) 2023-05-10 16:43:06 +02:00
Pepe Fagoaga
9ceaefb663 fix(access-analyzer): Handle ResourceNotFoundException (#2336) 2023-05-10 15:44:14 +02:00
Gabriel Soltz
ec03ea5bc1 feat(workspaces): New check workspaces_vpc_2private_1public_subnets_nat (#2286)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: n4ch04 <nachor1992@gmail.com>
2023-05-10 15:40:42 +02:00
Sergio Garcia
5855633c1f fix(resourceexplorer2): add resource id (#2335)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-05-10 14:48:34 +02:00
Pedro Martín
a53bc2bc2e feat(rds): new check rds_instance_deprecated_engine_version (#2298)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-05-10 14:48:12 +02:00
Sergio Garcia
88445820ed feat(slack): add Slack App integration (#2305)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-05-10 13:38:28 +02:00
Sergio Garcia
044ed3ae98 chore(regions_update): Changes in regions for AWS services. (#2334)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-10 13:30:24 +02:00
Pepe Fagoaga
6f48012234 fix(ecr): Refactor service (#2302)
Co-authored-by: Gabriel Soltz <thegaby@gmail.com>
Co-authored-by: Kay Agahd <kagahd@users.noreply.github.com>
Co-authored-by: Nacho Rivera <nachor1992@gmail.com>
Co-authored-by: Kevin Pullin <kevin.pullin@gmail.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-09 17:04:21 +02:00
Sergio Garcia
d344318dd4 feat(allowlist): allowlist a specific service (#2331) 2023-05-09 15:43:04 +02:00
Sergio Garcia
6273dd3d83 chore(regions_update): Changes in regions for AWS services. (#2330)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-09 12:21:07 +02:00
dependabot[bot]
0f3f3cbffd build(deps-dev): bump moto from 4.1.8 to 4.1.9 (#2328)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-09 11:38:41 +02:00
Pepe Fagoaga
3244123b21 fix(cloudfront_distributions_https_enabled): Add default case (#2329)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-09 11:09:18 +02:00
dependabot[bot]
cba2ee3622 build(deps): bump boto3 from 1.26.115 to 1.26.125 (#2327)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-09 08:48:15 +02:00
dependabot[bot]
25ed925df5 build(deps-dev): bump docker from 6.0.1 to 6.1.1 (#2326)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-09 08:22:03 +02:00
dependabot[bot]
8c5bd60bab build(deps-dev): bump pylint from 2.17.3 to 2.17.4 (#2325)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-09 07:59:21 +02:00
dependabot[bot]
c5510556a7 build(deps): bump mkdocs from 1.4.2 to 1.4.3 (#2324)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-09 07:38:43 +02:00
Sergio Garcia
bbcfca84ef fix(trustedadvisor): avoid not_available checks (#2323) 2023-05-08 17:55:31 +02:00
Sergio Garcia
1260e94c2a fix(cloudtrail): handle InsightNotEnabledException error (#2322) 2023-05-08 16:06:13 +02:00
Pepe Fagoaga
8a02574303 fix(sagemaker): Handle ValidationException (#2321) 2023-05-08 14:52:28 +02:00
Pepe Fagoaga
c930f08348 fix(emr): Handle InvalidRequestException (#2320) 2023-05-08 14:52:12 +02:00
Pepe Fagoaga
5204acb5d0 fix(iam): Handle ListRoleTags and policy errors (#2319) 2023-05-08 14:42:23 +02:00
Sergio Garcia
784aaa98c9 feat(iam): add iam_role_cross_account_readonlyaccess_policy check (#2312) 2023-05-08 13:27:51 +02:00
Sergio Garcia
745e2494bc chore(docs): improve GCP docs (#2318) 2023-05-08 13:26:23 +02:00
Sergio Garcia
c00792519d chore(docs): improve GCP docs (#2318) 2023-05-08 13:26:02 +02:00
Sergio Garcia
142fe5a12c chore(regions_update): Changes in regions for AWS services. (#2315)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-08 12:40:31 +02:00
Sergio Garcia
5b127f232e fix(typo): typo in backup_vaults_exist check title (#2317) 2023-05-08 12:29:08 +02:00
Kevin Pullin
c22bf01003 feat(allowlist): Support regexes in Tags to allow "or"-like conditional matching (#2300)
Co-authored-by: Kevin Pullin <kevinp@nexttrucking.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-05 14:56:27 +02:00
Nacho Rivera
05e4911d6f fix(vpc services): list to dicts in vpc and subnets (#2310) 2023-05-04 15:35:02 +02:00
Nacho Rivera
9b551ef0ba feat(pre-commit): added trufflehog to pre-commit (#2311) 2023-05-04 15:33:11 +02:00
Sergio Garcia
56a8bb2349 chore(regions_update): Changes in regions for AWS services. (#2309)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-04 12:30:10 +02:00
Pepe Fagoaga
8503c6a64d fix(client_error): Handle errors (#2308) 2023-05-04 11:06:24 +02:00
Pepe Fagoaga
820f18da4d release: 3.4.1 (#2303) 2023-05-03 19:24:17 +02:00
Kay Agahd
51a2432ebf fix(typo): remove redundant lines (#2307) 2023-05-03 19:23:48 +02:00
Gabriel Soltz
6639534e97 feat(ssmincidents): Use regional_client region instead of audit_profile region (#2306) 2023-05-03 19:22:30 +02:00
Gabriel Soltz
0621577c7d fix(backup): Return [] when None AdvancedBackupSettings (#2304) 2023-05-03 17:10:53 +02:00
Sergio Garcia
26a507e3db feat(route53): add route53_dangling_ip_subdomain_takeover check (#2288)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-05-03 11:47:36 +02:00
Sergio Garcia
244b540fe0 fix(s3): handle NoSuchBucket error (#2289) 2023-05-03 09:55:19 +02:00
Gabriel Soltz
030ca4c173 fix(backups): change severity and only check report_plans if plans exist (#2291)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-05-03 09:00:15 +02:00
dependabot[bot]
88a2810f29 build(deps): bump botocore from 1.29.115 to 1.29.125 (#2301)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-03 08:55:14 +02:00
dependabot[bot]
9164ee363a build(deps-dev): bump coverage from 7.2.3 to 7.2.5 (#2297)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-03 08:38:03 +02:00
dependabot[bot]
4cd47fdcc5 build(deps): bump google-api-python-client from 2.84.0 to 2.86.0 (#2296)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-03 08:11:36 +02:00
dependabot[bot]
708852a3cb build(deps): bump mkdocs-material from 9.1.6 to 9.1.8 (#2294)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-03 07:49:52 +02:00
Sergio Garcia
4a93bdf3ea chore(regions_update): Changes in regions for AWS services. (#2293)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-03 07:49:27 +02:00
Gabriel Soltz
22e7d2a811 feat(Organizations): New check organizations_tags_policies_enabled_and_attached (#2287)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-28 16:14:08 +02:00
Sergio Garcia
93eca1dff2 chore(regions_update): Changes in regions for AWS services. (#2290)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-28 13:19:46 +02:00
Gabriel Soltz
9afe7408cd feat(FMS): New Service FMS and Check fms_accounts_compliant (#2259)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Nacho Rivera <nacho@verica.io>
2023-04-28 11:47:55 +02:00
Sergio Garcia
5dc2347a25 docs(security hub): improve security hub docs (#2285)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-27 16:22:49 +02:00
Pepe Fagoaga
e3a0124b10 fix(opensearch): Handle invalid JSON policy (#2262) 2023-04-27 12:05:43 +02:00
Gabriel Soltz
16af89c281 feat(autoscaling): new check autoscaling_group_multiple_az (#2273) 2023-04-26 15:10:04 +02:00
Sergio Garcia
621e4258c8 feat(s3): add s3_bucket_object_lock check (#2274) 2023-04-26 15:04:45 +02:00
Sergio Garcia
ac6272e739 fix(rds): check configurations for DB instances at cluster level (#2277)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-26 13:51:07 +02:00
Sergio Garcia
6e84f517a9 fix(apigateway2): correct paginator name (#2283) 2023-04-26 13:43:15 +02:00
Pepe Fagoaga
fdbdb3ad86 fix(sns_topics_not_publicly_accessible): Change PASS behaviour (#2282) 2023-04-26 12:51:51 +02:00
Sergio Garcia
7adcf5ca46 chore(regions_update): Changes in regions for AWS services. (#2280)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-26 11:59:34 +02:00
Gabriel Soltz
fe6716cf76 feat(NetworkFirewall): New Service and Check (#2261)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-04-26 11:58:11 +02:00
dependabot[bot]
3c2096db68 build(deps): bump azure-mgmt-security from 4.0.0 to 5.0.0 (#2270)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 11:59:30 +02:00
Pepe Fagoaga
58cad1a6b3 fix(log_group_retention): handle log groups that never expire (#2272) 2023-04-25 10:45:43 +02:00
dependabot[bot]
662e67ff16 build(deps): bump boto3 from 1.26.105 to 1.26.115 (#2269)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 10:35:15 +02:00
dependabot[bot]
8d577b872f build(deps-dev): bump moto from 4.1.7 to 4.1.8 (#2268)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 10:12:25 +02:00
dependabot[bot]
b55290f3cb build(deps-dev): bump pylint from 2.17.2 to 2.17.3 (#2267)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 09:20:15 +02:00
dependabot[bot]
e8d3eb7393 build(deps-dev): bump pytest from 7.3.0 to 7.3.1 (#2266)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 08:03:45 +02:00
Sergio Garcia
47fa16e35f chore(test): add CloudWatch and Logs tests (#2264) 2023-04-24 17:05:05 +02:00
Gabriel Soltz
a87f769b85 feat(DRS): New DRS Service and Checks (#2257)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-24 14:22:22 +02:00
Sergio Garcia
8e63fa4594 fix(version): execute check current version function only when -v (#2263) 2023-04-24 12:45:59 +02:00
Gabriel Soltz
63501a0d59 feat(inspector2): New Service and Check (#2250)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-04-24 12:15:16 +02:00
Sergio Garcia
828fb37ca8 chore(regions_update): Changes in regions for AWS services. (#2258)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-24 08:32:40 +02:00
Sergio Garcia
40f513d3b6 chore(regions_update): Changes in regions for AWS services. (#2251)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-21 12:10:15 +02:00
Sergio Garcia
f0b8b66a75 chore(test): add rds_instance_transport_encrypted test (#2252) 2023-04-21 12:09:47 +02:00
Sergio Garcia
d51cdc068b fix(iam_role_cross_service_confused_deputy_prevention): avoid service linked roles (#2249) 2023-04-21 10:42:05 +02:00
Sergio Garcia
f8b382e480 fix(version): update version to 3.4.0 (#2247) 2023-04-20 17:05:18 +02:00
Ronen Atias
1995f43b67 fix(redshift): correct description in redshift_cluster_automatic_upgrades (#2246) 2023-04-20 15:19:49 +02:00
Sergio Garcia
69e0392a8b fix(rds): exclude Aurora in rds_instance_transport_encrypted check (#2245) 2023-04-20 14:28:12 +02:00
Sergio Garcia
1f6319442e chore(docs): improve GCP docs (#2242) 2023-04-20 14:15:28 +02:00
Sergio Garcia
559c4c0c2c chore(regions_update): Changes in regions for AWS services. (#2243)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-20 11:43:02 +02:00
Sergio Garcia
feeb5b58d9 fix(checks): improve --list-checks function (#2240) 2023-04-19 17:00:20 +02:00
Sergio Garcia
7a00f79a56 fix(iam_policy_no_administrative_privileges): check attached policies and AWS-Managed (#2200)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-19 14:34:53 +02:00
Sergio Garcia
10d744704a fix(errors): solve ECR and CodeArtifact errors (#2239) 2023-04-19 13:27:19 +02:00
Gabriel Soltz
eee35f9cc3 feat(ssmincidents): New Service and Checks (#2219)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-19 12:26:20 +02:00
Gabriel Soltz
b3656761eb feat(check): New VPC checks (#2218) 2023-04-19 12:01:12 +02:00
Sergio Garcia
7b5fe34316 feat(html): add html to Azure and GCP (#2181)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-18 16:13:57 +02:00
Sergio Garcia
4536780a19 feat(check): new check ecr_registry_scan_images_on_push_enabled (#2237) 2023-04-18 15:45:21 +02:00
Sergio Garcia
05d866e6b3 chore(regions_update): Changes in regions for AWS services. (#2236)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-18 13:43:15 +02:00
dependabot[bot]
0d138cf473 build(deps): bump botocore from 1.29.105 to 1.29.115 (#2233)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-18 13:42:50 +02:00
dependabot[bot]
dbe539ac80 build(deps): bump boto3 from 1.26.90 to 1.26.105 (#2232)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-18 12:35:33 +02:00
dependabot[bot]
665a39d179 build(deps): bump azure-storage-blob from 12.15.0 to 12.16.0 (#2230)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-18 11:02:39 +02:00
dependabot[bot]
5fd5d8c8c5 build(deps-dev): bump coverage from 7.2.2 to 7.2.3 (#2234)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-18 08:03:44 +02:00
dependabot[bot]
2832b4564c build(deps-dev): bump moto from 4.1.6 to 4.1.7 (#2231)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-18 07:40:50 +02:00
dependabot[bot]
d4369a64ee build(deps): bump azure-mgmt-security from 3.0.0 to 4.0.0 (#2141)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-17 13:22:09 +02:00
Sergio Garcia
81fa1630b7 chore(regions_update): Changes in regions for AWS services. (#2227)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-17 11:18:41 +02:00
Sergio Garcia
a1c4b35205 chore(regions_update): Changes in regions for AWS services. (#2217)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-17 11:16:22 +02:00
Sergio Garcia
5e567f3e37 fix(iam tests): mock audit_info object (#2226)
Co-authored-by: n4ch04 <nachor1992@gmail.com>
2023-04-17 11:14:48 +02:00
Pepe Fagoaga
c4757684c1 fix(test): Mock audit info in SecurityHub CodeBuild (#2225) 2023-04-17 11:14:36 +02:00
Sergio Garcia
a55a6bf94b fix(test): Mock audit info in EC2 (#2224) 2023-04-17 10:54:56 +02:00
Pepe Fagoaga
fa1792eb77 fix(test): Mock audit info in CloudWatch (#2223) 2023-04-17 10:54:01 +02:00
Nacho Rivera
93a8f6e759 fix(rds tests): mocked audit_info object (#2222) 2023-04-17 10:06:25 +02:00
Nacho Rivera
4a614855d4 fix(s3 tests): audit_info object mocked (#2221) 2023-04-17 10:04:28 +02:00
Pepe Fagoaga
8bdd47f912 fix(test): Mock audit info in KMS (#2215) 2023-04-14 14:34:55 +02:00
Nacho Rivera
f9e82abadc fix(vpc tests): mock current_audit_info (#2214) 2023-04-14 14:31:34 +02:00
Gabriel Soltz
428fda81e2 feat(check): New GuardDuty check guardduty_centrally_managed (#2195)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-14 14:30:51 +02:00
Pepe Fagoaga
29c9ad602d fix(test): Mock audit info in Macie (#2213) 2023-04-14 14:29:19 +02:00
Pepe Fagoaga
44458e2a97 fix(test): Mock audit info codeartifact-config-ds (#2210) 2023-04-14 14:25:45 +02:00
Pepe Fagoaga
861fb1f54b fix(test): Mock audit info in Glacier (#2212) 2023-04-14 14:20:03 +02:00
Pepe Fagoaga
02534f4d55 fix(test): Mock audit info DynamoDB (#2211) 2023-04-14 14:19:08 +02:00
Pepe Fagoaga
5532cb95a2 fix(test): Mock audit info in appstream and autoscaling (#2209) 2023-04-14 14:06:07 +02:00
Pepe Fagoaga
9176e43fc9 fix(test): Mock audit info API Gateway (#2208) 2023-04-14 13:49:38 +02:00
Pepe Fagoaga
cb190f54fc fix(elb-test): Use a mocked current audit info (#2207) 2023-04-14 12:43:08 +02:00
Sergio Garcia
4be2539bc2 fix(resourceexplorer2): solve test and region (#2206) 2023-04-14 12:33:52 +02:00
Sergio Garcia
291e2adffa chore(regions_update): Changes in regions for AWS services. (#2205)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-14 12:32:58 +02:00
Gabriel Soltz
fa2ec63f45 feat(check): New Check and Service: resourceexplorer2_indexes_found (#2196)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-04-14 10:18:36 +02:00
Nacho Rivera
946c943457 fix(global services): fixed global services region (#2203)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-14 09:57:33 +02:00
Pepe Fagoaga
0e50766d6e fix(test): call cloudtrail_s3_dataevents_write_enabled check (#2204) 2023-04-14 09:35:29 +02:00
Sergio Garcia
58a1610ae0 chore(regions_update): Changes in regions for AWS services. (#2201)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-13 15:53:56 +02:00
Nacho Rivera
06dc21168a feat(orgs checks region): added region to all orgs checks (#2202) 2023-04-13 14:41:18 +02:00
Gabriel Soltz
305b67fbed feat(check): New check cloudtrail_bucket_requires_mfa_delete (#2194)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-13 14:18:31 +02:00
Sergio Garcia
4da6d152c3 feat(custom checks): add -x/--checks-folder for custom checks (#2191) 2023-04-13 13:44:25 +02:00
Sergio Garcia
25630f1ef5 chore(regions): sort AWS regions (#2198)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-12 13:24:14 +02:00
Sergio Garcia
9b01e3f1c9 chore(regions_update): Changes in regions for AWS services. (#2197)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-12 12:53:03 +02:00
Sergio Garcia
99450400eb chore(regions_update): Changes in regions for AWS services. (#2189)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-12 10:47:21 +02:00
Gabriel Soltz
2f8a8988d7 feat(checks): New IAM Checks no full access to critical services (#2183) 2023-04-12 07:47:21 +02:00
Sergio Garcia
9104d2e89e fix(kms): handle empty principal error (#2192) 2023-04-11 16:59:29 +02:00
Gabriel Soltz
e75022763c feat(checks): New iam_securityaudit_role_created (#2182) 2023-04-11 14:15:39 +02:00
Gabriel Soltz
f0f3fb337d feat(check): New CloudTrail check cloudtrail_insights_exist (#2184) 2023-04-11 13:49:54 +02:00
dependabot[bot]
f7f01a34c2 build(deps): bump google-api-python-client from 2.81.0 to 2.84.0 (#2188)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-11 12:13:41 +02:00
dependabot[bot]
f9f9ff0cb8 build(deps): bump alive-progress from 3.1.0 to 3.1.1 (#2187)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-11 08:13:17 +02:00
dependabot[bot]
522ba05ba8 build(deps): bump mkdocs-material from 9.1.5 to 9.1.6 (#2186)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-11 07:54:41 +02:00
Gabriel Soltz
f4f4093466 feat(backup): New backup service and checks (#2172)
Co-authored-by: Nacho Rivera <nacho@verica.io>
2023-04-11 07:43:40 +02:00
dependabot[bot]
2e16ab0c2c build(deps-dev): bump pytest from 7.2.2 to 7.3.0 (#2185)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-11 07:39:09 +02:00
Sergio Garcia
6f02606fb7 fix(iam): handle no display name error in service account (#2176) 2023-04-10 12:06:08 +02:00
Sergio Garcia
df40142b51 chore(regions_update): Changes in regions for AWS services. (#2180)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-10 12:05:48 +02:00
Sergio Garcia
cc290d488b chore(regions_update): Changes in regions for AWS services. (#2178)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-10 12:05:30 +02:00
Nacho Rivera
64328218fc feat(banner): azure credential banner (#2179) 2023-04-10 09:58:28 +02:00
Sergio Garcia
8d1356a085 fix(logging): add default resource id when no resources (#2177) 2023-04-10 08:02:40 +02:00
Sergio Garcia
4f39dd0f73 fix(version): handle request response property (#2175)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-05 15:17:30 +02:00
Pepe Fagoaga
54ffc8ae45 chore(release): 3.3.4 (#2174) 2023-04-05 14:18:07 +02:00
Sergio Garcia
78ab1944bd chore(regions_update): Changes in regions for AWS services. (#2173)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-05 12:32:25 +02:00
dependabot[bot]
434cf94657 build(deps-dev): bump moto from 4.1.5 to 4.1.6 (#2164)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-05 12:31:58 +02:00
Nacho Rivera
dcb893e230 fix(elbv2 desync check): Mixed elbv2 desync and smuggling (#2171) 2023-04-05 11:36:06 +02:00
Sergio Garcia
ce4fadc378 chore(regions_update): Changes in regions for AWS services. (#2170)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-05 08:47:19 +02:00
dependabot[bot]
5683d1b1bd build(deps): bump botocore from 1.29.100 to 1.29.105 (#2163)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-04 13:24:03 +02:00
dependabot[bot]
0eb88d0c10 build(deps): bump mkdocs-material from 9.1.4 to 9.1.5 (#2162)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-04 11:07:41 +02:00
Nacho Rivera
eb1367e54d fix(pipeline build): fixed wording when build and push (#2169) 2023-04-04 10:21:28 +02:00
dependabot[bot]
33a4786206 build(deps-dev): bump pylint from 2.17.0 to 2.17.2 (#2161)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-04 09:35:10 +02:00
Pepe Fagoaga
8c6606ad95 fix(dax): Call list_tags using the cluster ARN (#2167) 2023-04-04 09:30:36 +02:00
Pepe Fagoaga
cde9519a76 fix(iam): Handle LimitExceededException when calling generate_credential_report (#2168) 2023-04-04 09:29:27 +02:00
Pepe Fagoaga
7b2e0d79cb fix(cloudformation): Handle ValidationError (#2166) 2023-04-04 09:28:11 +02:00
Pepe Fagoaga
5b0da8e92a fix(rds): Handle DBSnapshotNotFound (#2165) 2023-04-04 09:27:36 +02:00
Michael Göhler
0126d2f77c fix(secretsmanager_automatic_rotation_enabled): Improve description for Secrets Manager secret rotation (#2156) 2023-04-03 11:01:29 +02:00
Sergio Garcia
0b436014c9 chore(regions_update): Changes in regions for AWS services. (#2159)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-03 11:01:15 +02:00
Igor Ceron
2cb7f223ed fix(docs): check extra_742 name adjusted in the V2 to V3 mapping (#2154) 2023-03-31 12:54:13 +02:00
Sergio Garcia
eca551ed98 chore(regions_update): Changes in regions for AWS services. (#2155)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-31 12:53:49 +02:00
Gabriel Soltz
608fd92861 feat(new_checks): New AWS Organizations related checks (#2133)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-30 17:36:23 +02:00
Sergio Garcia
e37d8fe45f chore(release): update Prowler Version to 3.3.2 (#2150)
Co-authored-by: github-actions <noreply@github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-30 11:33:33 +02:00
Sergio Garcia
4cce91ec97 chore(regions_update): Changes in regions for AWS services. (#2153)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-30 11:29:00 +02:00
Pepe Fagoaga
72fdde35dc fix(pypi): Set base branch when updating release version (#2152) 2023-03-30 10:59:58 +02:00
Pepe Fagoaga
d425187778 fix(pypi): Build from release branch (#2151) 2023-03-30 10:14:49 +02:00
Sergio Garcia
e419aa1f1a chore(regions_update): Changes in regions for AWS services. (#2149)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-29 11:45:35 +02:00
Pepe Fagoaga
5506547f7f fix(ssm): Handle ValidationException when retrieving documents (#2146) 2023-03-29 09:16:52 +02:00
Nacho Rivera
568ed72b3e fix(audit_info): azure subscriptions parsing error (#2147) 2023-03-29 09:15:53 +02:00
Nacho Rivera
e8cc0e6684 fix(delete check): delete check ec2_securitygroup_in_use_without_ingress_filtering (#2148) 2023-03-29 09:13:43 +02:00
Sergio Garcia
4331f69395 chore(regions_update): Changes in regions for AWS services. (#2145)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-28 13:08:02 +02:00
dependabot[bot]
7cc67ae7cb build(deps): bump botocore from 1.29.90 to 1.29.100 (#2142)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-28 13:07:23 +02:00
dependabot[bot]
244b3438fc build(deps): bump mkdocs-material from 9.1.3 to 9.1.4 (#2140)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-28 12:39:00 +02:00
Nacho Rivera
1a741f7ca0 fix(azure output): change default values of audit identity metadata (#2144) 2023-03-28 10:42:47 +02:00
dependabot[bot]
1447800e2b build(deps): bump pydantic from 1.10.6 to 1.10.7 (#2139)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-28 10:41:09 +02:00
Sergio Garcia
f968fe7512 fix(readme): add GCP provider to README introduction (#2143) 2023-03-28 10:40:56 +02:00
dependabot[bot]
0a2349fad7 build(deps): bump alive-progress from 3.0.1 to 3.1.0 (#2138)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-28 09:55:18 +02:00
Sergio Garcia
941b8cbc1e chore(docs): Developer Guide - how to create a new check (#2137) 2023-03-27 20:20:13 +02:00
Pepe Fagoaga
3b7b16acfd fix(resource_not_found): Handle error (#2136) 2023-03-27 17:27:50 +02:00
Nacho Rivera
fbc7bb68fc feat(defender service): retrieving key dicts with get (#2129) 2023-03-27 17:13:11 +02:00
Pepe Fagoaga
0d16880596 fix(s3): handle if ignore_public_acls is None (#2128) 2023-03-27 17:00:20 +02:00
Sergio Garcia
3b5218128f fix(brew): move brew formula action to the bottom (#2135) 2023-03-27 11:24:28 +02:00
Pepe Fagoaga
cb731bf1db fix(aws_provider): Fix assessment session name (#2132) 2023-03-25 00:11:16 +01:00
Sergio Garcia
7c4d6eb02d fix(gcp): handle error when Project ID is None (#2130) 2023-03-24 18:30:33 +01:00
Sergio Garcia
c14e7fb17a feat(gcp): add Google Cloud provider with 43 checks (#2125) 2023-03-24 13:38:41 +01:00
Sergio Garcia
fe57811bc5 chore(regions_update): Changes in regions for AWS services. (#2126)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-24 10:18:33 +01:00
Sergio Garcia
e073b48f7d chore(regions_update): Changes in regions for AWS services. (#2123)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-23 15:58:47 +01:00
Ben Nugent
a9df609593 fix(quickinventory): AttributeError when creating inventory table (#2122) 2023-03-23 10:22:14 +01:00
Sergio Garcia
6c3db9646e fix(output bucket): solve IsADirectoryError using compliance flag (#2121) 2023-03-22 13:38:41 +01:00
Sergio Garcia
ff9c4c717e chore(regions_update): Changes in regions for AWS services. (#2120)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-22 12:18:44 +01:00
Sergio Garcia
182374b46f docs: improve reporting documentation (#2119) 2023-03-22 10:02:52 +01:00
Sergio Garcia
0871cda526 docs: improve quick inventory section (#2117) 2023-03-21 18:09:40 +01:00
Toni de la Fuente
1b47cba37a docs(developer-guide): added phase 1 of the developer guide (#1904)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-03-21 15:35:26 +01:00
Pepe Fagoaga
e5bef36905 docs: Remove list severities (#2116) 2023-03-21 14:18:07 +01:00
Sergio Garcia
706d723703 chore(version): check latest version (#2106) 2023-03-21 11:16:13 +01:00
Sergio Garcia
51eacbfac5 feat(allowlist): add tags filter to allowlist (#2105) 2023-03-21 11:14:59 +01:00
dependabot[bot]
5c2a411982 build(deps): bump boto3 from 1.26.86 to 1.26.90 (#2114)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-21 11:04:26 +01:00
Sergio Garcia
08d65cbc41 chore(regions_update): Changes in regions for AWS services. (#2115)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-21 11:03:54 +01:00
dependabot[bot]
9d2bf429c1 build(deps): bump mkdocs-material from 9.1.2 to 9.1.3 (#2113)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-21 10:18:36 +01:00
dependabot[bot]
d34f863bd4 build(deps-dev): bump moto from 4.1.4 to 4.1.5 (#2111)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-21 09:27:44 +01:00
Sergio Garcia
b4abf1c2c7 chore(regions_update): Changes in regions for AWS services. (#2104)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-21 08:32:26 +01:00
dependabot[bot]
68baaf589e build(deps-dev): bump coverage from 7.2.1 to 7.2.2 (#2112)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-21 08:18:47 +01:00
dependabot[bot]
be74e41d84 build(deps-dev): bump openapi-spec-validator from 0.5.5 to 0.5.6 (#2110)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-21 07:52:50 +01:00
Sergio Garcia
848122b0ec chore(release): update Prowler Version to 3.3.0 (#2102)
Co-authored-by: github-actions <noreply@github.com>
2023-03-16 22:30:02 +01:00
Nacho Rivera
0edcb7c0d9 fix(ulimit check): try except when checking ulimit (#2096)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-03-16 17:39:46 +01:00
Pepe Fagoaga
cc58e06b5e fix(providers): Move provider's logic outside main (#2043)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-03-16 17:32:53 +01:00
Sergio Garcia
0d6ca606ea fix(ec2_securitygroup_allow_wide_open_public_ipv4): correct check title (#2101) 2023-03-16 17:25:32 +01:00
Sergio Garcia
75ee93789f chore(regions_update): Changes in regions for AWS services. (#2095)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-16 17:14:40 +01:00
Sergio Garcia
05daddafbf feat(SecurityHub): add compliance details to Security Hub findings (#2100) 2023-03-16 17:11:55 +01:00
Nacho Rivera
7bbce6725d fix(ulimit check): test only when platform is not windows (#2094) 2023-03-16 08:38:37 +01:00
Nacho Rivera
789b211586 feat(lambda_cloudtrail check): improved logic and status extended (#2092) 2023-03-15 12:32:58 +01:00
Sergio Garcia
826a043748 chore(regions_update): Changes in regions for AWS services. (#2091)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-15 12:28:03 +01:00
Sergio Garcia
6761048298 fix(cloudwatch): solve inexistent filterPattern error (#2087) 2023-03-14 14:46:34 +01:00
Sergio Garcia
738fc9acad feat(compliance): add compliance field to HTML, CSV and JSON outputs including frameworks and reqs (#2060)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-14 14:20:46 +01:00
Sergio Garcia
43c0540de7 chore(regions_update): Changes in regions for AWS services. (#2085)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-14 13:11:02 +01:00
Sergio Garcia
2d1c3d8121 fix(emr): solve emr_cluster_publicly_accesible error (#2086) 2023-03-14 13:10:21 +01:00
dependabot[bot]
f48a5c650d build(deps-dev): bump pytest-xdist from 3.2.0 to 3.2.1 (#2084)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-14 10:21:17 +01:00
dependabot[bot]
66c18eddb8 build(deps): bump botocore from 1.29.86 to 1.29.90 (#2083)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-14 10:01:23 +01:00
dependabot[bot]
fdd2ee6365 build(deps-dev): bump bandit from 1.7.4 to 1.7.5 (#2082)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-14 09:03:46 +01:00
dependabot[bot]
c207f60ad8 build(deps): bump pydantic from 1.10.5 to 1.10.6 (#2081)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-14 08:02:28 +01:00
dependabot[bot]
0eaa95c8c0 build(deps): bump mkdocs-material from 9.1.1 to 9.1.2 (#2080)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-14 07:48:02 +01:00
Pepe Fagoaga
df2fca5935 fix(bug_report): typo in bug reporting template (#2078)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-03-13 18:42:34 +01:00
Toni de la Fuente
dcaf5d9c7d update(docs): update readme with new ECR alias (#2079) 2023-03-13 18:07:51 +01:00
Sergio Garcia
0112969a97 fix(compliance): add check to 2.1.5 CIS (#2077) 2023-03-13 09:25:51 +01:00
Sergio Garcia
3ec0f3d69c chore(regions_update): Changes in regions for AWS services. (#2075)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-13 07:51:13 +01:00
Pepe Fagoaga
5555d300a1 fix(bug_report): Update wording (#2074) 2023-03-10 12:21:51 +01:00
Nacho Rivera
8155ef4b60 feat(templates): New versions of issues and fr templates (#2072) 2023-03-10 10:32:17 +01:00
Sergio Garcia
a12402f6c8 chore(regions_update): Changes in regions for AWS services. (#2073)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-10 10:27:29 +01:00
Sergio Garcia
cf28b814cb fix(ec2): avoid terminated instances (#2063) 2023-03-10 08:11:35 +01:00
Pepe Fagoaga
b05f67db19 chore(actions): Missing cache in the PR (#2067) 2023-03-09 11:50:49 +01:00
Pepe Fagoaga
260f4659d5 chore(actions): Use GHA cache (#2066) 2023-03-09 10:29:16 +01:00
dependabot[bot]
9e700f298c build(deps-dev): bump pylint from 2.16.4 to 2.17.0 (#2062)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 15:41:22 +01:00
dependabot[bot]
56510734c4 build(deps): bump boto3 from 1.26.85 to 1.26.86 (#2061)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 15:14:18 +01:00
Pepe Fagoaga
3938a4d14e chore(dependabot): Change to weekly (#2057) 2023-03-08 14:41:34 +01:00
Sergio Garcia
fa3b9eeeaf chore(regions_update): Changes in regions for AWS services. (#2058)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-08 14:38:56 +01:00
dependabot[bot]
eb9d6fa25c build(deps): bump botocore from 1.29.85 to 1.29.86 (#2054)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 09:57:44 +01:00
Alex Nelson
b53307c1c2 docs: Corrected spelling mistake in multiacount (#2056) 2023-03-08 09:57:08 +01:00
dependabot[bot]
c3fc708a66 build(deps): bump boto3 from 1.26.82 to 1.26.85 (#2053)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 09:03:00 +01:00
Sergio Garcia
b34ffbe6d0 feat(inventory): add tags to quick inventory (#2051) 2023-03-07 14:20:50 +01:00
Sergio Garcia
f364315e48 chore(iam): update Prowler permissions (#2050) 2023-03-07 14:14:31 +01:00
Sergio Garcia
3ddb5a13a5 fix(ulimit): handle low ulimit OSError (#2042)
Co-authored-by: Toni de la Fuente <toni@blyx.com>
2023-03-07 13:19:24 +01:00
dependabot[bot]
a24cc399a4 build(deps-dev): bump moto from 4.1.3 to 4.1.4 (#2045)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-07 12:45:50 +01:00
Sergio Garcia
305f4b2688 chore(regions_update): Changes in regions for AWS services. (#2049)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-07 11:27:28 +01:00
dependabot[bot]
9823171d65 build(deps-dev): bump pylint from 2.16.3 to 2.16.4 (#2048)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 10:11:19 +01:00
dependabot[bot]
4761bd8fda build(deps): bump mkdocs-material from 9.1.0 to 9.1.1 (#2047)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 09:33:19 +01:00
dependabot[bot]
9c22698723 build(deps-dev): bump pytest from 7.2.1 to 7.2.2 (#2046)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 08:32:19 +01:00
dependabot[bot]
e3892bbcc6 build(deps): bump botocore from 1.29.84 to 1.29.85 (#2044)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 08:18:53 +01:00
Sergio Garcia
629b156f52 fix(quick inventory): add non-tagged s3 buckets to inventory (#2041) 2023-03-06 16:55:03 +01:00
Gary Mclean
c45dd47d34 fix(windows-path): --list-services bad split (#2028)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-03-06 14:00:07 +01:00
Sergio Garcia
ef8831f784 feat(quick_inventory): add regions to inventory table (#2026) 2023-03-06 13:41:30 +01:00
Sergio Garcia
c5a42cf5de feat(rds_instance_transport_encrypted): add new check (#1963)
Co-authored-by: Toni de la Fuente <toni@blyx.com>
2023-03-06 13:18:41 +01:00
dependabot[bot]
90ebbfc20f build(deps-dev): bump pylint from 2.16.2 to 2.16.3 (#2038)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 13:18:26 +01:00
Fennerr
17cd0dc91d feat(new_check): cloudwatch_log_group_no_secrets_in_logs (#1980)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Jeffrey Souza <JeffreySouza@users.noreply.github.com>
2023-03-06 12:16:46 +01:00
dependabot[bot]
fa1f42af59 build(deps): bump botocore from 1.29.82 to 1.29.84 (#2037)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 12:14:48 +01:00
Sergio Garcia
f45ea1ab53 fix(check): change cloudformation_outputs_find_secrets name (#2027) 2023-03-06 12:11:58 +01:00
Sergio Garcia
0dde3fe483 chore(poetry): add poetry checks to pre-commit (#2040) 2023-03-06 11:44:04 +01:00
dependabot[bot]
277dc7dd09 build(deps-dev): bump freezegun from 1.2.1 to 1.2.2 (#2033)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 11:06:23 +01:00
dependabot[bot]
3215d0b856 build(deps-dev): bump coverage from 7.1.0 to 7.2.1 (#2032)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 09:55:19 +01:00
dependabot[bot]
0167d5efcd build(deps): bump mkdocs-material from 9.0.15 to 9.1.0 (#2031)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 09:15:44 +01:00
Sergio Garcia
b48ac808a6 chore(regions_update): Changes in regions for AWS services. (#2035)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-03 10:14:20 +01:00
dependabot[bot]
616524775c build(deps-dev): bump docker from 6.0.0 to 6.0.1 (#2030)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-03 10:02:11 +01:00
dependabot[bot]
5832849b11 build(deps): bump boto3 from 1.26.81 to 1.26.82 (#2029)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-03 09:43:43 +01:00
Sergio Garcia
467c5d01e9 fix(cloudtrail): list tags only in owned trails (#2025) 2023-03-02 16:16:19 +01:00
Sergio Garcia
24711a2f39 feat(tags): add resource tags to S-W services (#2020) 2023-03-02 14:21:05 +01:00
Nacho Rivera
24e8286f35 feat(): 7 chars in dispatch commit message (#2024) 2023-03-02 14:20:31 +01:00
Sergio Garcia
e8a1378ad0 feat(tags): add resource tags to G-R services (#2009) 2023-03-02 13:56:22 +01:00
Sergio Garcia
76bb418ea9 feat(tags): add resource tags to E services (#2007)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-02 13:55:26 +01:00
Nacho Rivera
cd8770a3e3 fix(actions): fixed dispatch commit message (#2023) 2023-03-02 13:55:03 +01:00
Sergio Garcia
da834c0935 feat(tags): add resource tags to C-D services (#2003)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-02 13:14:53 +01:00
Nacho Rivera
024ffb1117 fix(head): Pass head commit to dispatch action (#2022) 2023-03-02 12:06:41 +01:00
Nacho Rivera
eed7ab9793 fix(iam): refactor IAM service (#2010) 2023-03-02 11:16:05 +01:00
Sergio Garcia
032feb343f feat(tags): add resource tags in A services (#1997) 2023-03-02 10:59:49 +01:00
Pepe Fagoaga
eabccba3fa fix(actions): push should be true (#2019) 2023-03-02 10:37:29 +01:00
Nacho Rivera
d86d656316 feat(dispatch): add tag info to dispatch (#2002) 2023-03-02 10:31:30 +01:00
Sergio Garcia
fa73c91b0b chore(regions_update): Changes in regions for AWS services. (#2018)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-02 10:23:59 +01:00
Pepe Fagoaga
2eee50832d fix(actions): Stop using github storage (#2016) 2023-03-02 10:23:04 +01:00
Toni de la Fuente
b40736918b docs(install): Add brew and github installation to quick start (#1991) 2023-03-02 10:21:57 +01:00
Sergio Garcia
ffb1a2e30f chore(regions_update): Changes in regions for AWS services. (#1995)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-02 10:21:41 +01:00
Sergio Garcia
d6c3c0c6c1 feat(s3_bucket_level_public_access_block): new check (#1953) 2023-03-02 10:18:27 +01:00
dependabot[bot]
ee251721ac build(deps): bump botocore from 1.29.81 to 1.29.82 (#2015)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 09:53:24 +01:00
dependabot[bot]
fdbb9195d5 build(deps-dev): bump moto from 4.1.2 to 4.1.3 (#2014)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 09:23:48 +01:00
dependabot[bot]
c68b08d9af build(deps-dev): bump black from 22.10.0 to 22.12.0 (#2013)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 08:59:18 +01:00
dependabot[bot]
3653bbfca0 build(deps-dev): bump flake8 from 5.0.4 to 6.0.0 (#2012)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 08:32:41 +01:00
dependabot[bot]
05c7cc7277 build(deps): bump boto3 from 1.26.80 to 1.26.81 (#2011)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 07:54:33 +01:00
Sergio Garcia
5670bf099b chore(regions_update): Changes in regions for AWS services. (#2006)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-01 10:16:58 +01:00
Nacho Rivera
0c324b0f09 fix(awslambdacloudtrail): include advanced event and all lambdas in check (#1994) 2023-03-01 10:04:06 +01:00
dependabot[bot]
968557e38e build(deps): bump botocore from 1.29.80 to 1.29.81 (#2005)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-01 08:59:54 +01:00
dependabot[bot]
882cdebacb build(deps): bump boto3 from 1.26.79 to 1.26.80 (#2004) 2023-03-01 08:40:41 +01:00
Sergio Garcia
07753e1774 feat(encryption): add new encryption category (#1999) 2023-02-28 13:42:11 +01:00
Pepe Fagoaga
5b984507fc fix(emr): KeyError EmrManagedSlaveSecurityGroup (#2000) 2023-02-28 13:41:58 +01:00
Sergio Garcia
27df481967 chore(metadata): remove tags from metadata (#1998) 2023-02-28 12:27:59 +01:00
dependabot[bot]
0943031f23 build(deps): bump mkdocs-material from 9.0.14 to 9.0.15 (#1993)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-28 11:02:59 +01:00
dependabot[bot]
2d95168de0 build(deps): bump botocore from 1.29.79 to 1.29.80 (#1992)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-28 10:46:25 +01:00
Sergio Garcia
97cae8f92c chore(brew): bump new version to brew (#1990) 2023-02-27 18:07:05 +01:00
github-actions
eb213bac92 chore(release): 3.2.4 2023-02-27 14:25:52 +01:00
Sergio Garcia
8187788b2c fix(pypi-release.yml): create PR before replicating (#1986) 2023-02-27 14:16:53 +01:00
Sergio Garcia
c80e08abce fix(compliance): solve AWS compliance dir path (#1987) 2023-02-27 14:16:17 +01:00
github-actions[bot]
42fd851e5c chore(release): update Prowler Version to 3.2.3 (#1985)
Co-authored-by: github-actions <noreply@github.com>
Co-authored-by: jfagoagas <jfagoagas@users.noreply.github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-27 13:59:28 +01:00
Pepe Fagoaga
70e4ebccab chore(codeowners): Update team to OSS (#1984) 2023-02-27 13:31:16 +01:00
Sergio Garcia
140f87c741 chore(readme): add brew stats (#1982) 2023-02-27 13:17:48 +01:00
Pepe Fagoaga
b0d756123e fix(action): Use PathContext to get version changes (#1983) 2023-02-27 13:17:09 +01:00
Pedro Martín González
6188c92916 chore(compliance): implements dynamic handling of available compliance frameworks (#1977)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-27 10:47:47 +01:00
dependabot[bot]
34c6f96728 build(deps): bump boto3 from 1.26.74 to 1.26.79 (#1981)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-27 09:45:45 +01:00
dependabot[bot]
50fd047c0b build(deps): bump botocore from 1.29.78 to 1.29.79 (#1978)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-27 09:14:29 +01:00
Sergio Garcia
5bcc05b536 chore(regions_update): Changes in regions for AWS services. (#1972)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-24 12:10:27 +01:00
Sergio Garcia
ce7d6c8dd5 fix(service errors): solve EMR, VPC and ELBv2 service errors (#1974) 2023-02-24 10:49:54 +01:00
dependabot[bot]
d87a1e28b4 build(deps): bump alive-progress from 2.4.1 to 3.0.1 (#1965)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-24 10:12:52 +01:00
Pepe Fagoaga
227306c572 fix(acm): Fix issues with list-certificates (#1970) 2023-02-24 10:12:38 +01:00
dependabot[bot]
45c2691f89 build(deps): bump mkdocs-material from 8.2.1 to 9.0.14 (#1964)
Signed-off-by: dependabot[bot] <support@github.com>
2023-02-24 10:03:52 +01:00
Pepe Fagoaga
d0c81245b8 fix(directoryservice): tzinfo without _ (#1971) 2023-02-24 10:03:34 +01:00
dependabot[bot]
e494afb1aa build(deps): bump botocore from 1.29.74 to 1.29.78 (#1968)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-24 09:43:14 +01:00
dependabot[bot]
ecc3c1cf3b build(deps): bump azure-storage-blob from 12.14.1 to 12.15.0 (#1966)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-24 08:42:44 +01:00
dependabot[bot]
228b16416a build(deps): bump colorama from 0.4.5 to 0.4.6 (#1967)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-24 07:56:47 +01:00
Nacho Rivera
17eb74842a fix(cloudfront): handle empty objects in checks (#1962) 2023-02-23 16:57:44 +01:00
Nacho Rivera
c01ff74c73 fix(kms): handle if describe_keys returns no value 2023-02-23 15:54:23 +01:00
Sergio Garcia
f88613b26d fix(toml): add toml dependency to pypi release action (#1960) 2023-02-23 15:24:46 +01:00
Sergio Garcia
3464f4241f chore(release): 3.2.2 (#1959)
Co-authored-by: github-actions <noreply@github.com>
2023-02-23 15:10:03 +01:00
Sergio Garcia
849b703828 chore(resource-based scan): execute only applicable checks (#1934) 2023-02-23 13:30:21 +01:00
Sergio Garcia
4b935a40b6 fix(metadata): remove us-east-1 in remediation (#1958) 2023-02-23 13:19:10 +01:00
Sergio Garcia
5873a23ccb fix(key errors): solve EMR and IAM errors (#1957) 2023-02-23 13:15:00 +01:00
Nacho Rivera
eae2786825 fix(cloudtrail): Handle when the CloudTrail bucket is in another account (#1956) 2023-02-23 13:04:32 +01:00
github-actions[bot]
6407386de5 chore(regions_update): Changes in regions for AWS services. (#1952)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-23 12:24:36 +01:00
Sergio Garcia
3fe950723f fix(actions): add README to docker action and filter steps for releases (#1955) 2023-02-23 12:22:41 +01:00
Sergio Garcia
52bf6acd46 chore(regions): add secret token to avoid stuck checks (#1954) 2023-02-23 12:11:54 +01:00
Sergio Garcia
9590e7d7e0 chore(poetry): make python-poetry as packaging and dependency manager (#1935)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-23 11:50:29 +01:00
github-actions[bot]
7a08140a2d chore(regions_update): Changes in regions for AWS services. (#1950)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-23 08:42:36 +01:00
dependabot[bot]
d1491cfbd1 build(deps): bump boto3 from 1.26.74 to 1.26.76 (#1948)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-22 08:01:13 +01:00
dependabot[bot]
695b80549d build(deps): bump botocore from 1.29.75 to 1.29.76 (#1946)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-22 07:50:39 +01:00
Sergio Garcia
11c60a637f release: 3.2.1 (#1945) 2023-02-21 17:22:02 +01:00
Sergio Garcia
844ad70bb9 fix(cloudwatch): allow " in regex patterns (#1943) 2023-02-21 16:46:23 +01:00
Sergio Garcia
5ac7cde577 chore(iam_disable_N_days_credentials): improve checks logic (#1923) 2023-02-21 15:20:33 +01:00
Sergio Garcia
ce3ef0550f chore(Security Hub): add status extended to Security Hub (#1921) 2023-02-21 15:11:43 +01:00
Sergio Garcia
813f3e7d42 fix(errors): handle errors when S3 buckets or EC2 instances are deleted (#1942) 2023-02-21 12:31:23 +01:00
Sergio Garcia
d03f97af6b fix(regions): add unique branch name (#1941) 2023-02-21 11:53:36 +01:00
github-actions[bot]
019ab0286d chore(regions_update): Changes in regions for AWS services. (#1940)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-21 11:47:03 +01:00
Fennerr
c6647b4706 chore(secrets): Improve the status_extended with more information (#1937)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-21 11:37:20 +01:00
Sergio Garcia
f913536d88 fix(services): solve errors in EMR, RDS, S3 and VPC services (#1913) 2023-02-21 11:11:39 +01:00
dependabot[bot]
640d1bd176 build(deps-dev): bump moto from 4.1.2 to 4.1.3 (#1939)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-21 07:48:08 +01:00
dependabot[bot]
66baccf528 build(deps): bump botocore from 1.29.74 to 1.29.75 (#1938)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-21 07:32:44 +01:00
Sergio Garcia
6e6dacbace chore(security hub): add --skip-sh-update (#1911) 2023-02-20 09:58:00 +01:00
dependabot[bot]
cdbb10fb26 build(deps): bump boto3 from 1.26.72 to 1.26.74 (#1933)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-20 07:56:40 +01:00
dependabot[bot]
c34ba3918c build(deps): bump botocore from 1.29.73 to 1.29.74 (#1932)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-20 07:34:20 +01:00
Fennerr
fa228c876c fix(iam_rotate_access_key_90_days): check only active access keys (#1929)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-17 12:53:28 +01:00
dependabot[bot]
2f4d0af7d7 build(deps): bump botocore from 1.29.72 to 1.29.73 (#1926)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-17 12:14:23 +01:00
github-actions[bot]
2d3e5235a9 chore(regions_update): Changes in regions for AWS services. (#1927)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-17 11:13:13 +01:00
dependabot[bot]
8e91ccaa54 build(deps): bump boto3 from 1.26.71 to 1.26.72 (#1925)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-17 10:56:19 +01:00
Fennerr
6955658b36 fix(quick_inventory): handle ApiGateway resources (#1924)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-16 18:29:23 +01:00
Fennerr
dbb44401fd fix(ecs_task_definitions_no_environment_secrets): dump_env_vars is reinitialised (#1922) 2023-02-16 15:59:53 +01:00
dependabot[bot]
b42ed70c84 build(deps): bump botocore from 1.29.71 to 1.29.72 (#1919)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 14:21:46 +01:00
dependabot[bot]
a28276d823 build(deps): bump pydantic from 1.10.4 to 1.10.5 (#1918)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 13:51:37 +01:00
Pepe Fagoaga
fa4b27dd0e fix(compliance): Set Version as optional and fix list (#1899)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-16 12:47:39 +01:00
dependabot[bot]
0be44d5c49 build(deps): bump boto3 from 1.26.70 to 1.26.71 (#1920)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 12:38:10 +01:00
github-actions[bot]
2514596276 chore(regions_update): Changes in regions for AWS services. (#1910)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-16 11:56:10 +01:00
dependabot[bot]
7008d2a953 build(deps): bump botocore from 1.29.70 to 1.29.71 (#1909)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-15 07:39:16 +01:00
dependabot[bot]
2539fedfc4 build(deps): bump boto3 from 1.26.69 to 1.26.70 (#1908)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-15 07:12:18 +01:00
Ignacio Dominguez
b453df7591 fix(iam-credentials-expiration): IAM password policy expires passwords fix (#1903)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-14 13:54:58 +01:00
Pepe Fagoaga
9e5d5edcba fix(codebuild): Handle endTime in builds (#1900) 2023-02-14 11:27:53 +01:00
Nacho Rivera
2d5de6ff99 fix(cross account): cloudtrail s3 bucket logging (#1902) 2023-02-14 11:23:31 +01:00
github-actions[bot]
259e9f1c17 chore(regions_update): Changes in regions for AWS services. (#1901)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-14 10:28:04 +01:00
dependabot[bot]
daeb53009e build(deps): bump botocore from 1.29.69 to 1.29.70 (#1898)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-14 08:27:14 +01:00
dependabot[bot]
f12d271ca5 build(deps): bump boto3 from 1.26.51 to 1.26.69 (#1897)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-14 07:55:26 +01:00
dependabot[bot]
965185ca3b build(deps-dev): bump pylint from 2.16.1 to 2.16.2 (#1896) 2023-02-14 07:35:29 +01:00
Pepe Fagoaga
9c484f6a78 Release: 3.2.0 (#1894) 2023-02-13 15:42:57 +01:00
Fennerr
de18c3c722 docs: Minor changes to logging (#1893) 2023-02-13 15:31:23 +01:00
Fennerr
9be753b281 docs: Minor changes to the intro paragraph (#1892) 2023-02-13 15:20:48 +01:00
Pepe Fagoaga
d6ae122de1 docs: Boto3 configuration (#1885)
Co-authored-by: Toni de la Fuente <toni@blyx.com>
2023-02-13 15:20:33 +01:00
Pepe Fagoaga
c6b90044f2 chore(Dockerfile): Remove build files (#1886) 2023-02-13 15:19:05 +01:00
Nacho Rivera
14898b6422 fix(Azure_Audit_Info): Added audited_resources field (#1891) 2023-02-13 15:17:11 +01:00
Fennerr
26294b0759 docs: Update AWS Role Assumption (#1890) 2023-02-13 15:13:22 +01:00
Nacho Rivera
6da45b5c2b fix(list_checks): arn filtering checks after audit_info set (#1887) 2023-02-13 14:57:42 +01:00
Acknosyn
674332fddd update(logging): fix plural grammar for checks execution message (#1680)
Co-authored-by: Francesco Badraun <francesco.badraun@zxsecurity.co.nz>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-13 14:33:34 +01:00
Sergio Garcia
ab8942d05a fix(service errors): solve errors in IAM, S3, Lambda, DS, Cloudfront services (#1882)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-13 10:35:04 +01:00
github-actions[bot]
29790b8a5c chore(regions_update): Changes in regions for AWS services. (#1884)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-13 10:01:43 +01:00
dependabot[bot]
4a4c26ffeb build(deps): bump botocore from 1.29.51 to 1.29.69 (#1883)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-13 09:19:01 +01:00
Sergio Garcia
25c9bc07b2 chore(compliance): add manual checks to compliance CSV (#1872)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-10 12:38:13 +01:00
Nacho Rivera
d22d4c4c83 fix(cloudtrail_multi_region_enabled): reformat check (#1880) 2023-02-10 12:34:53 +01:00
Sergio Garcia
d88640fd20 fix(errors): solve several services errors (AccessAnalyzer, AppStream, KMS, S3, SQS, R53, IAM, CodeArtifact and EC2) (#1879) 2023-02-10 12:26:00 +01:00
github-actions[bot]
57a2fca3a4 chore(regions_update): Changes in regions for AWS services. (#1878)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-10 11:25:00 +01:00
Sergio Garcia
f796688c84 fix(metadata): typo in appstream_fleet_session_disconnect_timeout.metadata.json (#1875) 2023-02-09 16:22:19 +01:00
alexr3y
d6bbf8b7cc update(compliance): ENS RD2022 Spanish security framework updates (#1809)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-09 14:14:38 +01:00
Nacho Rivera
37ec460f64 fix(hardware mfa): changed hardware mfa description (#1873) 2023-02-09 14:06:54 +01:00
Sergio Garcia
004b9c95e4 fix(key_errors): handle Key Errors in Lambda and EMR (#1871)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-09 10:32:00 +01:00
github-actions[bot]
86e27b465a chore(regions_update): Changes in regions for AWS services. (#1870)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-09 10:17:18 +01:00
Nacho Rivera
5e9afddc3a fix(permissive role assumption): actions list handling (#1869) 2023-02-09 10:06:53 +01:00
Pepe Fagoaga
de281535b1 feat(boto3-config): Use standard retrier (#1868)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-02-09 09:58:47 +01:00
Pedro Martín González
9df7def14e feat(compliance): Add 17 new security compliance frameworks for AWS (#1824)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-09 07:39:57 +01:00
Sergio Garcia
5b9db9795d feat(new check): add accessanalyzer_enabled check (#1864)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-08 17:39:25 +01:00
Sergio Garcia
7d2ce7e6ab fix(action): do not trigger action when editing release (#1865) 2023-02-08 17:34:02 +01:00
Oleksandr Mykytenko
3e807af2b2 fix(checks): added validation for non-existing VPC endpoint policy (#1859)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-08 12:13:22 +01:00
Oleksandr Mykytenko
4c64dc7885 Fixed elbv2 service for GWLB resources (#1860)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-08 10:38:34 +01:00
github-actions[bot]
e7a7874b34 chore(regions_update): Changes in regions for AWS services. (#1863)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-08 10:36:03 +01:00
dependabot[bot]
c78a47788b build(deps): bump cryptography from 39.0.0 to 39.0.1 (#1862) 2023-02-08 08:02:47 +01:00
dependabot[bot]
922698c5d9 build(deps-dev): bump pytest-xdist from 3.1.0 to 3.2.0 (#1858) 2023-02-07 18:04:30 +01:00
1224 changed files with 62217 additions and 13565 deletions

2
.github/CODEOWNERS vendored

@@ -1 +1 @@
* @prowler-cloud/prowler-team
* @prowler-cloud/prowler-oss


@@ -1,52 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: "[Bug]: "
labels: bug, status/needs-triage
assignees: ''
---
<!--
Please use this template to create your bug report. By providing as much info as possible you help us understand the issue, reproduce it and resolve it for you quicker. Therefore, take a couple of extra minutes to make sure you have provided all info needed.
PROTIP: record your screen and attach it as a gif to showcase the issue.
- How to record and attach gif: https://bit.ly/2Mi8T6K
-->
**What happened?**
A clear and concise description of what the bug is or what is not working as expected
**How to reproduce it**
Steps to reproduce the behavior:
1. What command are you running?
2. Cloud provider you are launching
3. Environment you have like single account, multi-account, organizations, multi or single subscription, etc.
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots or Logs**
If applicable, add screenshots to help explain your problem.
Also, you can add logs (anonymize them first!). Here is a command that may help you share a log:
`prowler <your arguments> --log-level DEBUG --log-file $(date +%F)_debug.log` then attach the log file here.
**From where are you running Prowler?**
Please, complete the following information:
- Resource: (e.g. EC2 instance, Fargate task, Docker container manually, EKS, Cloud9, CodeBuild, workstation, etc.)
- OS: [e.g. Amazon Linux 2, Mac, Alpine, Windows, etc. ]
- Prowler Version [`prowler --version`]:
- Python version [`python --version`]:
- Pip version [`pip --version`]:
- Installation method (Are you running it from pip package or cloning the github repo?):
- Others:
**Additional context**
Add any other context about the problem here.

97
.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file

@@ -0,0 +1,97 @@
name: 🐞 Bug Report
description: Create a report to help us improve
title: "[Bug]: "
labels: ["bug", "status/needs-triage"]
body:
- type: textarea
id: reproduce
attributes:
label: Steps to Reproduce
description: Steps to reproduce the behavior
placeholder: |-
1. What command are you running?
2. Cloud provider you are launching
3. Environment you have, like single account, multi-account, organizations, multi or single subscription, etc.
4. See error
validations:
required: true
- type: textarea
id: expected
attributes:
label: Expected behavior
description: A clear and concise description of what you expected to happen.
validations:
required: true
- type: textarea
id: actual
attributes:
label: Actual Result with Screenshots or Logs
description: If applicable, add screenshots to help explain your problem. Also, you can add logs (anonymize them first!). Here is a command that may help you share a log `prowler <your arguments> --log-level DEBUG --log-file $(date +%F)_debug.log` then attach the log file here.
validations:
required: true
- type: dropdown
id: type
attributes:
label: How did you install Prowler?
options:
- Cloning the repository from github.com (git clone)
- From pip package (pip install prowler)
- From brew (brew install prowler)
- Docker (docker pull toniblyx/prowler)
validations:
required: true
- type: textarea
id: environment
attributes:
label: Environment Resource
description: From where are you running Prowler?
placeholder: |-
1. EC2 instance
2. Fargate task
3. Docker container locally
4. EKS
5. Cloud9
6. CodeBuild
7. Workstation
8. Other(please specify)
validations:
required: true
- type: textarea
id: os
attributes:
label: OS used
description: Which OS are you using?
placeholder: |-
1. Amazon Linux 2
2. MacOS
3. Alpine Linux
4. Windows
5. Other(please specify)
validations:
required: true
- type: input
id: prowler-version
attributes:
label: Prowler version
description: Which Prowler version are you using?
placeholder: |-
prowler --version
validations:
required: true
- type: input
id: pip-version
attributes:
label: Pip version
description: Which pip version are you using?
placeholder: |-
pip --version
validations:
required: true
- type: textarea
id: additional
attributes:
description: Additional context
label: Context
validations:
required: false


@@ -0,0 +1,36 @@
name: 💡 Feature Request
description: Suggest an idea for this project
labels: ["enhancement", "status/needs-triage"]
body:
- type: textarea
id: Problem
attributes:
label: New feature motivation
description: Is your feature request related to a problem? Please describe
placeholder: |-
1. A clear and concise description of what the problem is. Ex. I'm always frustrated when
validations:
required: true
- type: textarea
id: Solution
attributes:
label: Solution Proposed
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: Alternatives
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: true
- type: textarea
id: Context
attributes:
label: Additional context
description: Add any other context or screenshots about the feature request here.
validations:
required: false


@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement, status/needs-triage
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -8,7 +8,7 @@ updates:
- package-ecosystem: "pip" # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "daily"
interval: "weekly"
target-branch: master
labels:
- "dependencies"


@@ -10,113 +10,53 @@ on:
- "docs/**"
release:
types: [published, edited]
types: [published]
env:
AWS_REGION_STG: eu-west-1
AWS_REGION_PLATFORM: eu-west-1
AWS_REGION_PRO: us-east-1
AWS_REGION: us-east-1
IMAGE_NAME: prowler
LATEST_TAG: latest
STABLE_TAG: stable
TEMPORARY_TAG: temporary
DOCKERFILE_PATH: ./Dockerfile
PYTHON_VERSION: 3.9
jobs:
# Lint Dockerfile using Hadolint
# dockerfile-linter:
# runs-on: ubuntu-latest
# steps:
# -
# name: Checkout
# uses: actions/checkout@v3
# -
# name: Install Hadolint
# run: |
# VERSION=$(curl --silent "https://api.github.com/repos/hadolint/hadolint/releases/latest" | \
# grep '"tag_name":' | \
# sed -E 's/.*"v([^"]+)".*/\1/' \
# ) && curl -L -o /tmp/hadolint https://github.com/hadolint/hadolint/releases/download/v${VERSION}/hadolint-Linux-x86_64 \
# && chmod +x /tmp/hadolint
# -
# name: Run Hadolint
# run: |
# /tmp/hadolint util/Dockerfile
# Build Prowler OSS container
container-build:
container-build-push:
# needs: dockerfile-linter
runs-on: ubuntu-latest
env:
POETRY_VIRTUALENVS_CREATE: "false"
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Build
uses: docker/build-push-action@v2
with:
# Without pushing to registries
push: false
tags: ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }}
file: ${{ env.DOCKERFILE_PATH }}
outputs: type=docker,dest=/tmp/${{ env.IMAGE_NAME }}.tar
- name: Share image between jobs
uses: actions/upload-artifact@v2
with:
name: ${{ env.IMAGE_NAME }}.tar
path: /tmp/${{ env.IMAGE_NAME }}.tar
# Lint Prowler OSS container using Dockle
# container-linter:
# needs: container-build
# runs-on: ubuntu-latest
# steps:
# -
# name: Get container image from shared
# uses: actions/download-artifact@v2
# with:
# name: ${{ env.IMAGE_NAME }}.tar
# path: /tmp
# -
# name: Load Docker image
# run: |
# docker load --input /tmp/${{ env.IMAGE_NAME }}.tar
# docker image ls -a
# -
# name: Install Dockle
# run: |
# VERSION=$(curl --silent "https://api.github.com/repos/goodwithtech/dockle/releases/latest" | \
# grep '"tag_name":' | \
# sed -E 's/.*"v([^"]+)".*/\1/' \
# ) && curl -L -o dockle.deb https://github.com/goodwithtech/dockle/releases/download/v${VERSION}/dockle_${VERSION}_Linux-64bit.deb \
# && sudo dpkg -i dockle.deb && rm dockle.deb
# -
# name: Run Dockle
# run: dockle ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }}
# Push Prowler OSS container to registries
container-push:
# needs: container-linter
needs: container-build
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read # This is required for actions/checkout
steps:
- name: Get container image from shared
uses: actions/download-artifact@v2
- name: Setup python (release)
if: github.event_name == 'release'
uses: actions/setup-python@v2
with:
name: ${{ env.IMAGE_NAME }}.tar
path: /tmp
- name: Load Docker image
python-version: ${{ env.PYTHON_VERSION }}
- name: Install dependencies (release)
if: github.event_name == 'release'
run: |
docker load --input /tmp/${{ env.IMAGE_NAME }}.tar
docker image ls -a
pipx install poetry
pipx inject poetry poetry-bumpversion
- name: Update Prowler version (release)
if: github.event_name == 'release'
run: |
poetry version ${{ github.event.release.tag_name }}
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to Public ECR
uses: docker/login-action@v2
with:
@@ -124,55 +64,53 @@ jobs:
username: ${{ secrets.PUBLIC_ECR_AWS_ACCESS_KEY_ID }}
password: ${{ secrets.PUBLIC_ECR_AWS_SECRET_ACCESS_KEY }}
env:
AWS_REGION: ${{ env.AWS_REGION_PRO }}
AWS_REGION: ${{ env.AWS_REGION }}
- name: Tag (latest)
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Build and push container image (latest)
if: github.event_name == 'push'
run: |
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
- # Push to master branch - push "latest" tag
name: Push (latest)
if: github.event_name == 'push'
run: |
docker push ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
docker push ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
- # Tag the new release (stable and release tag)
name: Tag (release)
if: github.event_name == 'release'
run: |
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
- # Push the new release (stable and release tag)
name: Push (release)
if: github.event_name == 'release'
run: |
docker push ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
docker push ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
docker push ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
docker push ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
- name: Delete artifacts
if: always()
uses: geekyeggo/delete-artifact@v1
uses: docker/build-push-action@v2
with:
name: ${{ env.IMAGE_NAME }}.tar
push: true
tags: |
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
file: ${{ env.DOCKERFILE_PATH }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build and push container image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@v2
with:
# Use local context to get changes
# https://github.com/docker/build-push-action#path-context
context: .
push: true
tags: |
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
file: ${{ env.DOCKERFILE_PATH }}
cache-from: type=gha
cache-to: type=gha,mode=max
dispatch-action:
needs: container-push
needs: container-build-push
runs-on: ubuntu-latest
steps:
- name: Get latest commit info
if: github.event_name == 'push'
run: |
LATEST_COMMIT_HASH=$(echo ${{ github.event.after }} | cut -b -7)
echo "LATEST_COMMIT_HASH=${LATEST_COMMIT_HASH}" >> $GITHUB_ENV
- name: Dispatch event for latest
if: github.event_name == 'push'
run: |
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" -H "X-GitHub-Api-Version: 2022-11-28" --data '{"event_type":"dispatch","client_payload":{"version":"latest"}'
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" -H "X-GitHub-Api-Version: 2022-11-28" --data '{"event_type":"dispatch","client_payload":{"version":"latest", "tag": "${{ env.LATEST_COMMIT_HASH }}"}}'
- name: Dispatch event for release
if: github.event_name == 'release'
run: |


@@ -17,42 +17,48 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Install poetry
run: |
python -m pip install --upgrade pip
pipx install poetry
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: 'poetry'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pipenv
pipenv install --dev
pipenv run pip list
poetry install
poetry run pip list
VERSION=$(curl --silent "https://api.github.com/repos/hadolint/hadolint/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"v([^"]+)".*/\1/' \
) && curl -L -o /tmp/hadolint "https://github.com/hadolint/hadolint/releases/download/v${VERSION}/hadolint-Linux-x86_64" \
&& chmod +x /tmp/hadolint
- name: Poetry check
run: |
poetry lock --check
- name: Lint with flake8
run: |
pipenv run flake8 . --ignore=E266,W503,E203,E501,W605,E128 --exclude contrib
poetry run flake8 . --ignore=E266,W503,E203,E501,W605,E128 --exclude contrib
- name: Checking format with black
run: |
pipenv run black --check .
poetry run black --check .
- name: Lint with pylint
run: |
pipenv run pylint --disable=W,C,R,E -j 0 -rn -sn prowler/
poetry run pylint --disable=W,C,R,E -j 0 -rn -sn prowler/
- name: Bandit
run: |
pipenv run bandit -q -lll -x '*_test.py,./contrib/' -r .
poetry run bandit -q -lll -x '*_test.py,./contrib/' -r .
- name: Safety
run: |
pipenv run safety check
poetry run safety check
- name: Vulture
run: |
pipenv run vulture --exclude "contrib" --min-confidence 100 .
poetry run vulture --exclude "contrib" --min-confidence 100 .
- name: Hadolint
run: |
/tmp/hadolint Dockerfile --ignore=DL3013
- name: Test with pytest
run: |
pipenv run pytest tests -n auto
poetry run pytest tests -n auto


@@ -5,37 +5,75 @@ on:
types: [published]
env:
GITHUB_BRANCH: ${{ github.event.release.tag_name }}
RELEASE_TAG: ${{ github.event.release.tag_name }}
jobs:
release-prowler-job:
runs-on: ubuntu-latest
env:
POETRY_VIRTUALENVS_CREATE: "false"
name: Release Prowler to PyPI
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v3
with:
ref: ${{ env.GITHUB_BRANCH }}
- name: setup python
uses: actions/setup-python@v2
with:
python-version: 3.9 #install the python needed
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install build toml --upgrade
- name: Build package
run: python -m build
- name: Publish prowler-cloud package to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
pipx install poetry
pipx inject poetry poetry-bumpversion
- name: setup python
uses: actions/setup-python@v4
with:
password: ${{ secrets.PYPI_API_TOKEN }}
python-version: 3.9
cache: 'poetry'
- name: Change version and Build package
run: |
poetry version ${{ env.RELEASE_TAG }}
git config user.name "github-actions"
git config user.email "<noreply@github.com>"
git add prowler/config/config.py pyproject.toml
git commit -m "chore(release): ${{ env.RELEASE_TAG }}" --no-verify
git tag -fa ${{ env.RELEASE_TAG }} -m "chore(release): ${{ env.RELEASE_TAG }}"
git push -f origin ${{ env.RELEASE_TAG }}
git checkout -B release-${{ env.RELEASE_TAG }}
poetry build
- name: Publish prowler package to PyPI
run: |
poetry config pypi-token.pypi ${{ secrets.PYPI_API_TOKEN }}
poetry publish
# Create pull request with new version
- name: Create Pull Request
uses: peter-evans/create-pull-request@v4
with:
token: ${{ secrets.PROWLER_ACCESS_TOKEN }}
commit-message: "chore(release): update Prowler Version to ${{ env.RELEASE_TAG }}."
base: master
branch: release-${{ env.RELEASE_TAG }}
labels: "status/waiting-for-revision, severity/low"
title: "chore(release): update Prowler Version to ${{ env.RELEASE_TAG }}"
body: |
### Description
This PR updates Prowler Version to ${{ env.RELEASE_TAG }}.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
- name: Replicate PyPi Package
run: |
rm -rf ./dist && rm -rf ./build && rm -rf prowler_cloud.egg-info
rm -rf ./dist && rm -rf ./build && rm -rf prowler.egg-info
pip install toml
python util/replicate_pypi_package.py
python -m build
- name: Publish prowler package to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
poetry build
- name: Publish prowler-cloud package to PyPI
run: |
poetry config pypi-token.pypi ${{ secrets.PYPI_API_TOKEN }}
poetry publish
# Create pull request to github.com/Homebrew/homebrew-core to update prowler formula
- name: Bump Homebrew formula
uses: mislav/bump-homebrew-formula-action@v2
with:
password: ${{ secrets.PYPI_API_TOKEN }}
formula-name: prowler
base-branch: release-${{ env.RELEASE_TAG }}
env:
COMMITTER_TOKEN: ${{ secrets.PROWLER_ACCESS_TOKEN }}


@@ -52,9 +52,9 @@ jobs:
- name: Create Pull Request
uses: peter-evans/create-pull-request@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
token: ${{ secrets.PROWLER_ACCESS_TOKEN }}
commit-message: "feat(regions_update): Update regions for AWS services."
branch: "aws-services-regions-updated"
branch: "aws-services-regions-updated-${{ github.sha }}"
labels: "status/waiting-for-revision, severity/low"
title: "chore(regions_update): Changes in regions for AWS services."
body: |


@@ -1,7 +1,7 @@
repos:
## GENERAL
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.3.0
rev: v4.4.0
hooks:
- id: check-merge-conflict
- id: check-yaml
@@ -13,14 +13,22 @@ repos:
- id: pretty-format-json
args: ["--autofix", --no-sort-keys, --no-ensure-ascii]
## TOML
- repo: https://github.com/macisamuele/language-formatters-pre-commit-hooks
rev: v2.7.0
hooks:
- id: pretty-format-toml
args: [--autofix]
files: pyproject.toml
## BASH
- repo: https://github.com/koalaman/shellcheck-precommit
rev: v0.8.0
rev: v0.9.0
hooks:
- id: shellcheck
## PYTHON
- repo: https://github.com/myint/autoflake
rev: v1.7.7
rev: v2.0.1
hooks:
- id: autoflake
args:
@@ -31,30 +39,32 @@ repos:
]
- repo: https://github.com/timothycrosley/isort
rev: 5.10.1
rev: 5.12.0
hooks:
- id: isort
args: ["--profile", "black"]
- repo: https://github.com/psf/black
rev: 22.10.0
rev: 23.1.0
hooks:
- id: black
- repo: https://github.com/pycqa/flake8
rev: 5.0.4
rev: 6.0.0
hooks:
- id: flake8
exclude: contrib
args: ["--ignore=E266,W503,E203,E501,W605"]
- repo: https://github.com/haizaar/check-pipfile-lock
rev: v0.0.5
- repo: https://github.com/python-poetry/poetry
rev: 1.4.0 # add version here
hooks:
- id: check-pipfile-lock
- id: poetry-check
- id: poetry-lock
args: ["--no-update"]
- repo: https://github.com/hadolint/hadolint
rev: v2.12.0
rev: v2.12.1-beta
hooks:
- id: hadolint
args: ["--ignore=DL3013"]
@@ -66,6 +76,15 @@ repos:
entry: bash -c 'pylint --disable=W,C,R,E -j 0 -rn -sn prowler/'
language: system
- id: trufflehog
name: TruffleHog
description: Detect secrets in your data.
# entry: bash -c 'trufflehog git file://. --only-verified --fail'
# For running trufflehog in docker, use the following entry instead:
entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
language: system
stages: ["commit", "push"]
- id: pytest-check
name: pytest-check
entry: bash -c 'pytest tests -n auto'

23
.readthedocs.yaml Normal file

@@ -0,0 +1,23 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.9"
jobs:
post_create_environment:
# Install poetry
# https://python-poetry.org/docs/#installing-manually
- pip install poetry
# Tell poetry to not use a virtual environment
- poetry config virtualenvs.create false
post_install:
- poetry install -E docs
mkdocs:
configuration: mkdocs.yml


@@ -16,6 +16,7 @@ USER prowler
WORKDIR /home/prowler
COPY prowler/ /home/prowler/prowler/
COPY pyproject.toml /home/prowler
COPY README.md /home/prowler
# Install dependencies
ENV HOME='/home/prowler'
@@ -24,9 +25,9 @@ ENV PATH="$HOME/.local/bin:$PATH"
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir .
# Remove Prowler directory
# Remove Prowler directory and build files
USER 0
RUN rm -rf /home/prowler/prowler /home/prowler/pyproject.toml
RUN rm -rf /home/prowler/prowler /home/prowler/pyproject.toml /home/prowler/README.md /home/prowler/build /home/prowler/prowler.egg-info
USER prowler
ENTRYPOINT ["prowler"]


@@ -24,11 +24,11 @@ lint: ## Lint Code
##@ PyPI
pypi-clean: ## Delete the distribution files
rm -rf ./dist && rm -rf ./build && rm -rf prowler_cloud.egg-info
rm -rf ./dist && rm -rf ./build && rm -rf prowler.egg-info
pypi-build: ## Build package
$(MAKE) pypi-clean && \
python3 -m build
poetry build
pypi-upload: ## Upload package
python3 -m twine upload --repository pypi dist/*

42
Pipfile

@@ -1,42 +0,0 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
colorama = "0.4.4"
boto3 = "1.26.3"
arnparse = "0.0.2"
botocore = "1.27.8"
pydantic = "1.9.1"
schema = "0.7.5"
shodan = "1.28.0"
detect-secrets = "1.4.0"
alive-progress = "2.4.1"
tabulate = "0.9.0"
azure-identity = "1.12.0"
azure-storage-blob = "12.14.1"
msgraph-core = "0.2.2"
azure-mgmt-subscription = "3.1.1"
azure-mgmt-authorization = "3.0.0"
azure-mgmt-security = "3.0.0"
azure-mgmt-storage = "21.0.0"
[dev-packages]
black = "22.10.0"
pylint = "2.16.1"
flake8 = "5.0.4"
bandit = "1.7.4"
safety = "2.3.1"
vulture = "2.7"
moto = "4.1.2"
docker = "6.0.0"
openapi-spec-validator = "0.5.5"
pytest = "7.2.1"
pytest-xdist = "2.5.0"
coverage = "7.1.0"
sure = "2.0.1"
freezegun = "1.2.1"
[requires]
python_version = "3.9"

1703
Pipfile.lock generated

File diff suppressed because it is too large


@@ -11,14 +11,14 @@
</p>
<p align="center">
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog"><img alt="Slack Shield" src="https://img.shields.io/badge/slack-prowler-brightgreen.svg?logo=slack"></a>
<a href="https://pypi.org/project/prowler-cloud/"><img alt="Python Version" src="https://img.shields.io/pypi/v/prowler.svg"></a>
<a href="https://pypi.python.org/pypi/prowler-cloud/"><img alt="Python Version" src="https://img.shields.io/pypi/pyversions/prowler.svg"></a>
<a href="https://pypistats.org/packages/prowler"><img alt="PyPI Prowler Downloads" src="https://img.shields.io/pypi/dw/prowler.svg"></a>
<a href="https://pypistats.org/packages/prowler-cloud"><img alt="PyPI Prowler-Cloud Downloads" src="https://img.shields.io/pypi/dw/prowler-cloud.svg"></a>
<a href="https://pypi.org/project/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/v/prowler.svg"></a>
<a href="https://pypi.python.org/pypi/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/pyversions/prowler.svg"></a>
<a href="https://pypistats.org/packages/prowler"><img alt="PyPI Prowler Downloads" src="https://img.shields.io/pypi/dw/prowler.svg?label=prowler%20downloads"></a>
<a href="https://pypistats.org/packages/prowler-cloud"><img alt="PyPI Prowler-Cloud Downloads" src="https://img.shields.io/pypi/dw/prowler-cloud.svg?label=prowler-cloud%20downloads"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/cloud/build/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/image-size/toniblyx/prowler"></a>
<a href="https://gallery.ecr.aws/o4g1s5r6/prowler"><img width="120" height=19" alt="AWS ECR Gallery" src="https://user-images.githubusercontent.com/3985464/151531396-b6535a68-c907-44eb-95a1-a09508178616.png"></a>
<a href="https://gallery.ecr.aws/prowler-cloud/prowler"><img width="120" height=19" alt="AWS ECR Gallery" src="https://user-images.githubusercontent.com/3985464/151531396-b6535a68-c907-44eb-95a1-a09508178616.png"></a>
</p>
<p align="center">
<a href="https://github.com/prowler-cloud/prowler"><img alt="Repo size" src="https://img.shields.io/github/repo-size/prowler-cloud/prowler"></a>
@@ -33,12 +33,17 @@
# Description
`Prowler` is an Open Source security tool to perform AWS and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.
`Prowler` is an Open Source security tool to perform AWS, GCP and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.
It contains hundreds of controls covering CIS, PCI-DSS, ISO27001, GDPR, HIPAA, FFIEC, SOC2, AWS FTR, ENS and custom security frameworks.
# 📖 Documentation
The full documentation can now be found at [https://docs.prowler.cloud](https://docs.prowler.cloud)
## Looking for Prowler v2 documentation?
For Prowler v2 Documentation, please go to https://github.com/prowler-cloud/prowler/tree/2.12.1.
# ⚙️ Install
## Pip package
@@ -48,6 +53,7 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
pip install prowler
prowler -v
```
More details at https://docs.prowler.cloud
## Containers
@@ -60,30 +66,25 @@ The available versions of Prowler are the following:
The container images are available here:
- [DockerHub](https://hub.docker.com/r/toniblyx/prowler/tags)
- [AWS Public ECR](https://gallery.ecr.aws/o4g1s5r6/prowler)
- [AWS Public ECR](https://gallery.ecr.aws/prowler-cloud/prowler)
## From Github
Python >= 3.9 is required with pip and pipenv:
Python >= 3.9 is required with pip and poetry:
```
git clone https://github.com/prowler-cloud/prowler
cd prowler
pipenv shell
pipenv install
poetry shell
poetry install
python prowler.py -v
```
# 📖 Documentation
The full documentation can now be found at [https://docs.prowler.cloud](https://docs.prowler.cloud)
# 📐✏️ High level architecture
You can run Prowler from your workstation, an EC2 instance, Fargate or any other container, Codebuild, CloudShell and Cloud9.
![Architecture](https://github.com/prowler-cloud/prowler/blob/62c1ce73bbcdd6b9e5ba03dfcae26dfd165defd9/docs/img/architecture.png?raw=True)
![Architecture](https://github.com/prowler-cloud/prowler/assets/38561120/080261d9-773d-4af1-af79-217a273e3176)
# 📝 Requirements
@@ -162,6 +163,22 @@ Regarding the subscription scope, Prowler by default scans all the subscriptions
- `Reader`
## Google Cloud Platform
Prowler will follow the same credentials search as [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order):
1. [GOOGLE_APPLICATION_CREDENTIALS environment variable](https://cloud.google.com/docs/authentication/application-default-credentials#GAC)
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
Those credentials must be associated to a user or service account with proper permissions to do all checks. To make sure, add the following roles to the member associated with the credentials:
- Viewer
- Security Reviewer
- Stackdriver Account Viewer
> `prowler` will scan the project associated with the credentials.
# 💻 Basic Usage
To run prowler, you will need to specify the provider (e.g aws or azure):
@@ -236,12 +253,14 @@ prowler azure [--sp-env-auth, --az-cli-auth, --browser-auth, --managed-identity-
```
> By default, `prowler` will scan all Azure subscriptions.
# 🎉 New Features
## Google Cloud Platform
Optionally, you can provide the location of an application credential JSON file with the following argument:
```console
prowler gcp --credentials-file path
```
- Python: we got rid of all bash and it is now all in Python.
- Faster: huge performance improvements (same account from 2.5 hours to 4 minutes).
- Developers and community: we have made it easier to contribute with new checks and new compliance frameworks. We also included unit tests.
- Multi-cloud: in addition to AWS, we have added Azure, we plan to include GCP and OCI soon, let us know if you want to contribute!
# 📃 License


@@ -79,3 +79,21 @@ Regarding the subscription scope, Prowler by default scans all the subscriptions
- `Security Reader`
- `Reader`
## Google Cloud
### GCP Authentication
Prowler will follow the same credentials search as [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order):
1. [GOOGLE_APPLICATION_CREDENTIALS environment variable](https://cloud.google.com/docs/authentication/application-default-credentials#GAC)
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
Those credentials must be associated to a user or service account with proper permissions to do all checks. To make sure, add the following roles to the member associated with the credentials:
- Viewer
- Security Reviewer
- Stackdriver Account Viewer
> `prowler` will scan the project associated with the credentials.

Binary image files changed (contents not shown): one image updated (258 KiB → 283 KiB), docs/img/output-html.png added as a new file (631 KiB), another image added (320 KiB), and one image removed (220 KiB).


@@ -5,7 +5,7 @@
# Prowler Documentation
**Welcome to [Prowler Open Source v3](https://github.com/prowler-cloud/prowler/) Documentation!** 📄
**Welcome to [Prowler Open Source v3](https://github.com/prowler-cloud/prowler/) Documentation!** 📄
For **Prowler v2 Documentation**, please go [here](https://github.com/prowler-cloud/prowler/tree/2.12.0) to the branch and its README.md.
@@ -16,7 +16,7 @@ For **Prowler v2 Documentation**, please go [here](https://github.com/prowler-cl
## About Prowler
**Prowler** is an Open Source security tool to perform AWS and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.
**Prowler** is an Open Source security tool to perform AWS, Azure and Google Cloud security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.
It contains hundreds of controls covering CIS, PCI-DSS, ISO27001, GDPR, HIPAA, FFIEC, SOC2, AWS FTR, ENS and custom security frameworks.
@@ -40,7 +40,7 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
* `Python >= 3.9`
* `Python pip >= 3.9`
* AWS and/or Azure credentials
* AWS, GCP and/or Azure credentials
_Commands_:
@@ -54,7 +54,7 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
_Requirements_:
* Have `docker` installed: https://docs.docker.com/get-docker/.
* AWS and/or Azure credentials
* AWS, GCP and/or Azure credentials
* In the command below, change `-v` to your local directory path in order to access the reports.
_Commands_:
@@ -71,7 +71,7 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
_Requirements for Ubuntu 20.04.3 LTS_:
* AWS and/or Azure credentials
* AWS, GCP and/or Azure credentials
* Install python 3.9 with: `sudo apt-get install python3.9`
* Remove python 3.8 to avoid conflicts if you can: `sudo apt-get remove python3.8`
* Make sure you have the python3 distutils package installed: `sudo apt-get install python3-distutils`
@@ -87,11 +87,28 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
prowler -v
```
=== "GitHub"
_Requirements for Developers_:
* AWS, GCP and/or Azure credentials
* `git`, `Python >= 3.9`, `pip` and `poetry` installed (`pip install poetry`)
_Commands_:
```
git clone https://github.com/prowler-cloud/prowler
cd prowler
poetry shell
poetry install
python prowler.py -v
```
=== "Amazon Linux 2"
_Requirements_:
* AWS and/or Azure credentials
* AWS, GCP and/or Azure credentials
* Latest Amazon Linux 2 should come with Python 3.9 already installed however it may need pip. Install Python pip 3.9 with: `sudo dnf install -y python3-pip`.
* Make sure setuptools for python is already installed with: `pip3 install setuptools`
@@ -103,6 +120,20 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
prowler -v
```
=== "Brew"
_Requirements_:
* `Brew` installed in your Mac or Linux
* AWS, GCP and/or Azure credentials
_Commands_:
``` bash
brew install prowler
prowler -v
```
=== "AWS CloudShell"
Prowler can be easily executed in AWS CloudShell, but it has some prerequisites to be able to do so. AWS CloudShell is a container running `Amazon Linux release 2 (Karoo)` that comes with Python 3.7; since Prowler requires Python >= 3.9, we need to install a newer version of Python first. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
@@ -118,7 +149,7 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
./configure --enable-optimizations
sudo make altinstall
python3.9 --version
cd
cd
```
_Commands_:
@@ -154,7 +185,7 @@ The available versions of Prowler are the following:
The container images are available here:
- [DockerHub](https://hub.docker.com/r/toniblyx/prowler/tags)
- [AWS Public ECR](https://gallery.ecr.aws/o4g1s5r6/prowler)
- [AWS Public ECR](https://gallery.ecr.aws/prowler-cloud/prowler)
## High level architecture
@@ -163,7 +194,7 @@ You can run Prowler from your workstation, an EC2 instance, Fargate or any other
![Architecture](img/architecture.png)
## Basic Usage
To run Prowler, you will need to specify the provider (e.g aws or azure):
To run Prowler, you will need to specify the provider (e.g aws, gcp or azure):
> If no provider specified, AWS will be used for backward compatibility with most of v2 options.
```console
@@ -195,6 +226,7 @@ For executing specific checks or services you can use options `-c`/`checks` or `
```console
prowler azure --checks storage_blob_public_access_level_is_disabled
prowler aws --services s3 ec2
prowler gcp --services iam compute
```
Also, checks and services can be excluded with options `-e`/`--excluded-checks` or `--excluded-services`:
@@ -202,6 +234,7 @@ Also, checks and services can be excluded with options `-e`/`--excluded-checks`
```console
prowler aws --excluded-checks s3_bucket_public_access
prowler azure --excluded-services defender iam
prowler gcp --excluded-services kms
```
More options and execution methods that will save you time in [Miscellaneous](tutorials/misc.md).
@@ -221,6 +254,8 @@ prowler aws --profile custom-profile -f us-east-1 eu-south-2
```
> By default, `prowler` will scan all AWS regions.
See more details about AWS Authentication in [Requirements](getting-started/requirements.md)
### Azure
With Azure you need to specify which auth method is going to be used:
@@ -239,9 +274,28 @@ prowler azure --browser-auth
prowler azure --managed-identity-auth
```
More details in [Requirements](getting-started/requirements.md)
See more details about Azure Authentication in [Requirements](getting-started/requirements.md)
By default, Prowler scans all the subscriptions it is allowed to scan; if you want to scan a single subscription or several specific subscriptions, you can use the following flag (using az cli auth as an example):
```console
prowler azure --az-cli-auth --subscription-ids <subscription ID 1> <subscription ID 2> ... <subscription ID N>
```
### Google Cloud
By default, Prowler will use your User Account credentials; you can configure them using:
- `gcloud init` to use a new account
- `gcloud config set account <account>` to use an existing account
Then, obtain your access credentials using: `gcloud auth application-default login`
Otherwise, you can generate and download Service Account keys in JSON format (refer to https://cloud.google.com/iam/docs/creating-managing-service-account-keys) and provide the location of the file with the following argument:
```console
prowler gcp --credentials-file path
```
> `prowler` will scan the GCP project associated with the credentials.
See more details about GCP Authentication in [Requirements](getting-started/requirements.md)
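As an illustration of that lookup, the snippet below resolves the Application Default Credentials directly with the Google auth library, the same search order Prowler relies on (a minimal sketch, not Prowler's actual code):
```python
# Minimal sketch: resolve Application Default Credentials following the
# documented search order (env var, gcloud user credentials, attached SA).
import google.auth

# Returns the resolved credentials plus the project they belong to.
credentials, project_id = google.auth.default()
print(f"Default credentials resolved for project: {project_id}")
```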


@@ -7,35 +7,52 @@ You can use `-w`/`--allowlist-file` with the path of your allowlist yaml file, b
## Allowlist Yaml File Syntax
### Account, Check and/or Region can be * to apply for all the cases
### Resources is a list that can have either Regex or Keywords:
### Account, Check and/or Region can be * to apply for all the cases.
### Resources and tags are lists that can have either Regex or Keywords.
### Tags is an optional list that matches on tuples of 'key=value'; multiple tags are "ANDed" together.
### Use an alternation Regex to match one of multiple tags with "ORed" logic.
########################### ALLOWLIST EXAMPLE ###########################
Allowlist:
Accounts:
"123456789012":
Checks:
Checks:
"iam_user_hardware_mfa_enabled":
Regions:
Regions:
- "us-east-1"
Resources:
Resources:
- "user-1" # Will ignore user-1 in check iam_user_hardware_mfa_enabled
- "user-2" # Will ignore user-2 in check iam_user_hardware_mfa_enabled
"*":
Regions:
"ec2_*":
Regions:
- "*"
Resources:
- "test" # Will ignore every resource containing the string "test" in every account and region
Resources:
- "*" # Will ignore every EC2 check in every account and region
"*":
Regions:
- "*"
Resources:
- "test"
Tags:
- "test=test" # Will ignore every resource containing the string "test" and the tags 'test=test' and
- "project=test|project=stage" # either of ('project=test' OR project=stage) in account 123456789012 and every region
"*":
Checks:
Checks:
"s3_bucket_object_versioning":
Regions:
Regions:
- "eu-west-1"
- "us-east-1"
Resources:
Resources:
- "ci-logs" # Will ignore bucket "ci-logs" AND ALSO bucket "ci-logs-replica" in specified check and regions
- "logs" # Will ignore EVERY BUCKET containing the string "logs" in specified check and regions
- "[[:alnum:]]+-logs" # Will ignore all buckets containing the terms ci-logs, qa-logs, etc. in specified check and regions
"*":
Regions:
- "*"
Resources:
- "*"
Tags:
- "environment=dev" # Will ignore every resource containing the tag 'environment=dev' in every account and region
## Supported Allowlist Locations
@@ -70,6 +87,7 @@ prowler aws -w arn:aws:dynamodb:<region_name>:<account_id>:table/<table_name>
- Checks (String): This field can contain either a Prowler Check Name or an `*` (which applies to all the scanned checks).
- Regions (List): This field contains a list of regions where this allowlist rule is applied (it can also contain an `*` to apply to all scanned regions).
- Resources (List): This field contains a list of regex expressions that apply to the resources to be allowlisted.
- Tags (List): -Optional- This field contains a list of tuples in the form of 'key=value' that apply to the tags of the resources to be allowlisted.
<img src="../img/allowlist-row.png"/>
@@ -101,7 +119,7 @@ generates an Allowlist:
```
def handler(event, context):
    checks = {}
    checks["vpc_flow_logs_enabled"] = { "Regions": [ "*" ], "Resources": [ "" ] }
    checks["vpc_flow_logs_enabled"] = { "Regions": [ "*" ], "Resources": [ "" ], "Tags": [ "key=value" ] }  # the "Tags" key is optional
    al = { "Allowlist": { "Accounts": { "*": { "Checks": checks } } } }
    return al
```
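For reference, here is a minimal sketch (assuming PyYAML is installed; this helper is not part of Prowler) of serializing such a dictionary to the allowlist YAML syntax shown at the top of this page:
```python
# Illustrative only: write an allowlist dict using the YAML syntax above.
import yaml

allowlist = {
    "Allowlist": {
        "Accounts": {
            "*": {
                "Checks": {
                    "vpc_flow_logs_enabled": {
                        "Regions": ["*"],
                        "Resources": [""],
                        "Tags": ["key=value"],  # optional field
                    }
                }
            }
        }
    }
}

with open("allowlist.yaml", "w") as allowlist_file:
    yaml.safe_dump(allowlist, allowlist_file, sort_keys=False)
```
The resulting file can then be passed to Prowler with `-w allowlist.yaml`.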


@@ -0,0 +1,31 @@
# Boto3 Retrier Configuration
Prowler's AWS Provider uses the Boto3 [Standard](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html) retry mode to assist in retrying client calls to AWS services when these kinds of errors or exceptions are experienced. This mode includes the following behaviours:
- A default value of 3 for maximum retry attempts. This can be overridden with the `--aws-retries-max-attempts` argument, e.g. `--aws-retries-max-attempts 5`.
- Retry attempts for an expanded list of errors/exceptions:
```
# Transient errors/exceptions
RequestTimeout
RequestTimeoutException
PriorRequestNotComplete
ConnectionError
HTTPClientError
# Service-side throttling/limit errors and exceptions
Throttling
ThrottlingException
ThrottledException
RequestThrottledException
TooManyRequestsException
ProvisionedThroughputExceededException
TransactionInProgressException
RequestLimitExceeded
BandwidthLimitExceeded
LimitExceededException
RequestThrottled
SlowDown
EC2ThrottledException
```
- Retry attempts on nondescriptive, transient error codes. Specifically, these HTTP status codes: 500, 502, 503, 504.
- Any retry attempt will include an exponential backoff by a base factor of 2 for a maximum backoff time of 20 seconds.
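For context, the same behaviour expressed directly against Boto3 looks roughly like the sketch below (illustrative, not Prowler's actual code; the client and attempt count are examples):
```python
# Sketch of Boto3's "standard" retry mode with a custom attempt ceiling,
# which is what --aws-retries-max-attempts controls in Prowler.
import boto3
from botocore.config import Config

retry_config = Config(retries={"mode": "standard", "max_attempts": 5})

# Any client built with this config retries the throttling/transient errors
# listed above with exponential backoff (base factor 2, capped at 20s).
ec2_client = boto3.client("ec2", config=retry_config)
```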


@@ -1,6 +1,6 @@
# Scan Multiple AWS Accounts
Prowler can scan multiple accounts when it is ejecuted from one account that can assume a role in those given accounts to scan using [Assume Role feature](role-assumption.md) and [AWS Organizations integration feature](organizations.md).
Prowler can scan multiple accounts when it is executed from one account that can assume a role in those given accounts to scan using [Assume Role feature](role-assumption.md) and [AWS Organizations integration feature](organizations.md).
## Scan multiple specific accounts sequentially


@@ -0,0 +1,81 @@
# AWS Regions and Partitions
By default Prowler is able to scan the following AWS partitions:
- Commercial: `aws`
- China: `aws-cn`
- GovCloud (US): `aws-us-gov`
> To check the available regions for each partition and service please refer to the following document [aws_regions_by_service.json](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/aws_regions_by_service.json)
It is important to take into account that to scan the China (`aws-cn`) or GovCloud (`aws-us-gov`) partitions you must either have a valid region for that partition configured in your AWS credentials or specify the regions you want to audit in that partition using the `-f/--region` flag.
> Please, refer to https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials for more information about the AWS credentials configuration.
You can get more information about the available partitions and regions in the following [Botocore](https://github.com/boto/botocore) [file](https://github.com/boto/botocore/blob/22a19ea7c4c2c4dd7df4ab8c32733cba0c7597a4/botocore/data/partitions.json).
## AWS China
To scan your AWS account in the China partition (`aws-cn`):
- Using the `-f/--region` flag:
```
prowler aws --region cn-north-1 cn-northwest-1
```
- Using the region configured in your AWS profile at `~/.aws/credentials` or `~/.aws/config`:
```
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXX
region = cn-north-1
```
> With this option all the partition regions will be scanned without the need to use the `-f/--region` flag
## AWS GovCloud (US)
To scan your AWS account in the GovCloud (US) partition (`aws-us-gov`):
- Using the `-f/--region` flag:
```
prowler aws --region us-gov-east-1 us-gov-west-1
```
- Using the region configured in your AWS profile at `~/.aws/credentials` or `~/.aws/config`:
```
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXX
region = us-gov-east-1
```
> With this option all the partition regions will be scanned without the need to use the `-f/--region` flag
## AWS ISO (US & Europe)
For the AWS ISO partitions, which are known as "secret partitions" and are air-gapped from the Internet, there is no built-in way to scan them. If you want to audit an AWS account in one of the AWS ISO partitions, you should manually update [aws_regions_by_service.json](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/aws_regions_by_service.json) to include the partition, regions and services, e.g.:
```json
"iam": {
"regions": {
"aws": [
"eu-west-1",
"us-east-1",
],
"aws-cn": [
"cn-north-1",
"cn-northwest-1"
],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
],
"aws-iso": [
"aws-iso-global",
"us-iso-east-1",
"us-iso-west-1"
],
"aws-iso-b": [
"aws-iso-b-global",
"us-isob-east-1"
],
"aws-iso-e": [],
}
},
```


@@ -5,6 +5,13 @@ Prowler uses the AWS SDK (Boto3) underneath so it uses the same authentication m
However, there are a few ways to run Prowler against multiple accounts using the IAM Assume Role feature, depending on your use case:
1. You can just set up your custom profile inside `~/.aws/config` with all needed information about the role to assume then call it with `prowler aws -p/--profile your-custom-profile`.
- An example profile that performs role-chaining is given below. The `credential_source` can either be set to `Environment`, `Ec2InstanceMetadata`, or `EcsContainer`.
- Alternatively, you could use `source_profile` instead of `credential_source` to specify a separate named profile that contains IAM user credentials with permission to assume the target role. More information can be found [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html).
```
[profile crossaccountrole]
role_arn = arn:aws:iam::234567890123:role/SomeRole
credential_source = EcsContainer
```
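For reference, a hypothetical `source_profile` variant of the same profile (the `credentials-holder` profile name and its keys are placeholders) could look like:
```
[profile credentials-holder]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXX

[profile crossaccountrole]
role_arn = arn:aws:iam::234567890123:role/SomeRole
source_profile = credentials-holder
```
Either way, you would then run `prowler aws -p crossaccountrole`.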
2. You can use `-R`/`--role <role_arn>` and Prowler will get those temporary credentials using `Boto3` and run against that given account.
```sh
@@ -20,6 +27,6 @@ prowler aws -T/--session-duration <seconds> -I/--external-id <external_id> -R ar
To create a role to be assumed in one or multiple accounts, you can use the following [template](https://github.com/prowler-cloud/prowler/blob/master/permissions/create_role_to_assume_cfn.yaml) either as a CloudFormation Stack or StackSet and adapt it.
> _NOTE 1 about Session Duration_: Depending on the mount of checks you run and the size of your infrastructure, Prowler may require more than 1 hour to finish. Use option `-T <seconds>` to allow up to 12h (43200 seconds). To allow more than 1h you need to modify _"Maximum CLI/API session duration"_ for that particular role, read more [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session).
> _NOTE 1 about Session Duration_: Depending on the number of checks you run and the size of your infrastructure, Prowler may require more than 1 hour to finish. Use option `-T <seconds>` to allow up to 12h (43200 seconds). To allow more than 1h you need to modify _"Maximum CLI/API session duration"_ for that particular role, read more [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session).
> _NOTE 2 about Session Duration_: Bear in mind that if you are using roles assumed by role chaining there is a hard limit of 1 hour, so consider not using role chaining if possible; read more about that in footnote 1 below the table [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html).

View File

@@ -13,26 +13,55 @@ Before sending findings to Prowler, you will need to perform next steps:
- Using the AWS Management Console:
![Screenshot 2020-10-29 at 10 26 02 PM](https://user-images.githubusercontent.com/3985464/97634660-5ade3400-1a36-11eb-9a92-4a45cc98c158.png)
3. Allow Prowler to import its findings to AWS Security Hub by adding the policy below to the role or user running Prowler:
- [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/iam/prowler-security-hub.json)
- [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json)
Once it is enabled, it is as simple as running the command below (for all regions):
```sh
./prowler aws -S
prowler aws -S
```
or for only one filtered region like eu-west-1:
```sh
./prowler -S -f eu-west-1
prowler -S -f eu-west-1
```
> **Note 1**: It is recommended to send only fails to Security Hub; that is possible by adding `-q` to the command.
> **Note 2**: Since Prowler perform checks to all regions by defauls you may need to filter by region when runing Security Hub integration, as shown in the example above. Remember to enable Security Hub in the region or regions you need by calling `aws securityhub enable-security-hub --region <region>` and run Prowler with the option `-f <region>` (if no region is used it will try to push findings in all regions hubs).
> **Note 2**: Since Prowler performs checks in all regions by default, you may need to filter by region when running the Security Hub integration, as shown in the example above. Remember to enable Security Hub in the region or regions you need by calling `aws securityhub enable-security-hub --region <region>` and run Prowler with the option `-f <region>` (if no region is given it will try to push findings to the hubs of all regions). Prowler will send findings to the Security Hub of the region where the scanned resource is located.
> **Note 3** to have updated findings in Security Hub you have to run Prowler periodically. Once a day or every certain amount of hours.
> **Note 3**: To have updated findings in Security Hub you have to run Prowler periodically, e.g., once a day or every few hours.
Once you run Prowler for the first time you will be able to see its findings in the Findings section:
![Screenshot 2020-10-29 at 10 29 05 PM](https://user-images.githubusercontent.com/3985464/97634676-66c9f600-1a36-11eb-9341-70feb06f6331.png)
## Send findings to Security Hub assuming an IAM Role
When you are auditing a multi-account AWS environment, you can send findings to a Security Hub of another account by assuming an IAM role from that account using the `-R` flag in the Prowler command:
```sh
prowler -S -R arn:aws:iam::123456789012:role/ProwlerExecRole
```
> Remember that the role used needs permissions to send findings to Security Hub. To get more information about the required permissions, please refer to the following IAM policy: [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json)
## Send only failed findings to Security Hub
When using Security Hub it is recommended to send only the failed findings. To follow that recommendation you can add the `-q` flag to the Prowler command:
```sh
prowler -S -q
```
## Skip sending updates of findings to Security Hub
By default, Prowler archives in Security Hub all of its findings that have not appeared in the last scan.
You can skip this logic by using the option `--skip-sh-update` so Prowler will not archive older findings:
```sh
prowler -S --skip-sh-update
```
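Putting the options from this page together, a periodic run that sends only failed findings of one region and does not archive older ones could look like:
```sh
prowler aws -S -q -f eu-west-1 --skip-sh-update
```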

View File

@@ -1,6 +1,6 @@
# Check mapping between Prowler v3 and v2
Prowler v3 comes with different identifiers but we maintained the same checks than v2. The reason of the change is because in previows versions of Prowler, check names were mostly based on CIS Benchmark for AWS, in v3 all checks are independent from any security framework and they have its own name and ID.
Prowler v3 comes with different identifiers but we maintained the same checks that were implemented in v2. The reason for this change is that in previous versions of Prowler, check names were mostly based on the CIS Benchmark for AWS. In v3 all checks are independent from any security framework and each has its own name and ID.
If you need more information about how the new compliance implementation works in Prowler v3, see the [Compliance](../../compliance/) section.
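Since the mapping is published as a plain Python dictionary (`checks_v3_to_v2_mapping`, excerpted in the diff below), translating identifiers in either direction is a one-liner. A minimal sketch, assuming you have copied the full dictionary into scope:
```python
# Two entries copied from the mapping below; use the full dictionary in practice
checks_v3_to_v2_mapping = {
    "awslambda_function_url_public": "extra7179",
    "cloudformation_stack_outputs_find_secrets": "extra742",
}

# v3 -> v2 lookup
print(checks_v3_to_v2_mapping["awslambda_function_url_public"])  # extra7179

# v2 -> v3 lookup by inverting the dictionary
checks_v2_to_v3_mapping = {v2: v3 for v3, v2 in checks_v3_to_v2_mapping.items()}
print(checks_v2_to_v3_mapping["extra742"])  # cloudformation_stack_outputs_find_secrets
```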
@@ -31,7 +31,7 @@ checks_v3_to_v2_mapping = {
"awslambda_function_url_cors_policy": "extra7180",
"awslambda_function_url_public": "extra7179",
"awslambda_function_using_supported_runtimes": "extra762",
"cloudformation_outputs_find_secrets": "extra742",
"cloudformation_stack_outputs_find_secrets": "extra742",
"cloudformation_stacks_termination_protection_enabled": "extra7154",
"cloudfront_distributions_field_level_encryption_enabled": "extra767",
"cloudfront_distributions_geo_restrictions_enabled": "extra732",
@@ -113,7 +113,6 @@ checks_v3_to_v2_mapping = {
"ec2_securitygroup_allow_wide_open_public_ipv4": "extra778",
"ec2_securitygroup_default_restrict_traffic": "check43",
"ec2_securitygroup_from_launch_wizard": "extra7173",
"ec2_securitygroup_in_use_without_ingress_filtering": "extra74",
"ec2_securitygroup_not_used": "extra75",
"ec2_securitygroup_with_many_ingress_egress_rules": "extra777",
"ecr_repositories_lifecycle_policy_enabled": "extra7194",
@@ -138,7 +137,6 @@ checks_v3_to_v2_mapping = {
"elbv2_internet_facing": "extra79",
"elbv2_listeners_underneath": "extra7158",
"elbv2_logging_enabled": "extra717",
"elbv2_request_smugling": "extra7142",
"elbv2_ssl_listeners": "extra793",
"elbv2_waf_acl_attached": "extra7129",
"emr_cluster_account_public_block_enabled": "extra7178",

View File

@@ -11,6 +11,24 @@ Currently, the available frameworks are:
- `cis_1.4_aws`
- `cis_1.5_aws`
- `ens_rd2022_aws`
- `aws_audit_manager_control_tower_guardrails_aws`
- `aws_foundational_security_best_practices_aws`
- `cisa_aws`
- `fedramp_low_revision_4_aws`
- `fedramp_moderate_revision_4_aws`
- `ffiec_aws`
- `gdpr_aws`
- `gxp_eu_annex_11_aws`
- `gxp_21_cfr_part_11_aws`
- `hipaa_aws`
- `nist_800_53_revision_4_aws`
- `nist_800_53_revision_5_aws`
- `nist_800_171_revision_2_aws`
- `nist_csf_1.1_aws`
- `pci_3.2.1_aws`
- `rbi_cyber_security_framework_aws`
- `soc2_aws`
## List Requirements of Compliance Frameworks
For each compliance framework, you can use option `--list-compliance-requirements` to list its requirements:
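For example, for one of the frameworks listed above:
```console
prowler aws --list-compliance-requirements cis_1.5_aws
```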
@@ -63,36 +81,4 @@ Standard results will be shown and additionally the framework information as the
## Create and contribute adding other Security Frameworks
If you want to create or contribute your own security frameworks, or add public ones to Prowler, first make sure the needed checks are available; if not, you have to create them. Then create a compliance file per provider, like in `prowler/compliance/aws/`, name it `<framework>_<version>_<provider>.json` and follow the format below to create yours.
Each framework version file has the following high-level structure (one requirement, which can also be called a control, can be linked to multiple Prowler checks):
- `Framework`: string. Distinctive name of the framework, like CIS.
- `Provider`: string. Provider where the framework applies, such as AWS, Azure, OCI,...
- `Version`: string. Version of the framework itself, like 1.4 for CIS.
- `Requirements`: array of objects. Include all requirements or controls with the mapping to Prowler.
- `Requirements_Id`: string. Unique identifier for each requirement in the specific framework.
- `Requirements_Description`: string. Description as in the framework.
- `Requirements_Attributes`: array of objects. Includes all the needed attributes per requirement, like levels, sections, etc.; anything that helps to create a dedicated report with the results of the findings. Attributes should be taken as closely as possible from the framework's own terminology.
- `Requirements_Checks`: array. Prowler checks that are needed to prove this requirement. It can be one or multiple checks. If no automation is possible, this can be empty.
```
{
  "Framework": "<framework>-<provider>",
  "Version": "<version>",
  "Requirements": [
    {
      "Id": "<unique-id>",
      "Description": "Requirement full description",
      "Checks": [
        "The Prowler check or checks that are going to be executed"
      ],
      "Attributes": [
        {
          <Add here your custom attributes.>
        }
      ]
    }
  ]
}
```
Finally, to have a proper output file for your reports, your framework data model has to be created in `prowler/lib/outputs/models.py` and also the CLI table output in `prowler/lib/outputs/compliance.py`.
This information is part of the Developer Guide and can be found here: https://docs.prowler.cloud/en/latest/tutorials/developer-guide/.

View File

@@ -0,0 +1,281 @@
# Developer Guide
You can extend Prowler in many different ways. In most cases you will want to create your own checks and compliance security frameworks; here you can learn how to get started with them. We also cover how to create custom outputs, integrations and more.
## Get the code and install all dependencies
First of all, you need Python 3.9 or higher and pip installed to be able to install all the required dependencies. Once that is satisfied, go ahead and clone the repo:
```
git clone https://github.com/prowler-cloud/prowler
cd prowler
```
For isolation and to avoid conflicts with other environments, we recommend using `poetry`:
```
pip install poetry
```
Then install all dependencies including the ones for developers:
```
poetry install
poetry shell
```
## Contributing with your code or fixes to Prowler
This repo has git pre-commit hooks managed via the pre-commit tool. Install it however you like, then in the root of this repo run:
```
pre-commit install
```
You should get an output like the following:
```
pre-commit installed at .git/hooks/pre-commit
```
Before we merge any of your pull requests we run checks on the code. We use the following tools and automation to make sure the code is secure and the dependencies are up to date (these should have been installed already if you ran `poetry install`):
- `bandit` for code security review.
- `safety` and `dependabot` for dependencies.
- `hadolint` and `dockle` for our containers security.
- `snyk` in Docker Hub.
- `clair` in Amazon ECR.
- `vulture`, `flake8`, `black` and `pylint` for formatting and best practices.
You can see all dependencies in the `pyproject.toml` file.
## Create a new check for a Provider
### If the check you want to create belongs to an existing service
To create a new check, you will need to create a folder inside the specific service, i.e. `prowler/providers/<provider>/services/<service>/<check_name>/`, with the name of the check following the pattern: `service_subservice_action`.
Inside that folder, create the following files:
- An empty `__init__.py`: to make Python treat this check folder as a package.
- A `check_name.py` containing the check's logic, for example:
```
# Import the Check_Report of the specific provider
from prowler.lib.check.models import Check, Check_Report_AWS

# Import the client of the specific service
from prowler.providers.aws.services.ec2.ec2_client import ec2_client


# Create the class for the check
class ec2_ebs_volume_encryption(Check):
    def execute(self):
        findings = []
        # Iterate over the service's assets to be analyzed
        for volume in ec2_client.volumes:
            # Initialize a Check Report for each item and assign the region, resource_id, resource_arn and resource_tags
            report = Check_Report_AWS(self.metadata())
            report.region = volume.region
            report.resource_id = volume.id
            report.resource_arn = volume.arn
            report.resource_tags = volume.tags
            # Implement the check logic and set status to PASS or FAIL with a status_extended message
            if volume.encrypted:
                report.status = "PASS"
                report.status_extended = f"EBS volume {volume.id} is encrypted."
            else:
                report.status = "FAIL"
                report.status_extended = f"EBS volume {volume.id} is unencrypted."
            findings.append(report)  # Append a report for each item
        return findings
```
- A `check_name.metadata.json` containing the check's metadata, for example:
```
{
  "Provider": "aws",
  "CheckID": "ec2_ebs_volume_encryption",
  "CheckTitle": "Ensure there are no EBS Volumes unencrypted.",
  "CheckType": [
    "Data Protection"
  ],
  "ServiceName": "ec2",
  "SubServiceName": "volume",
  "ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
  "Severity": "medium",
  "ResourceType": "AwsEc2Volume",
  "Description": "Ensure there are no EBS Volumes unencrypted.",
  "Risk": "Data encryption at rest prevents data visibility in the event of its unauthorized access or theft.",
  "RelatedUrl": "",
  "Remediation": {
    "Code": {
      "CLI": "",
      "NativeIaC": "",
      "Other": "",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Encrypt all EBS volumes and enable encryption by default. You can configure your AWS account to enforce the encryption of the new EBS volumes and snapshot copies that you create. For example, Amazon EBS encrypts the EBS volumes created when you launch an instance and the snapshots that you copy from an unencrypted snapshot.",
      "Url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html"
    }
  },
  "Categories": [
    "encryption"
  ],
  "DependsOn": [],
  "RelatedTo": [],
  "Notes": ""
}
```
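Once both files are in place, you can run just your new check to test it. A minimal sketch, assuming Prowler's `-c/--checks` selector and the check ID used above:
```
prowler aws -c ec2_ebs_volume_encryption
```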
### If the check you want to create belongs to a service not yet supported by Prowler, you will need to create a new service first
To create a new service, you will need to create a folder inside the specific provider, i.e. `prowler/providers/<provider>/services/<service>/`.
Inside that folder, create the following files:
- An empty `__init__.py`: to make Python treat this service folder as a package.
- A `<service>_service.py`, containing all the service's logic and API Calls:
```
# You must import the following libraries
import threading
from typing import Optional

from pydantic import BaseModel

from prowler.lib.logger import logger
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
from prowler.providers.aws.aws_provider import generate_regional_clients


# Create a class for the Service
################## <Service>
class <Service>:
    def __init__(self, audit_info):
        self.service = "<service>"  # The name of the service boto3 client
        self.session = audit_info.audit_session
        self.audited_account = audit_info.audited_account
        self.audit_resources = audit_info.audit_resources
        self.regional_clients = generate_regional_clients(self.service, audit_info)
        self.<items> = []  # Create an empty list of the items to be gathered, e.g., instances
        self.__threading_call__(self.__describe_<items>__)
        self.__describe_<item>__()  # Optionally you can create another function to retrieve more data about each item

    def __get_session__(self):
        return self.session

    def __threading_call__(self, call):
        threads = []
        for regional_client in self.regional_clients.values():
            threads.append(threading.Thread(target=call, args=(regional_client,)))
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    def __describe_<items>__(self, regional_client):
        """Get ALL <Service> <Items>"""
        logger.info("<Service> - Describing <Items>...")
        try:
            describe_<items>_paginator = regional_client.get_paginator("describe_<items>")  # Paginator to get every item
            for page in describe_<items>_paginator.paginate():
                for <item> in page["<Items>"]:
                    if not self.audit_resources or (
                        is_resource_filtered(<item>["<item_arn>"], self.audit_resources)
                    ):
                        self.<items>.append(
                            <Item>(
                                arn=<item>["<item_arn>"],
                                name=<item>["<item_name>"],
                                tags=<item>.get("Tags", []),
                                region=regional_client.region,
                            )
                        )
        except Exception as error:
            logger.error(
                f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )

    def __describe_<item>__(self):
        """Get Details for a <Service> <Item>"""
        logger.info("<Service> - Describing <Item> to get specific details...")
        try:
            for <item> in self.<items>:
                <item>_details = self.regional_clients[<item>.region].describe_<item>(
                    <Attribute>=<item>.name
                )
                # For example, check if the item is public
                <item>.public = <item>_details.get("Public", False)
        except Exception as error:
            logger.error(
                f"{<item>.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )


class <Item>(BaseModel):
    """<Item> holds a <Service> <Item>"""

    arn: str
    """<Items>[].Arn"""
    name: str
    """<Items>[].Name"""
    public: bool
    """<Items>[].Public"""
    tags: Optional[list] = []
    region: str
```
- A `<service>_client.py`, containing the initialization of the service's class we have just created so the service's checks can use it:
```
from prowler.providers.aws.lib.audit_info.audit_info import current_audit_info
from prowler.providers.aws.services.<service>.<service>_service import <Service>
<service>_client = <Service>(current_audit_info)
```
## Create a new security compliance framework
If you want to create or contribute your own security frameworks, or add public ones to Prowler, first make sure the needed checks are available; if not, you have to create them. Then create a compliance file per provider, like in `prowler/compliance/aws/`, name it `<framework>_<version>_<provider>.json` and follow the format below to create yours.
Each framework version file has the following high-level structure (one requirement, which can also be called a control, can be linked to multiple Prowler checks):
- `Framework`: string. Distinctive name of the framework, like CIS.
- `Provider`: string. Provider where the framework applies, such as AWS, Azure, OCI,...
- `Version`: string. Version of the framework itself, like 1.4 for CIS.
- `Requirements`: array of objects. Include all requirements or controls with the mapping to Prowler.
- `Requirements_Id`: string. Unique identifier for each requirement in the specific framework.
- `Requirements_Description`: string. Description as in the framework.
- `Requirements_Attributes`: array of objects. Includes all the needed attributes per requirement, like levels, sections, etc.; anything that helps to create a dedicated report with the results of the findings. Attributes should be taken as closely as possible from the framework's own terminology.
- `Requirements_Checks`: array. Prowler checks that are needed to prove this requirement. It can be one or multiple checks. If no automation is possible, this can be empty.
```
{
  "Framework": "<framework>-<provider>",
  "Version": "<version>",
  "Requirements": [
    {
      "Id": "<unique-id>",
      "Description": "Requirement full description",
      "Checks": [
        "The Prowler check or checks that are going to be executed"
      ],
      "Attributes": [
        {
          <Add here your custom attributes.>
        }
      ]
    },
    ...
  ]
}
```
Finally, to have a proper output file for your reports, your framework data model has to be created in `prowler/lib/outputs/models.py` and also the CLI table output in `prowler/lib/outputs/compliance.py`.
## Create a custom output format
## Create a new integration
## Contribute with documentation
We use `mkdocs` to build this Prowler documentation site, so you can easily contribute back with new docs or improve existing ones.
1. Install `mkdocs` with your favorite package manager; a pip example follows this list.
2. Inside the `prowler` repository folder run `mkdocs serve` and point your browser to `http://localhost:8000`; you will see live changes to your local copy of this documentation site.
3. Make all needed changes to the docs or add new documents. To do so, edit existing md files inside `prowler/docs`; if you are adding a new section or file, please make sure you add it to the `mkdocs.yaml` file in the root folder of the Prowler repo.
4. Once you are done with changes, please send a pull request to us for review and merge. Thank you in advance!
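For example, with pip:
```
pip install mkdocs
mkdocs serve
# If the build complains about a missing theme or plugins, install the ones
# referenced in mkdocs.yaml as well.
```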
## Want some swag as appreciation for your contribution?
If you are like us and you love swag, we are happy to thank you for your contribution with some laptop stickers or whatever other swag we may have at that time. Please, tell us more details and your pull request link in our [Slack workspace here](https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog). You can also reach out to Toni de la Fuente on Twitter [here](https://twitter.com/ToniBlyx), his DMs are open.

View File

@@ -0,0 +1,29 @@
# GCP authentication
By default, Prowler will use your User Account credentials. You can configure them using:
- `gcloud init` to use a new account
- `gcloud config set account <account>` to use an existing account
Then, obtain your access credentials using: `gcloud auth application-default login`
Otherwise, you can generate and download Service Account keys in JSON format (refer to https://cloud.google.com/iam/docs/creating-managing-service-account-keys) and provide the location of the file with the following argument:
```console
prowler gcp --credentials-file path
```
> `prowler` will scan the GCP project associated with the credentials.
Prowler will follow the same credentials search as [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order):
1. [GOOGLE_APPLICATION_CREDENTIALS environment variable](https://cloud.google.com/docs/authentication/application-default-credentials#GAC)
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
Those credentials must be associated with a user or service account with the proper permissions to run all checks. To make sure, add the following roles to the member associated with the credentials:
- Viewer
- Security Reviewer
- Stackdriver Account Viewer
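A hypothetical way to bind those roles to a service account with the gcloud CLI (the project ID and member below are placeholders, and the role identifiers are assumptions worth double-checking):
```sh
PROJECT_ID="my-project"  # placeholder
MEMBER="serviceAccount:prowler@my-project.iam.gserviceaccount.com"  # placeholder

# Assumed role IDs for Viewer, Security Reviewer and Stackdriver Account Viewer
for ROLE in roles/viewer roles/iam.securityReviewer roles/stackdriver.accounts.viewer; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" --member="$MEMBER" --role="$ROLE"
done
```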

Binary files not shown (5 new images: 61 KiB, 67 KiB, 200 KiB, 456 KiB and 69 KiB).

View File

@@ -0,0 +1,36 @@
# Integrations
## Slack
Prowler can be integrated with [Slack](https://slack.com/) to send a summary of the execution to a channel, once you have configured a Slack App, with the following command:
```sh
prowler <provider> --slack
```
![Prowler Slack Message](img/slack-prowler-message.png)
> Slack integration needs SLACK_API_TOKEN and SLACK_CHANNEL_ID environment variables.
### Configuration
To configure the Slack integration, follow these steps:
1. Create a Slack Application:
- Go to [Slack API page](https://api.slack.com/tutorials/tracks/getting-a-token), scroll down to the *Create app* button and select your workspace:
![Create Slack App](img/create-slack-app.png)
- Install the application in your selected workspaces:
![Install Slack App in Workspace](img/install-in-slack-workspace.png)
- Get the *Slack App OAuth Token* that Prowler needs to send the message:
![Slack App OAuth Token](img/slack-app-token.png)
2. Optionally, create a Slack Channel (you can use an existing one)
3. Integrate the created Slack App to your Slack channel:
- Click on the channel, go to the Integrations tab, and Add an App.
![Slack App Channel Integration](img/integrate-slack-app.png)
4. Set the following environment variables that Prowler will read:
- `SLACK_API_TOKEN`: the *Slack App OAuth Token* that was previously obtained.
- `SLACK_CHANNEL_ID`: the name of your Slack Channel where Prowler will send the message.
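With both variables exported, the command from the top of this section sends the summary; for example (values are placeholders):
```sh
export SLACK_API_TOKEN="xoxb-..."           # the Slack App OAuth Token
export SLACK_CHANNEL_ID="prowler-findings"  # your channel
prowler aws --slack
```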

View File

@@ -1,16 +1,16 @@
# Logging
Prowler has a logging feature to be as transparent as possible so you can see every action that is going on will the tool is been executing.
Prowler has a logging feature to be as transparent as possible, so that you can see every action that is being performed while the tool is executing.
## Set Log Level
There are different log levels depending on the logging information you want displayed:
- **DEBUG**: it will show low-level logs of Python.
- **INFO**: it will show all the API Calls that are being used in the provider.
- **WARNING**: it will show the resources that are being **allowlisted**.
- **ERROR**: it will show the errors, e.g., not authorized actions.
- **CRITICAL**: default log level, if a critical log appears, it will **exit** Prowlers execution.
- **DEBUG**: It will show low-level logs from Python.
- **INFO**: It will show all the API calls that are being invoked by the provider.
- **WARNING**: It will show all resources that are being **allowlisted**.
- **ERROR**: It will show any errors, e.g., not authorized actions.
- **CRITICAL**: The default log level. If a critical log appears, it will **exit** Prowler's execution.
You can set the log level of Prowler with the `--log-level` option:
@@ -20,9 +20,9 @@ prowler <provider> --log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
> By default, Prowler will run with the `CRITICAL` log level, since critical errors will abort the execution.
## Export Logs to File
Prowler allows you to export the logs in json format with `--log-file` option:
Prowler allows you to export the logs in json format with the `--log-file` option:
```console
prowler <provider> --log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL} --log-file <file_name>.json
@@ -45,4 +45,4 @@ An example of a log file will be the following:
"message": "eu-west-2 -- ClientError[124]: An error occurred (UnauthorizedOperation) when calling the DescribeNetworkAcls operation: You are not authorized to perform this operation."
}
> NOTE: Each finding is a `json` object.
> NOTE: Each finding is represented as a `json` object.

View File

@@ -51,15 +51,30 @@ prowler <provider> -e/--excluded-checks ec2 rds
```console
prowler <provider> -C/--checks-file <checks_list>.json
```
## Severities
Each check of Prowler has a severity, there are options related with it:
- List the available checks in the provider:
## Custom Checks
Prowler allows you to include your custom checks with the flag:
```console
prowler <provider> --list-severities
prowler <provider> -x/--checks-folder <custom_checks_folder>
```
- Execute specific severity(s):
> S3 URIs are also supported as folders for custom checks, e.g. s3://bucket/prefix/checks_folder/. Make sure that the credentials used have s3:GetObject permissions in the S3 path where the custom checks are located.
The custom checks folder must contain one subfolder per check; each subfolder must be named after the check and must contain:
- An empty `__init__.py`: to make Python treat this check folder as a package.
- A `check_name.py` containing the check's logic.
- A `check_name.metadata.json` containing the check's metadata.
> The check name must start with the service name followed by an underscore (e.g., ec2_instance_public_ip).
To see more information about how to write checks, see the [Developer Guide](../developer-guide/#create-a-new-check-for-a-provider).
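For illustration, a custom checks folder containing the example check named above would be laid out like this:
```
<custom_checks_folder>/
└── ec2_instance_public_ip/
    ├── __init__.py
    ├── ec2_instance_public_ip.py
    └── ec2_instance_public_ip.metadata.json
```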
## Severities
Each of Prowler's checks has a severity, which can be:
- informational
- low
- medium
- high
- critical
To execute specific severity(s):
```console
prowler <provider> --severity critical high
```

View File

@@ -33,9 +33,8 @@ Several checks analyse resources that are exposed to the Internet, these are:
- ec2_instance_internet_facing_with_instance_profile
- ec2_instance_public_ip
- ec2_networkacl_allow_ingress_any_port
- ec2_securitygroup_allow_ingress_from_internet_to_any_port
- ec2_securitygroup_allow_wide_open_public_ipv4
- ec2_securitygroup_in_use_without_ingress_filtering
- ec2_securitygroup_allow_ingress_from_internet_to_any_port
- ecr_repositories_not_publicly_accessible
- eks_control_plane_endpoint_access_restricted
- eks_endpoints_not_publicly_accessible

View File

@@ -14,4 +14,6 @@ prowler <provider> -i
- Also, by default it creates a CSV and a JSON with detailed information about the extracted resources.
![Quick Inventory Example](../img/quick-inventory.png)
![Quick Inventory Example](../img/quick-inventory.jpg)
> The inventory process is done with `resourcegroupstaggingapi` calls (except for the IAM resources, which are retrieved with Boto3 API calls).

View File

@@ -46,9 +46,11 @@ Prowler supports natively the following output formats:
Hereunder is the structure of each report format supported by Prowler:
### HTML
![HTML Output](../img/output-html.png)
### CSV
| ASSESSMENT_START_TIME | FINDING_UNIQUE_ID | PROVIDER | PROFILE | ACCOUNT_ID | ACCOUNT_NAME | ACCOUNT_EMAIL | ACCOUNT_ARN | ACCOUNT_ORG | ACCOUNT_TAGS | REGION | CHECK_ID | CHECK_TITLE | CHECK_TYPE | STATUS | STATUS_EXTENDED | SERVICE_NAME | SUBSERVICE_NAME | SEVERITY | RESOURCE_ID | RESOURCE_ARN | RESOURCE_TYPE | RESOURCE_DETAILS | RESOURCE_TAGS | DESCRIPTION | RISK | RELATED_URL | REMEDIATION_RECOMMENDATION_TEXT | REMEDIATION_RECOMMENDATION_URL | REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC | REMEDIATION_RECOMMENDATION_CODE_TERRAFORM | REMEDIATION_RECOMMENDATION_CODE_CLI | REMEDIATION_RECOMMENDATION_CODE_OTHER | CATEGORIES | DEPENDS_ON | RELATED_TO | NOTES |
| ------- | ----------- | ------ | -------- | ------------ | ----------- | ---------- | ---------- | --------------------- | -------------------------- | -------------- | ----------------- | ------------------------ | --------------- | ---------- | ----------------- | --------- | -------------- | ----------------- | ------------------ | --------------------- | -------------------- | ------------------- | ------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- |
| ASSESSMENT_START_TIME | FINDING_UNIQUE_ID | PROVIDER | PROFILE | ACCOUNT_ID | ACCOUNT_NAME | ACCOUNT_EMAIL | ACCOUNT_ARN | ACCOUNT_ORG | ACCOUNT_TAGS | REGION | CHECK_ID | CHECK_TITLE | CHECK_TYPE | STATUS | STATUS_EXTENDED | SERVICE_NAME | SUBSERVICE_NAME | SEVERITY | RESOURCE_ID | RESOURCE_ARN | RESOURCE_TYPE | RESOURCE_DETAILS | RESOURCE_TAGS | DESCRIPTION | COMPLIANCE | RISK | RELATED_URL | REMEDIATION_RECOMMENDATION_TEXT | REMEDIATION_RECOMMENDATION_URL | REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC | REMEDIATION_RECOMMENDATION_CODE_TERRAFORM | REMEDIATION_RECOMMENDATION_CODE_CLI | REMEDIATION_RECOMMENDATION_CODE_OTHER | CATEGORIES | DEPENDS_ON | RELATED_TO | NOTES |
| ------- | ----------- | ------ | -------- | ------------ | ----------- | ---------- | ---------- | --------------------- | -------------------------- | -------------- | ----------------- | ------------------------ | --------------- | ---------- | ----------------- | --------- | -------------- | ----------------- | ------------------ | --------------------- | -------------------- | ------------------- | ------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- |
### JSON
@@ -71,6 +73,10 @@ Hereunder is the structure for each of the supported report formats by Prowler:
"Severity": "low",
"ResourceId": "rds-instance-id",
"ResourceArn": "",
"ResourceTags": {
"test": "test",
"enironment": "dev"
},
"ResourceType": "AwsRdsDbInstance",
"ResourceDetails": "",
"Description": "Ensure RDS instances have minor version upgrade enabled.",
@@ -89,7 +95,15 @@ Hereunder is the structure for each of the supported report formats by Prowler:
}
},
"Categories": [],
"Notes": ""
"Notes": "",
"Compliance": {
"CIS-1.4": [
"1.20"
],
"CIS-1.5": [
"1.20"
]
}
},{
"AssessmentStartTime": "2022-12-01T14:16:57.354413",
"FindingUniqueId": "",
@@ -109,7 +123,7 @@ Hereunder is the structure for each of the supported report formats by Prowler:
"ResourceId": "rds-instance-id",
"ResourceArn": "",
"ResourceType": "AwsRdsDbInstance",
"ResourceDetails": "",
"ResourceTags": {},
"Description": "Ensure RDS instances have minor version upgrade enabled.",
"Risk": "Auto Minor Version Upgrade is a feature that you can enable to have your database automatically upgraded when a new minor database engine version is available. Minor version upgrades often patch security vulnerabilities and fix bugs and therefore should be applied.",
"RelatedUrl": "https://aws.amazon.com/blogs/database/best-practices-for-upgrading-amazon-rds-to-major-and-minor-versions-of-postgresql/",
@@ -126,7 +140,8 @@ Hereunder is the structure for each of the supported report formats by Prowler:
}
},
"Categories": [],
"Notes": ""
"Notes": "",
"Compliance: {}
}]
```
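Since the JSON output is a single array of finding objects, post-processing is straightforward. A minimal sketch (the file name is a placeholder, and it assumes each finding carries a `Status` field such as `PASS` or `FAIL`):
```python
import json
from collections import Counter

# Placeholder path: point this at the JSON file Prowler wrote to your output directory
with open("prowler-output.json") as f:
    findings = json.load(f)

# Tally findings per status
print(Counter(finding.get("Status") for finding in findings))
```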
@@ -166,7 +181,30 @@ Hereunder is the structure for each of the supported report formats by Prowler:
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": []
"RelatedRequirements": [
"CISA your-systems-2 booting-up-thing-to-do-first-3",
"CIS-1.5 2.3.2",
"AWS-Foundational-Security-Best-Practices rds",
"RBI-Cyber-Security-Framework annex_i_6",
"FFIEC d3-cc-pm-b-1 d3-cc-pm-b-3"
],
"AssociatedStandards": [
{
"StandardsId": "CISA"
},
{
"StandardsId": "CIS-1.5"
},
{
"StandardsId": "AWS-Foundational-Security-Best-Practices"
},
{
"StandardsId": "RBI-Cyber-Security-Framework"
},
{
"StandardsId": "FFIEC"
}
]
},
"Remediation": {
"Recommendation": {
@@ -205,7 +243,30 @@ Hereunder is the structure for each of the supported report formats by Prowler:
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": []
"RelatedRequirements": [
"CISA your-systems-2 booting-up-thing-to-do-first-3",
"CIS-1.5 2.3.2",
"AWS-Foundational-Security-Best-Practices rds",
"RBI-Cyber-Security-Framework annex_i_6",
"FFIEC d3-cc-pm-b-1 d3-cc-pm-b-3"
],
"AssociatedStandards": [
{
"StandardsId": "CISA"
},
{
"StandardsId": "CIS-1.5"
},
{
"StandardsId": "AWS-Foundational-Security-Best-Practices"
},
{
"StandardsId": "RBI-Cyber-Security-Framework"
},
{
"StandardsId": "FFIEC"
}
]
},
"Remediation": {
"Recommendation": {

View File

@@ -25,35 +25,42 @@ repo_url: https://github.com/prowler-cloud/prowler/
repo_name: prowler-cloud/prowler
nav:
- Getting Started:
- Overview: index.md
- Requirements: getting-started/requirements.md
- Tutorials:
- Miscellaneous: tutorials/misc.md
- Reporting: tutorials/reporting.md
- Compliance: tutorials/compliance.md
- Quick Inventory: tutorials/quick-inventory.md
- Configuration File: tutorials/configuration_file.md
- Logging: tutorials/logging.md
- Allowlist: tutorials/allowlist.md
- Pentesting: tutorials/pentesting.md
- AWS:
- Getting Started:
- Overview: index.md
- Requirements: getting-started/requirements.md
- Tutorials:
- Miscellaneous: tutorials/misc.md
- Reporting: tutorials/reporting.md
- Compliance: tutorials/compliance.md
- Quick Inventory: tutorials/quick-inventory.md
- Integrations: tutorials/integrations.md
- Configuration File: tutorials/configuration_file.md
- Logging: tutorials/logging.md
- Allowlist: tutorials/allowlist.md
- Pentesting: tutorials/pentesting.md
- Developer Guide: tutorials/developer-guide.md
- AWS:
- Assume Role: tutorials/aws/role-assumption.md
- AWS Security Hub: tutorials/aws/securityhub.md
- AWS Organizations: tutorials/aws/organizations.md
- AWS Regions and Partitions: tutorials/aws/regions-and-partitions.md
- Scan Multiple AWS Accounts: tutorials/aws/multiaccount.md
- AWS CloudShell: tutorials/aws/cloudshell.md
- Checks v2 to v3 Mapping: tutorials/aws/v2_to_v3_checks_mapping.md
- Tag-based Scan: tutorials/aws/tag-based-scan.md
- Resource ARNs based Scan: tutorials/aws/resource-arn-based-scan.md
- Azure:
- Boto3 Configuration: tutorials/aws/boto3-configuration.md
- Azure:
- Authentication: tutorials/azure/authentication.md
- Subscriptions: tutorials/azure/subscriptions.md
- Security: security.md
- Contact Us: contact.md
- Troubleshooting: troubleshooting.md
- About: about.md
- ProwlerPro: https://prowler.pro
- Google Cloud:
- Authentication: tutorials/gcp/authentication.md
- Developer Guide: tutorials/developer-guide.md
- Security: security.md
- Contact Us: contact.md
- Troubleshooting: troubleshooting.md
- About: about.md
- ProwlerPro: https://prowler.pro
# Customization
extra:
consent:

View File

@@ -4,7 +4,7 @@ AWSTemplateFormatVersion: '2010-09-09'
# aws cloudformation create-stack \
# --capabilities CAPABILITY_IAM --capabilities CAPABILITY_NAMED_IAM \
# --template-body "file://create_role_to_assume_cfn.yaml" \
# --stack-name "ProwlerExecRole" \
# --stack-name "ProwlerScanRole" \
# --parameters "ParameterKey=AuthorisedARN,ParameterValue=arn:aws:iam::123456789012:root"
#
Description: |
@@ -13,7 +13,7 @@ Description: |
account to assume that role. The role name and the ARN of the trusted user can all be passed
to the CloudFormation stack as parameters. Then you can run Prowler to perform a security
assessment with a command like:
./prowler -A <THIS_ACCOUNT_ID> -R ProwlerExecRole
prowler --role ProwlerScanRole.ARN
Parameters:
AuthorisedARN:
Description: |
@@ -22,12 +22,12 @@ Parameters:
Type: String
ProwlerRoleName:
Description: |
Name of the IAM role that will have these policies attached. Default: ProwlerExecRole
Name of the IAM role that will have these policies attached. Default: ProwlerScanRole
Type: String
Default: 'ProwlerExecRole'
Default: 'ProwlerScanRole'
Resources:
ProwlerExecRole:
ProwlerScanRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
@@ -42,31 +42,49 @@ Resources:
# Bool:
# 'aws:MultiFactorAuthPresent': true
# This is 12h that is maximum allowed, Minimum is 3600 = 1h
# to take advantage of this use -T like in './prowler -A <ACCOUNT_ID_TO_ASSUME> -R ProwlerExecRole -T 43200 -M text,html'
# to take advantage of this use -T like in './prowler --role ProwlerScanRole.ARN -T 43200'
MaxSessionDuration: 43200
ManagedPolicyArns:
- 'arn:aws:iam::aws:policy/SecurityAudit'
- 'arn:aws:iam::aws:policy/job-function/ViewOnlyAccess'
RoleName: !Sub ${ProwlerRoleName}
Policies:
- PolicyName: ProwlerExecRoleAdditionalViewPrivileges
- PolicyName: ProwlerScanRoleAdditionalViewPrivileges
PolicyDocument:
Version : '2012-10-17'
Statement:
- Effect: Allow
Action:
- 'ds:ListAuthorizedApplications'
- 'account:Get*'
- 'appstream:Describe*'
- 'appstream:List*'
- 'codeartifact:List*'
- 'codebuild:BatchGet*'
- 'ds:Get*'
- 'ds:Describe*'
- 'ds:List*'
- 'ec2:GetEbsEncryptionByDefault'
- 'ecr:Describe*'
- 'elasticfilesystem:DescribeBackupPolicy'
- 'glue:GetConnections'
- 'glue:GetSecurityConfiguration'
- 'glue:GetSecurityConfiguration*'
- 'glue:SearchTables'
- 'lambda:GetFunction'
- 'lambda:GetFunction*'
- 'macie2:GetMacieSession'
- 's3:GetAccountPublicAccessBlock'
- 'shield:DescribeProtection'
- 'shield:GetSubscriptionState'
- 'securityhub:BatchImportFindings'
- 'securityhub:GetFindings'
- 'ssm:GetDocument'
- 'support:Describe*'
- 'tag:GetTagKeys'
Resource: '*'
- PolicyName: ProwlerScanRoleAdditionalViewPrivilegesApiGateway
PolicyDocument:
Version : '2012-10-17'
Statement:
- Effect: Allow
Action:
- 'apigateway:GET'
Resource: 'arn:aws:apigateway:*::/restapis/*'

View File

@@ -3,24 +3,34 @@
"Statement": [
{
"Action": [
"account:Get*",
"appstream:Describe*",
"appstream:List*",
"backup:List*",
"cloudtrail:GetInsightSelectors",
"codeartifact:List*",
"codebuild:BatchGet*",
"ds:Describe*",
"drs:Describe*",
"ds:Get*",
"ds:Describe*",
"ds:List*",
"ec2:GetEbsEncryptionByDefault",
"ecr:Describe*",
"ecr:GetRegistryScanningConfiguration",
"elasticfilesystem:DescribeBackupPolicy",
"glue:GetConnections",
"glue:GetSecurityConfiguration*",
"glue:SearchTables",
"lambda:GetFunction*",
"logs:FilterLogEvents",
"macie2:GetMacieSession",
"s3:GetAccountPublicAccessBlock",
"shield:DescribeProtection",
"shield:GetSubscriptionState",
"securityhub:BatchImportFindings",
"securityhub:GetFindings",
"ssm:GetDocument",
"ssm-incidents:List*",
"support:Describe*",
"tag:GetTagKeys"
],
@@ -34,7 +44,8 @@
"apigateway:GET"
],
"Resource": [
"arn:aws:apigateway:*::/restapis/*"
"arn:aws:apigateway:*::/restapis/*",
"arn:aws:apigateway:*::/apis/*"
]
}
]

poetry.lock generated (2878 lines; file diff suppressed because it is too large)

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import sys
from prowler.lib.banner import print_banner
@@ -11,12 +12,15 @@ from prowler.lib.check.check import (
exclude_services_to_run,
execute_checks,
list_categories,
list_checks,
list_services,
parse_checks_from_folder,
print_categories,
print_checks,
print_compliance_frameworks,
print_compliance_requirements,
print_services,
remove_custom_checks_module,
)
from prowler.lib.check.checks_loader import load_checks_to_execute
from prowler.lib.check.compliance import update_checks_metadata_with_compliance
@@ -26,14 +30,18 @@ from prowler.lib.outputs.compliance import display_compliance_table
from prowler.lib.outputs.html import add_html_footer, fill_html_overview_statistics
from prowler.lib.outputs.json import close_json
from prowler.lib.outputs.outputs import extract_findings_statistics, send_to_s3_bucket
from prowler.lib.outputs.slack import send_slack_message
from prowler.lib.outputs.summary_table import display_summary_table
from prowler.providers.aws.lib.allowlist.allowlist import parse_allowlist_file
from prowler.providers.aws.lib.quick_inventory.quick_inventory import quick_inventory
from prowler.providers.aws.lib.security_hub.security_hub import (
resolve_security_hub_previous_findings,
)
from prowler.providers.common.audit_info import set_provider_audit_info
from prowler.providers.common.allowlist import set_provider_allowlist
from prowler.providers.common.audit_info import (
set_provider_audit_info,
set_provider_execution_parameters,
)
from prowler.providers.common.outputs import set_provider_output_options
from prowler.providers.common.quick_inventory import run_provider_quick_inventory
def prowler():
@@ -49,9 +57,13 @@ def prowler():
services = args.services
categories = args.categories
checks_file = args.checks_file
checks_folder = args.checks_folder
severities = args.severity
compliance_framework = args.compliance
if not args.no_banner:
print_banner(args)
# We treat the compliance framework as another output format
if compliance_framework:
args.output_modes.extend(compliance_framework)
@@ -59,9 +71,6 @@ def prowler():
# Set Logger configuration
set_logging_config(args.log_level, args.log_file, args.only_logs)
if not args.no_banner:
print_banner(args)
if args.list_services:
print_services(list_services(provider))
sys.exit()
@@ -78,30 +87,32 @@ def prowler():
# Load compliance frameworks
logger.debug("Loading compliance frameworks from .json files")
# Load the compliance framework if specified with --compliance
# If some compliance argument is specified we have to load it
if (
args.list_compliance
or args.list_compliance_requirements
or compliance_framework
):
bulk_compliance_frameworks = bulk_load_compliance_frameworks(provider)
# Complete checks metadata with the compliance framework specification
update_checks_metadata_with_compliance(
bulk_compliance_frameworks, bulk_checks_metadata
bulk_compliance_frameworks = bulk_load_compliance_frameworks(provider)
# Complete checks metadata with the compliance framework specification
update_checks_metadata_with_compliance(
bulk_compliance_frameworks, bulk_checks_metadata
)
if args.list_compliance:
print_compliance_frameworks(bulk_compliance_frameworks)
sys.exit()
if args.list_compliance_requirements:
print_compliance_requirements(
bulk_compliance_frameworks, args.list_compliance_requirements
)
if args.list_compliance:
print_compliance_frameworks(bulk_compliance_frameworks)
sys.exit()
if args.list_compliance_requirements:
print_compliance_requirements(
bulk_compliance_frameworks, args.list_compliance_requirements
)
sys.exit()
sys.exit()
# If -l/--list-checks passed as argument, print checks to execute and quit
if args.list_checks:
print_checks(provider, list_checks(provider), bulk_checks_metadata)
sys.exit()
# Set the audit info based on the selected provider
audit_info = set_provider_audit_info(provider, args.__dict__)
# Import custom checks from folder
if checks_folder:
parse_checks_from_folder(audit_info, checks_folder, provider)
# Load checks to execute
checks_to_execute = load_checks_to_execute(
bulk_checks_metadata,
@@ -113,7 +124,6 @@ def prowler():
compliance_framework,
categories,
provider,
audit_info,
)
# Exclude checks if -e/--excluded-checks
@@ -129,25 +139,22 @@ def prowler():
# Sort final check list
checks_to_execute = sorted(checks_to_execute)
# If -l/--list-checks passed as argument, print checks to execute and quit
if args.list_checks:
print_checks(provider, checks_to_execute, bulk_checks_metadata)
sys.exit()
# Once the audit_info is set and we have the eventual checks based on the resource identifier,
# it is time to check what Prowler's checks are going to be executed
if audit_info.audit_resources:
checks_to_execute = set_provider_execution_parameters(provider, audit_info)
# Parse content from Allowlist file and get it, if necessary, from S3
if provider == "aws" and args.allowlist_file:
allowlist_file = parse_allowlist_file(audit_info, args.allowlist_file)
else:
allowlist_file = None
# Parse Allowlist
allowlist_file = set_provider_allowlist(provider, audit_info, args)
# Setting output options based on the selected provider
# Set output options based on the selected provider
audit_output_options = set_provider_output_options(
provider, args, audit_info, allowlist_file, bulk_checks_metadata
)
# Quick Inventory for AWS
if provider == "aws" and args.quick_inventory:
quick_inventory(audit_info, args.output_directory)
# Run the quick inventory for the provider if available
if hasattr(args, "quick_inventory") and args.quick_inventory:
run_provider_quick_inventory(provider, audit_info, args.output_directory)
sys.exit()
# Execute checks
@@ -164,6 +171,21 @@ def prowler():
# Extract findings stats
stats = extract_findings_statistics(findings)
if args.slack:
if "SLACK_API_TOKEN" in os.environ and "SLACK_CHANNEL_ID" in os.environ:
_ = send_slack_message(
os.environ["SLACK_API_TOKEN"],
os.environ["SLACK_CHANNEL_ID"],
stats,
provider,
audit_info,
)
else:
logger.critical(
"Slack integration needs SLACK_API_TOKEN and SLACK_CHANNEL_ID environment variables (see more in https://docs.prowler.cloud/en/latest/tutorials/integrations/#slack)."
)
sys.exit(1)
if args.output_modes:
for mode in args.output_modes:
# Close json file if exists
@@ -197,7 +219,7 @@ def prowler():
)
# Resolve previous fails of Security Hub
if provider == "aws" and args.security_hub:
if provider == "aws" and args.security_hub and not args.skip_sh_update:
resolve_security_hub_previous_findings(args.output_directory, audit_info)
# Display summary table
@@ -210,14 +232,19 @@ def prowler():
)
if compliance_framework and findings:
# Display compliance table
display_compliance_table(
findings,
bulk_checks_metadata,
compliance_framework,
audit_output_options.output_filename,
audit_output_options.output_directory,
)
for compliance in compliance_framework:
# Display compliance table
display_compliance_table(
findings,
bulk_checks_metadata,
compliance,
audit_output_options.output_filename,
audit_output_options.output_directory,
)
# If custom checks were passed, remove the modules
if checks_folder:
remove_custom_checks_module(checks_folder, provider)
# If there are failed findings exit code 3, except if -z is input
if not args.ignore_exit_code_3 and stats["total_fail"] > 0:

View File

@@ -0,0 +1,214 @@
{
"Framework": "AWS-Audit-Manager-Control-Tower-Guardrails",
"Version": "",
"Provider": "AWS",
"Description": "AWS Control Tower is a management and governance service that you can use to navigate through the setup process and governance requirements that are involved in creating a multi-account AWS environment.",
"Requirements": [
{
"Id": "1.0.1",
"Name": "Disallow launch of EC2 instance types that are not EBS-optimized",
"Description": "Checks whether EBS optimization is enabled for your EC2 instances that can be EBS-optimized",
"Attributes": [
{
"ItemId": "1.0.1",
"Section": "EBS checks",
"Service": "ebs"
}
],
"Checks": []
},
{
"Id": "1.0.2",
"Name": "Disallow EBS volumes that are unattached to an EC2 instance",
"Description": "Checks whether EBS volumes are attached to EC2 instances",
"Attributes": [
{
"ItemId": "1.0.2",
"Section": "EBS checks",
"Service": "ebs"
}
],
"Checks": []
},
{
"Id": "1.0.3",
"Name": "Enable encryption for EBS volumes attached to EC2 instances",
"Description": "Checks whether EBS volumes that are in an attached state are encrypted",
"Attributes": [
{
"ItemId": "1.0.3",
"Section": "EBS checks",
"Service": "ebs"
}
],
"Checks": [
"ec2_ebs_default_encryption"
]
},
{
"Id": "2.0.1",
"Name": "Disallow internet connection through RDP",
"Description": "Checks whether security groups that are in use disallow unrestricted incoming TCP traffic to the specified",
"Attributes": [
{
"ItemId": "2.0.1",
"Section": "Disallow Internet Connection",
"Service": "vpc"
}
],
"Checks": [
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389"
]
},
{
"Id": "2.0.2",
"Name": "Disallow internet connection through SSH",
"Description": "Checks whether security groups that are in use disallow unrestricted incoming SSH traffic.",
"Attributes": [
{
"ItemId": "2.0.2",
"Section": "Disallow Internet Connection",
"Service": "vpc"
}
],
"Checks": [
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22"
]
},
{
"Id": "3.0.1",
"Name": "Disallow access to IAM users without MFA",
"Description": "Checks whether the AWS Identity and Access Management users have multi-factor authentication (MFA) enabled.",
"Attributes": [
{
"ItemId": "3.0.1",
"Section": "Multi-Factor Authentication",
"Service": "iam"
}
],
"Checks": [
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "3.0.2",
"Name": "Disallow console access to IAM users without MFA",
"Description": "Checks whether AWS Multi-Factor Authentication (MFA) is enabled for all AWS Identity and Access Management (IAM) users that use a console password.",
"Attributes": [
{
"ItemId": "3.0.2",
"Section": "Multi-Factor Authentication",
"Service": "iam"
}
],
"Checks": [
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "3.0.3",
"Name": "Enable MFA for the root user",
"Description": "Checks whether the root user of your AWS account requires multi-factor authentication for console sign-in.",
"Attributes": [
{
"ItemId": "3.0.3",
"Section": "Multi-Factor Authentication",
"Service": "iam"
}
],
"Checks": [
"iam_root_mfa_enabled"
]
},
{
"Id": "4.0.1",
"Name": "Disallow public access to RDS database instances",
"Description": "Checks whether the Amazon Relational Database Service (RDS) instances are not publicly accessible. The rule is non-compliant if the publiclyAccessible field is true in the instance configuration item.",
"Attributes": [
{
"ItemId": "4.0.1",
"Section": "Disallow Public Access",
"Service": "rds"
}
],
"Checks": [
"rds_instance_no_public_access"
]
},
{
"Id": "4.0.2",
"Name": "Disallow public access to RDS database snapshots",
"Description": "Checks if Amazon Relational Database Service (Amazon RDS) snapshots are public. The rule is non-compliant if any existing and new Amazon RDS snapshots are public.",
"Attributes": [
{
"ItemId": "4.0.2",
"Section": "Disallow Public Access",
"Service": "rds"
}
],
"Checks": [
"rds_snapshots_public_access"
]
},
{
"Id": "4.1.1",
"Name": "Disallow public read access to S3 buckets",
"Description": "Checks that your S3 buckets do not allow public read access.",
"Attributes": [
{
"ItemId": "4.1.1",
"Section": "Disallow Public Access",
"Service": "s3"
}
],
"Checks": [
"rds_instance_no_public_access"
]
},
{
"Id": "4.1.2",
"Name": "Disallow public write access to S3 buckets",
"Description": "Checks that your S3 buckets do not allow public write access.",
"Attributes": [
{
"ItemId": "4.1.2",
"Section": "Disallow Public Access",
"Service": "s3"
}
],
"Checks": [
"s3_bucket_policy_public_write_access"
]
},
{
"Id": "5.0.1",
"Name": "Disallow RDS database instances that are not storage encrypted ",
"Description": "Checks whether storage encryption is enabled for your RDS DB instances.",
"Attributes": [
{
"ItemId": "5.0.1",
"Section": "Disallow Instances",
"Service": "rds"
}
],
"Checks": [
"rds_instance_storage_encrypted"
]
},
{
"Id": "5.1.1",
"Name": "Disallow S3 buckets that are not versioning enabled",
"Description": "Checks whether versioning is enabled for your S3 buckets.",
"Attributes": [
{
"ItemId": "5.1.1",
"Section": "Disallow Instances",
"Service": "s3"
}
],
"Checks": [
"s3_bucket_object_versioning"
]
}
]
}

View File

@@ -0,0 +1,604 @@
{
"Framework": "AWS-Foundational-Security-Best-Practices",
"Version": "",
"Provider": "AWS",
"Description": "The AWS Foundational Security Best Practices standard is a set of controls that detect when your deployed accounts and resources deviate from security best practices.",
"Requirements": [
{
"Id": "account",
"Name": "Account",
"Description": "This section contains recommendations for configuring AWS Account.",
"Attributes": [
{
"ItemId": "account",
"Section": "Account",
"Service": "account"
}
],
"Checks": [
"account_security_contact_information_is_registered"
]
},
{
"Id": "acm",
"Name": "ACM",
"Description": "This section contains recommendations for configuring ACM resources.",
"Attributes": [
{
"ItemId": "acm",
"Section": "Acm",
"Service": "acm"
}
],
"Checks": [
"account_security_contact_information_is_registered"
]
},
{
"Id": "api-gateway",
"Name": "API Gateway",
"Description": "This section contains recommendations for configuring API Gateway resources.",
"Attributes": [
{
"ItemId": "api-gateway",
"Section": "API Gateway",
"Service": "apigateway"
}
],
"Checks": [
"apigateway_logging_enabled",
"apigateway_client_certificate_enabled",
"apigateway_waf_acl_attached",
"apigatewayv2_authorizers_enabled",
"apigatewayv2_access_logging_enabled"
]
},
{
"Id": "auto-scaling",
"Name": "Benchmark: Auto Scaling",
"Description": "This section contains recommendations for configuring Auto Scaling resources and options.",
"Attributes": [
{
"ItemId": "auto-scaling",
"Section": "Auto Scaling",
"Service": "autoscaling"
}
],
"Checks": []
},
{
"Id": "cloudformation",
"Name": "Benchmark: CloudFormation",
"Description": "This section contains recommendations for configuring CloudFormation resources and options.",
"Attributes": [
{
"ItemId": "cloudformation",
"Section": "CloudFormation",
"Service": "cloudformation"
}
],
"Checks": []
},
{
"Id": "cloudfront",
"Name": "Benchmark: CloudFront",
"Description": "This section contains recommendations for configuring CloudFront resources and options.",
"Attributes": [
{
"ItemId": "cloudfront",
"Section": "CloudFront",
"Service": "cloudfront"
}
],
"Checks": [
"cloudfront_distributions_https_enabled",
"cloudfront_distributions_logging_enabled",
"cloudfront_distributions_using_waf",
"cloudfront_distributions_field_level_encryption_enabled",
"cloudfront_distributions_using_deprecated_ssl_protocols"
]
},
{
"Id": "cloudtrail",
"Name": "Benchmark: CloudTrail",
"Description": "This section contains recommendations for configuring CloudTrail resources and options.",
"Attributes": [
{
"ItemId": "cloudtrail",
"Section": "CloudTrail",
"Service": "cloudtrail"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"cloudtrail_cloudwatch_logging_enabled"
]
},
{
"Id": "codebuild",
"Name": "Benchmark: CodeBuild",
"Description": "This section contains recommendations for configuring CodeBuild resources and options.",
"Attributes": [
{
"ItemId": "codebuild",
"Section": "CodeBuild",
"Service": "codebuild"
}
],
"Checks": []
},
{
"Id": "config",
"Name": "Benchmark: Config",
"Description": "This section contains recommendations for configuring AWS Config.",
"Attributes": [
{
"ItemId": "config",
"Section": "Config",
"Service": "config"
}
],
"Checks": [
"config_recorder_all_regions_enabled"
]
},
{
"Id": "dms",
"Name": "Benchmark: DMS",
"Description": "This section contains recommendations for configuring AWS DMS resources and options.",
"Attributes": [
{
"ItemId": "dms",
"Section": "DMS",
"Service": "dms"
}
],
"Checks": []
},
{
"Id": "dynamodb",
"Name": "Benchmark: DynamoDB",
"Description": "This section contains recommendations for configuring AWS Dynamo DB resources and options.",
"Attributes": [
{
"ItemId": "dynamodb",
"Section": "DynamoDB",
"Service": "dynamodb"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_accelerator_cluster_encryption_enabled"
]
},
{
"Id": "ec2",
"Name": "Benchmark: EC2",
"Description": "This section contains recommendations for configuring EC2 resources and options.",
"Attributes": [
{
"ItemId": "ec2",
"Section": "EC2",
"Service": "ec2"
}
],
"Checks": [
"ec2_ebs_public_snapshot",
"ec2_securitygroup_default_restrict_traffic",
"ec2_ebs_volume_encryption",
"ec2_instance_older_than_specific_days",
"vpc_flow_logs_enabled",
"ec2_ebs_default_encryption",
"ec2_instance_imdsv2_enabled",
"ec2_instance_public_ip",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_not_used"
]
},
{
"Id": "ecr",
"Name": "Benchmark: Elastic Container Registry",
"Description": "This section contains recommendations for configuring AWS ECR resources and options.",
"Attributes": [
{
"ItemId": "ecr",
"Section": "ECR",
"Service": "ecr"
}
],
"Checks": [
"ecr_repositories_scan_images_on_push_enabled",
"ecr_repositories_lifecycle_policy_enabled"
]
},
{
"Id": "ecs",
"Name": "Benchmark: Elastic Container Service",
"Description": "This section contains recommendations for configuring ECS resources and options.",
"Attributes": [
{
"ItemId": "ecs",
"Section": "ECS",
"Service": "ecs"
}
],
"Checks": [
"ecs_task_definitions_no_environment_secrets"
]
},
{
"Id": "efs",
"Name": "Benchmark: EFS",
"Description": "This section contains recommendations for configuring AWS EFS resources and options.",
"Attributes": [
{
"ItemId": "efs",
"Section": "EFS",
"Service": "efs"
}
],
"Checks": [
"efs_encryption_at_rest_enabled",
"efs_have_backup_enabled"
]
},
{
"Id": "eks",
"Name": "Benchmark: EKS",
"Description": "This section contains recommendations for configuring AWS EKS resources and options.",
"Attributes": [
{
"ItemId": "eks",
"Section": "EKS",
"Service": "eks"
}
],
"Checks": []
},
{
"Id": "elastic-beanstalk",
"Name": "Benchmark: Elastic Beanstalk",
"Description": "This section contains recommendations for configuring AWS Elastic Beanstalk resources and options.",
"Attributes": [
{
"ItemId": "elastic-beanstalk",
"Section": "Elastic Beanstalk",
"Service": "elasticbeanstalk"
}
],
"Checks": []
},
{
"Id": "elb",
"Name": "Benchmark: ELB",
"Description": "This section contains recommendations for configuring Elastic Load Balancer resources and options.",
"Attributes": [
{
"ItemId": "elb",
"Section": "ELB",
"Service": "elb"
}
],
"Checks": [
"elbv2_logging_enabled",
"elb_logging_enabled",
"elbv2_deletion_protection",
"elbv2_desync_mitigation_mode"
]
},
{
"Id": "elbv2",
"Name": "Benchmark: ELBv2",
"Description": "This section contains recommendations for configuring Elastic Load Balancer resources and options.",
"Attributes": [
{
"ItemId": "elbv2",
"Section": "ELBv2",
"Service": "elbv2"
}
],
"Checks": []
},
{
"Id": "emr",
"Name": "Benchmark: EMR",
"Description": "This section contains recommendations for configuring EMR resources.",
"Attributes": [
{
"ItemId": "emr",
"Section": "EMR",
"Service": "emr"
}
],
"Checks": [
"emr_cluster_master_nodes_no_public_ip"
]
},
{
"Id": "elasticsearch",
"Name": "Benchmark: Elasticsearch",
"Description": "This section contains recommendations for configuring Elasticsearch resources and options.",
"Attributes": [
{
"ItemId": "elasticsearch",
"Section": "ElasticSearch",
"Service": "elasticsearch"
}
],
"Checks": [
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"opensearch_service_domains_audit_logging_enabled",
"opensearch_service_domains_audit_logging_enabled",
"opensearch_service_domains_https_communications_enforced"
]
},
{
"Id": "guardduty",
"Name": "Benchmark: GuardDuty",
"Description": "This section contains recommendations for configuring AWS GuardDuty resources and options.",
"Attributes": [
{
"ItemId": "guardduty",
"Section": "GuardDuty",
"Service": "guardduty"
}
],
"Checks": [
"guardduty_is_enabled"
]
},
{
"Id": "iam",
"Name": "Benchmark: IAM",
"Description": "This section contains recommendations for configuring AWS IAM resources and options.",
"Attributes": [
{
"ItemId": "iam",
"Section": "IAM",
"Service": "iam"
}
],
"Checks": [
"iam_rotate_access_key_90_days",
"iam_no_root_access_key",
"iam_user_mfa_enabled_console_access",
"iam_root_hardware_mfa_enabled",
"iam_password_policy_minimum_length_14",
"iam_disable_90_days_credentials",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges"
]
},
{
"Id": "kinesis",
"Name": "Benchmark: Kinesis",
"Description": "This section contains recommendations for configuring AWS Kinesis resources and options.",
"Attributes": [
{
"ItemId": "kinesis",
"Section": "Kinesis",
"Service": "kinesis"
}
],
"Checks": []
},
{
"Id": "kms",
"Name": "Benchmark: KMS",
"Description": "This section contains recommendations for configuring AWS KMS resources and options.",
"Attributes": [
{
"ItemId": "kms",
"Section": "KMS",
"Service": "kms"
}
],
"Checks": []
},
{
"Id": "lambda",
"Name": "Benchmark: Lambda",
"Description": "This section contains recommendations for configuring Lambda resources and options.",
"Attributes": [
{
"ItemId": "lambda",
"Section": "Lambda",
"Service": "lambda"
}
],
"Checks": [
"awslambda_function_url_public",
"awslambda_function_using_supported_runtimes"
]
},
{
"Id": "network-firewall",
"Name": "Benchmark: Network Firewall",
"Description": "This section contains recommendations for configuring Network Firewall resources and options.",
"Attributes": [
{
"ItemId": "network-firewall",
"Section": "Network Firewall",
"Service": "network-firewall"
}
],
"Checks": []
},
{
"Id": "opensearch",
"Name": "Benchmark: OpenSearch",
"Description": "This section contains recommendations for configuring OpenSearch resources and options.",
"Attributes": [
{
"ItemId": "opensearch",
"Section": "OpenSearch",
"Service": "opensearch"
}
],
"Checks": [
"opensearch_service_domains_not_publicly_accessible"
]
},
{
"Id": "rds",
"Name": "Benchmark: RDS",
"Description": "This section contains recommendations for configuring AWS RDS resources and options.",
"Attributes": [
{
"ItemId": "rds",
"Section": "RDS",
"Service": "rds"
}
],
"Checks": [
"rds_snapshots_public_access",
"rds_instance_no_public_access",
"rds_instance_storage_encrypted",
"rds_instance_storage_encrypted",
"rds_instance_multi_az",
"rds_instance_enhanced_monitoring_enabled",
"rds_instance_deletion_protection",
"rds_instance_integration_cloudwatch_logs",
"rds_instance_minor_version_upgrade_enabled",
"rds_instance_multi_az"
]
},
{
"Id": "redshift",
"Name": "Benchmark: Redshift",
"Description": "This section contains recommendations for configuring AWS Redshift resources and options.",
"Attributes": [
{
"ItemId": "redshift",
"Section": "Redshift",
"Service": "redshift"
}
],
"Checks": [
"redshift_cluster_public_access",
"redshift_cluster_automated_snapshot",
"redshift_cluster_automated_snapshot",
"redshift_cluster_automatic_upgrades"
]
},
{
"Id": "s3",
"Name": "Benchmark: S3",
"Description": "This section contains recommendations for configuring AWS S3 resources and options.",
"Attributes": [
{
"ItemId": "s3",
"Section": "S3",
"Service": "s3"
}
],
"Checks": [
"s3_account_level_public_access_blocks",
"s3_account_level_public_access_blocks",
"s3_bucket_policy_public_write_access",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"s3_bucket_public_access",
"s3_bucket_server_access_logging_enabled",
"s3_bucket_object_versioning",
"s3_bucket_acl_prohibited"
]
},
{
"Id": "sagemaker",
"Name": "Benchmark: SageMaker",
"Description": "This section contains recommendations for configuring AWS Sagemaker resources and options.",
"Attributes": [
{
"ItemId": "sagemaker",
"Section": "SageMaker",
"Service": "sagemaker"
}
],
"Checks": [
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"sagemaker_notebook_instance_vpc_settings_configured",
"sagemaker_notebook_instance_root_access_disabled"
]
},
{
"Id": "secretsmanager",
"Name": "Benchmark: Secrets Manager",
"Description": "This section contains recommendations for configuring AWS Secrets Manager resources.",
"Attributes": [
{
"ItemId": "secretsmanager",
"Section": "Secrets Manager",
"Service": "secretsmanager"
}
],
"Checks": [
"secretsmanager_automatic_rotation_enabled",
"secretsmanager_automatic_rotation_enabled"
]
},
{
"Id": "sns",
"Name": "Benchmark: SNS",
"Description": "This section contains recommendations for configuring AWS SNS resources and options.",
"Attributes": [
{
"ItemId": "sns",
"Section": "SNS",
"Service": "sns"
}
],
"Checks": [
"sns_topics_kms_encryption_at_rest_enabled"
]
},
{
"Id": "sqs",
"Name": "Benchmark: SQS",
"Description": "This section contains recommendations for configuring AWS SQS resources and options.",
"Attributes": [
{
"ItemId": "sqs",
"Section": "SQS",
"Service": "sqs"
}
],
"Checks": [
"sqs_queues_server_side_encryption_enabled"
]
},
{
"Id": "ssm",
"Name": "Benchmark: SSM",
"Description": "This section contains recommendations for configuring AWS Systems Manager resources and options.",
"Attributes": [
{
"ItemId": "ssm",
"Section": "SSM",
"Service": "ssm"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"ssm_managed_compliant_patching",
"ssm_managed_compliant_patching"
]
},
{
"Id": "waf",
"Name": "Benchmark: WAF",
"Description": "This section contains recommendations for configuring AWS WAF resources and options.",
"Attributes": [
{
"ItemId": "waf",
"Section": "WAF",
"Service": "waf"
}
],
"Checks": []
}
]
}
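
Because many requirements fan out to overlapping sets of checks, a hand-assembled Checks list can easily pick up a repeated entry. A small CI guard along these lines (filename illustrative, not part of the repo) would flag any requirement that lists the same check twice:

import json
import sys

with open("aws_foundational_security_best_practices_aws.json") as f:
    framework = json.load(f)

exit_code = 0
for requirement in framework["Requirements"]:
    checks = requirement["Checks"]
    # A set comprehension keeps each duplicated ID once.
    duplicates = sorted({c for c in checks if checks.count(c) > 1})
    if duplicates:
        print(f"{requirement['Id']}: duplicated checks {duplicates}")
        exit_code = 1
sys.exit(exit_code)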

View File

@@ -1,6 +1,8 @@
{
"Framework": "CIS-AWS",
"Framework": "CIS",
"Version": "1.4",
"Provider": "AWS",
"Description": "The CIS Benchmark for CIS Amazon Web Services Foundations Benchmark, v1.4.0, Level 1 and 2 provides prescriptive guidance for configuring security options for a subset of Amazon Web Services. It has an emphasis on foundational, testable, and architecture agnostic settings",
"Requirements": [
{
"Id": "1.1",
@@ -153,7 +155,8 @@
"Id": "1.16",
"Description": "Ensure IAM policies that allow full \"*:*\" administrative privileges are not attached",
"Checks": [
"iam_policy_no_administrative_privileges"
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges"
],
"Attributes": [
{
@@ -258,7 +261,7 @@
"Id": "1.20",
"Description": "Ensure that IAM Access analyzer is enabled for all regions",
"Checks": [
"accessanalyzer_enabled_without_findings"
"accessanalyzer_enabled"
],
"Attributes": [
{
@@ -531,6 +534,7 @@
"Id": "2.1.5",
"Description": "Ensure that S3 Buckets are configured with 'Block public access (bucket settings)'",
"Checks": [
"s3_bucket_level_public_access_block",
"s3_account_level_public_access_blocks"
],
"Attributes": [

View File

@@ -1,6 +1,8 @@
{
"Framework": "CIS-AWS",
"Framework": "CIS",
"Version": "1.5",
"Provider": "AWS",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services with an emphasis on foundational, testable, and architecture agnostic settings.",
"Requirements": [
{
"Id": "1.1",
@@ -153,7 +155,8 @@
"Id": "1.16",
"Description": "Ensure IAM policies that allow full \"*:*\" administrative privileges are not attached",
"Checks": [
"iam_policy_no_administrative_privileges"
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges"
],
"Attributes": [
{
@@ -258,7 +261,7 @@
"Id": "1.20",
"Description": "Ensure that IAM Access analyzer is enabled for all regions",
"Checks": [
"accessanalyzer_enabled_without_findings"
"accessanalyzer_enabled"
],
"Attributes": [
{
@@ -531,6 +534,7 @@
"Id": "2.1.5",
"Description": "Ensure that S3 Buckets are configured with 'Block public access (bucket settings)'",
"Checks": [
"s3_bucket_level_public_access_block",
"s3_account_level_public_access_blocks"
],
"Attributes": [

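With two revisions of the same benchmark side by side, as in the CIS 1.4 and 1.5 hunks above, the useful question is which requirements changed their check mappings (for example, 1.16 swapping iam_policy_no_administrative_privileges for the two attached-policy checks). A sketch of that comparison, with illustrative filenames:

import json

def requirements_by_id(path):
    # Map requirement Id -> set of check IDs for one framework file.
    with open(path) as f:
        data = json.load(f)
    return {req["Id"]: set(req["Checks"]) for req in data["Requirements"]}

old = requirements_by_id("cis_1.4_aws.json")
new = requirements_by_id("cis_1.5_aws.json")

for req_id in sorted(old.keys() & new.keys()):
    added = sorted(new[req_id] - old[req_id])
    removed = sorted(old[req_id] - new[req_id])
    if added or removed:
        print(f"{req_id}: added {added}, removed {removed}")
for req_id in sorted(new.keys() - old.keys()):
    print(f"{req_id}: new requirement in 1.5")
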
View File

@@ -0,0 +1,423 @@
{
"Framework": "CISA",
"Version": "",
"Provider": "AWS",
"Description": "Cybersecurity & Infrastructure Security Agency's (CISA) Cyber Essentials is a guide for leaders of small businesses as well as leaders of small and local government agencies to develop an actionable understanding of where to start implementing organizational cybersecurity practices.",
"Requirements": [
{
"Id": "your-systems-1",
"Name": "Your Systems-1",
"Description": "Learn what is on your network. Maintain inventories of hardware and software assets to know what is in play and at-risk from attack.",
"Attributes": [
{
"ItemId": "your-systems-1",
"Section": "your systems",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"ec2_instance_older_than_specific_days",
"ssm_managed_compliant_patching",
"ec2_elastic_ip_unassgined"
]
},
{
"Id": "your-systems-2",
"Name": "Your Systems-2",
"Description": "Leverage automatic updates for all operating systems and third-party software.",
"Attributes": [
{
"ItemId": "your-systems-2",
"Section": "your systems",
"Service": "aws"
}
],
"Checks": [
"rds_instance_minor_version_upgrade_enabled",
"redshift_cluster_automatic_upgrades",
"ssm_managed_compliant_patching"
]
},
{
"Id": "your-systems-3",
"Name": "Your Systems-3",
"Description": "Implement security configurations for all hardware and software assets.",
"Attributes": [
{
"ItemId": "your-systems-3",
"Section": "your systems",
"Service": "aws"
}
],
"Checks": [
"apigateway_client_certificate_enabled",
"apigateway_logging_enabled",
"apigateway_waf_acl_attached",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"codebuild_project_user_controlled_buildspec",
"dynamodb_accelerator_cluster_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_public_snapshot",
"ec2_ebs_default_encryption",
"ec2_instance_public_ip",
"efs_encryption_at_rest_enabled",
"efs_have_backup_enabled",
"elb_logging_enabled",
"elbv2_deletion_protection",
"elbv2_waf_acl_attached",
"elbv2_ssl_listeners",
"elb_ssl_listeners",
"emr_cluster_master_nodes_no_public_ip",
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"guardduty_is_enabled",
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase",
"iam_no_custom_policy_permissive_role_assumption",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key",
"iam_rotate_access_key_90_days",
"iam_user_mfa_enabled_console_access",
"iam_user_mfa_enabled_console_access",
"iam_disable_90_days_credentials",
"kms_cmk_rotation_enabled",
"awslambda_function_not_publicly_accessible",
"awslambda_function_not_publicly_accessible",
"cloudwatch_log_group_kms_encryption_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_enhanced_monitoring_enabled",
"rds_instance_backup_enabled",
"rds_instance_deletion_protection",
"rds_instance_storage_encrypted",
"rds_instance_backup_enabled",
"rds_instance_integration_cloudwatch_logs",
"rds_instance_multi_az",
"rds_instance_no_public_access",
"rds_instance_storage_encrypted",
"rds_snapshots_public_access",
"redshift_cluster_automated_snapshot",
"redshift_cluster_audit_logging",
"redshift_cluster_public_access",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"s3_bucket_server_access_logging_enabled",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_bucket_object_versioning",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_training_jobs_volume_and_output_encryption_enabled",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"sagemaker_notebook_instance_encryption_enabled",
"secretsmanager_automatic_rotation_enabled",
"securityhub_enabled",
"sns_topics_kms_encryption_at_rest_enabled",
"vpc_endpoint_connections_trust_boundaries",
"ec2_securitygroup_default_restrict_traffic",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port"
]
},
{
"Id": "your_-urroundings-1",
"Name": "Your Surroundings-1",
"Description": "Learn who is on your network. Maintain inventories of network connections (user accounts, vendors, business partners, etc.).",
"Attributes": [
{
"ItemId": "your-surroundings-1",
"Section": "your surroundings",
"Service": "aws"
}
],
"Checks": [
"ec2_elastic_ip_unassgined",
"vpc_flow_logs_enabled"
]
},
{
"Id": "your-surroundings-2",
"Name": "Your Surroundings-2",
"Description": "Leverage multi-factor authentication for all users, starting with privileged, administrative and remote access users.",
"Attributes": [
{
"ItemId": "your-surroundings-2",
"Section": "your surroundings",
"Service": "aws"
}
],
"Checks": [
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "your-surroundings-3",
"Name": "Your Surroundings-3",
"Description": "Grant access and admin permissions based on need-to-know and least privilege.",
"Attributes": [
{
"ItemId": "your-surroundings-3",
"Section": "your surroundings",
"Service": "aws"
}
],
"Checks": [
"elbv2_ssl_listeners",
"iam_no_custom_policy_permissive_role_assumption",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_no_root_access_key"
]
},
{
"Id": "your-surroundings-4",
"Name": "Your Surroundings-4",
"Description": "Leverage unique passwords for all user accounts.",
"Attributes": [
{
"ItemId": "your-surroundings-4",
"Section": "your surroundings",
"Service": "aws"
}
],
"Checks": [
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase"
]
},
{
"Id": "your-data-1",
"Name": "Your Data-1",
"Description": "Learn how your data is protected.",
"Attributes": [
{
"ItemId": "your-data-1",
"Section": "your data",
"Service": "aws"
}
],
"Checks": [
"efs_encryption_at_rest_enabled",
"cloudtrail_kms_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_default_encryption",
"opensearch_service_domains_encryption_at_rest_enabled",
"rds_instance_storage_encrypted",
"rds_instance_storage_encrypted",
"redshift_cluster_audit_logging",
"s3_bucket_default_encryption",
"sagemaker_training_jobs_volume_and_output_encryption_enabled",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled"
]
},
{
"Id": "your-data-2",
"Name": "Your Data-2",
"Description": "Learn what is happening on your network, manage network and perimeter components, host and device components, data-at-rest and in-transit, and user behavior activities.",
"Attributes": [
{
"ItemId": "your-data-2",
"Section": "your data",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"apigateway_client_certificate_enabled",
"apigateway_logging_enabled",
"efs_have_backup_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_changes_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"ec2_ebs_volume_encryption",
"ec2_instance_public_ip",
"efs_encryption_at_rest_enabled",
"elb_logging_enabled",
"elbv2_waf_acl_attached",
"elbv2_ssl_listeners",
"elb_ssl_listeners",
"emr_cluster_master_nodes_no_public_ip",
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"awslambda_function_not_publicly_accessible",
"awslambda_function_not_publicly_accessible",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_storage_encrypted",
"rds_instance_integration_cloudwatch_logs",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"rds_snapshots_public_access",
"redshift_cluster_audit_logging",
"redshift_cluster_public_access",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"redshift_cluster_public_access",
"s3_bucket_server_access_logging_enabled",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_acl_prohibited",
"sagemaker_training_jobs_volume_and_output_encryption_enabled",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled",
"ec2_securitygroup_default_restrict_traffic",
"vpc_flow_logs_enabled",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port"
]
},
{
"Id": "your-data-3",
"Name": "Your Data-3",
"Description": "Domain name system protection.",
"Attributes": [
{
"ItemId": "your-data-3",
"Section": "your data",
"Service": "aws"
}
],
"Checks": [
"elbv2_waf_acl_attached"
]
},
{
"Id": "your-data-4",
"Name": "Your Data-4",
"Description": "Establish regular automated backups and redundancies of key systems.",
"Attributes": [
{
"ItemId": "your-data-4",
"Section": "your data",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"elbv2_deletion_protection",
"rds_instance_backup_enabled",
"rds_instance_deletion_protection",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "your-data-5",
"Name": "Your Data-5",
"Description": "Leverage protections for backups, including physical security, encryption and offline copies.",
"Attributes": [
{
"ItemId": "your-data-5",
"Section": "your data",
"Service": "aws"
}
],
"Checks": []
},
{
"Id": "your-crisis-response-2",
"Name": "Your Crisis Response-2",
"Description": "Lead development of an internal reporting structure to detect, communicate and contain attacks.",
"Attributes": [
{
"ItemId": "your-crisis-response-2",
"Section": "your crisis response",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "booting-up-thing-to-do-first-1",
"Name": "YBooting Up: Things to Do First-1",
"Description": "Lead development of an internal reporting structure to detect, communicate and contain attacks.",
"Attributes": [
{
"ItemId": "booting-up-thing-to-do-first-1",
"Section": "booting up thing to do first",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "booting-up-thing-to-do-first-2",
"Name": "YBooting Up: Things to Do First-2",
"Description": "Require multi-factor authentication (MFA) for accessing your systems whenever possible. MFA should be required of all users, but start with privileged, administrative, and remote access users.",
"Attributes": [
{
"ItemId": "booting-up-thing-to-do-first-2",
"Section": "booting up thing to do first",
"Service": "aws"
}
],
"Checks": [
"iam_user_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"iam_user_hardware_mfa_enabled"
]
},
{
"Id": "booting-up-thing-to-do-first-3",
"Name": "YBooting Up: Things to Do First-3",
"Description": "Enable automatic updates whenever possible. Replace unsupported operating systems, applications and hardware. Test and deploy patches quickly.",
"Attributes": [
{
"ItemId": "booting-up-thing-to-do-first-1",
"Section": "booting up thing to do first",
"Service": "aws"
}
],
"Checks": [
"rds_instance_minor_version_upgrade_enabled",
"redshift_cluster_automatic_upgrades",
"ssm_managed_compliant_patching"
]
}
]
}
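
Frameworks like the CISA file above include requirements that are not yet automatable; your-data-5, for instance, ships with an empty Checks list. A quick per-section coverage summary makes those gaps visible (filename illustrative):

import json
from collections import Counter

with open("cisa_aws.json") as f:
    framework = json.load(f)

total, covered = Counter(), Counter()
for requirement in framework["Requirements"]:
    section = requirement["Attributes"][0]["Section"]
    total[section] += 1
    if requirement["Checks"]:  # at least one mapped check
        covered[section] += 1

for section in sorted(total):
    print(f"{section}: {covered[section]}/{total[section]} requirements have checks")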

File diff suppressed because it is too large

View File

@@ -0,0 +1,440 @@
{
"Framework": "FedRAMP-Low-Revision-4",
"Version": "",
"Provider": "AWS",
"Description": "The Federal Risk and Authorization Management Program (FedRAMP) was established in 2011. It provides a cost-effective, risk-based approach for the adoption and use of cloud services by the U.S. federal government. FedRAMP empowers federal agencies to use modern cloud technologies, with an emphasis on the security and protection of federal information.",
"Requirements": [
{
"Id": "ac-2",
"Name": "Account Management (AC-2)",
"Description": "Manage system accounts, group memberships, privileges, workflow, notifications, deactivations, and authorizations.",
"Attributes": [
{
"ItemId": "ac-2",
"Section": "Access Control (AC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_log_file_validation_enabled",
"cloudwatch_changes_to_network_acls_alarm_configured",
"opensearch_service_domains_cloudwatch_logging_enabled",
"guardduty_is_enabled",
"iam_password_policy_minimum_length_14",
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key",
"iam_rotate_access_key_90_days",
"iam_user_mfa_enabled_console_access",
"iam_user_hardware_mfa_enabled",
"iam_disable_90_days_credentials",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"securityhub_enabled"
]
},
{
"Id": "ac-3",
"Name": "Account Management (AC-3)",
"Description": "The information system enforces approved authorizations for logical access to information and system resources in accordance with applicable access control policies.",
"Attributes": [
{
"ItemId": "ac-3",
"Section": "Access Control (AC)",
"Service": "aws"
}
],
"Checks": [
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"ec2_instance_imdsv2_enabled",
"emr_cluster_master_nodes_no_public_ip",
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_no_root_access_key",
"iam_disable_90_days_credentials",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured"
]
},
{
"Id": "ac-17",
"Name": "Remote Access (AC-17)",
"Description": "Authorize remote access systems prior to connection. Enforce remote connection requirements to information systems.",
"Attributes": [
{
"ItemId": "ac-17",
"Section": "Access Control (AC)",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"elb_ssl_listeners",
"emr_cluster_master_nodes_no_public_ip",
"guardduty_is_enabled",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_secure_transport_policy",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"securityhub_enabled",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "au-2",
"Name": "Audit Events (AU-2)",
"Description": "The organization: a. Determines that the information system is capable of auditing the following events: [Assignment: organization-defined auditable events]; b. Coordinates the security audit function with other organizational entities requiring audit- related information to enhance mutual support and to help guide the selection of auditable events; c. Provides a rationale for why the auditable events are deemed to be adequate support after- the-fact investigations of security incidents",
"Attributes": [
{
"ItemId": "au-2",
"Section": "Audit and Accountability (AU)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_log_file_validation_enabled",
"elbv2_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "au-9",
"Name": "Protection of Audit Information (AU-9)",
"Description": "The information system protects audit information and audit tools from unauthorized access, modification, and deletion.",
"Attributes": [
{
"ItemId": "au-9",
"Section": "Audit and Accountability (AU)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"s3_bucket_object_versioning"
]
},
{
"Id": "au-11",
"Name": "Audit Record Retention (AU-11)",
"Description": "The organization retains audit records for at least 90 days to provide support for after-the-fact investigations of security incidents and to meet regulatory and organizational information retention requirements.",
"Attributes": [
{
"ItemId": "au-11",
"Section": "Audit and Accountability (AU)",
"Service": "aws"
}
],
"Checks": [
"cloudwatch_log_group_retention_policy_specific_days_enabled"
]
},
{
"Id": "ca-7",
"Name": "Continuous Monitoring (CA-7)",
"Description": "Continuously monitor configuration management processes. Determine security impact, environment and operational risks.",
"Attributes": [
{
"ItemId": "ca-7",
"Section": "Security Assessment And Authorization (CA)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudwatch_changes_to_network_acls_alarm_configured",
"ec2_instance_imdsv2_enabled",
"elbv2_waf_acl_attached",
"guardduty_is_enabled",
"rds_instance_enhanced_monitoring_enabled",
"redshift_cluster_audit_logging",
"securityhub_enabled"
]
},
{
"Id": "cm-2",
"Name": "Baseline Configuration (CM-2)",
"Description": "The organization develops, documents, and maintains under configuration control, a current baseline configuration of the information system.",
"Attributes": [
{
"ItemId": "cm-2",
"Section": "Configuration Management (CM)",
"Service": "aws"
}
],
"Checks": [
"apigateway_waf_acl_attached",
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"ec2_instance_older_than_specific_days",
"elbv2_deletion_protection",
"emr_cluster_master_nodes_no_public_ip",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"ssm_managed_compliant_patching",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "cm-8",
"Name": "Information System Component Inventory (CM-8)",
"Description": "The organization develops and documents an inventory of information system components that accurately reflects the current information system, includes all components within the authorization boundary of the information system, is at the level of granularity deemed necessary for tracking and reporting and reviews and updates the information system component inventory.",
"Attributes": [
{
"ItemId": "cm-8",
"Section": "Configuration Management (CM)",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"guardduty_is_enabled",
"ssm_managed_compliant_patching",
"ssm_managed_compliant_patching"
]
},
{
"Id": "cp-9",
"Name": "Information System Backup (CP-9)",
"Description": "The organization conducts backups of user-level information, system-level information and information system documentation including security-related documentation contained in the information system and protects the confidentiality, integrity, and availability of backup information at storage locations.",
"Attributes": [
{
"ItemId": "cp-9",
"Section": "Contingency Planning (CP)",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "cp-10",
"Name": "Information System Recovery And Reconstitution (CP-10)",
"Description": "The organization provides for the recovery and reconstitution of the information system to a known state after a disruption, compromise, or failure.",
"Attributes": [
{
"ItemId": "cp-10",
"Section": "Contingency Planning (CP)",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"elbv2_deletion_protection",
"rds_instance_backup_enabled",
"rds_instance_multi_az",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "ia-2",
"Name": "Identification and Authentication (Organizational users) (IA-2)",
"Description": "The information system uniquely identifies and authenticates organizational users (or processes acting on behalf of organizational users).",
"Attributes": [
{
"ItemId": "ia-2",
"Section": "Identification and Authentication (IA)",
"Service": "aws"
}
],
"Checks": [
"iam_password_policy_minimum_length_14",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key",
"iam_user_mfa_enabled_console_access",
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "ir-4",
"Name": "Incident Handling (IR-4)",
"Description": "The organization implements an incident handling capability for security incidents that includes preparation, detection and analysis, containment, eradication, and recovery, coordinates incident handling activities with contingency planning activities and incorporates lessons learned from ongoing incident handling activities into incident response procedures, training, and testing, and implements the resulting changes accordingly.",
"Attributes": [
{
"ItemId": "ir-4",
"Section": "Incident Response (IR)",
"Service": "aws"
}
],
"Checks": [
"cloudwatch_changes_to_network_acls_alarm_configured",
"cloudwatch_changes_to_network_gateways_alarm_configured",
"cloudwatch_changes_to_network_route_tables_alarm_configured",
"cloudwatch_changes_to_vpcs_alarm_configured",
"guardduty_is_enabled",
"guardduty_no_high_severity_findings",
"securityhub_enabled"
]
},
{
"Id": "sa-3",
"Name": "System Development Life Cycle (SA-3)",
"Description": "The organization manages the information system using organization-defined system development life cycle, defines and documents information security roles and responsibilities throughout the system development life cycle, identifies individuals having information security roles and responsibilities and integrates the organizational information security risk management process into system development life cycle activities.",
"Attributes": [
{
"ItemId": "sa-3",
"Section": "System and Services Acquisition (SA)",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm"
]
},
{
"Id": "sc-5",
"Name": "Denial Of Service Protection (SC-5)",
"Description": "The information system protects against or limits the effects of the following types of denial of service attacks: [Assignment: organization-defined types of denial of service attacks or references to sources for such information] by employing [Assignment: organization-defined security safeguards].",
"Attributes": [
{
"ItemId": "sc-5",
"Section": "System and Communications Protection (SC)",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"elbv2_deletion_protection",
"guardduty_is_enabled",
"rds_instance_backup_enabled",
"rds_instance_deletion_protection",
"rds_instance_multi_az",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "sc-7",
"Name": "Boundary Protection (SC-7)",
"Description": "The information system: a. Monitors and controls communications at the external boundary of the system and at key internal boundaries within the system; b. Implements subnetworks for publicly accessible system components that are [Selection: physically; logically] separated from internal organizational networks; and c. Connects to external networks or information systems only through managed interfaces consisting of boundary protection devices arranged in accordance with an organizational security architecture.",
"Attributes": [
{
"ItemId": "sc-7",
"Section": "System and Communications Protection (SC)",
"Service": "aws"
}
],
"Checks": [
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"elbv2_waf_acl_attached",
"elb_ssl_listeners",
"emr_cluster_master_nodes_no_public_ip",
"opensearch_service_domains_node_to_node_encryption_enabled",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_secure_transport_policy",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "sc-12",
"Name": "Cryptographic Key Establishment And Management (SC-12)",
"Description": "The organization establishes and manages cryptographic keys for required cryptography employed within the information system in accordance with [Assignment: organization-defined requirements for key generation, distribution, storage, access, and destruction].",
"Attributes": [
{
"ItemId": "sc-12",
"Section": "System and Communications Protection (SC)",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"kms_cmk_rotation_enabled"
]
},
{
"Id": "sc-13",
"Name": "Use of Cryptography (SC-13)",
"Description": "The information system implements FIPS-validated or NSA-approved cryptography in accordance with applicable federal laws, Executive Orders, directives, policies, regulations, and standards.",
"Attributes": [
{
"ItemId": "sc-13",
"Section": "System and Communications Protection (SC)",
"Service": "aws"
}
],
"Checks": [
"s3_bucket_default_encryption",
"sagemaker_training_jobs_volume_and_output_encryption_enabled",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled"
]
}
]
}
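
For review outside the JSON, the requirements flatten naturally into one row per (requirement, check) pair. Note that the FFIEC file below nests a SubSection attribute the other frameworks omit, so a generic exporter reads it defensively; paths are illustrative:

import csv
import json

with open("fedramp_low_revision_4_aws.json") as f:
    framework = json.load(f)

with open("requirements.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Id", "Section", "SubSection", "Check"])
    for requirement in framework["Requirements"]:
        attributes = requirement["Attributes"][0]
        for check in requirement["Checks"]:
            writer.writerow([
                requirement["Id"],
                attributes["Section"],
                attributes.get("SubSection", ""),  # only FFIEC sets this
                check,
            ])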

File diff suppressed because it is too large

View File

@@ -0,0 +1,902 @@
{
"Framework": "FFIEC",
"Version": "",
"Provider": "AWS",
"Description": "In light of the increasing volume and sophistication of cyber threats, the Federal Financial Institutions Examination Council (FFIEC) developed the Cybersecurity Assessment Tool (Assessment), on behalf of its members, to help institutions identify their risks and determine their cybersecurity maturity.",
"Requirements": [
{
"Id": "d1-g-it-b-1",
"Name": "D1.G.IT.B.1",
"Description": "An inventory of organizational assets (e.g., hardware, software, data, and systems hosted externally) is maintained.",
"Attributes": [
{
"ItemId": "d1-g-it-b-1",
"Section": "Cyber Risk Management and Oversight (Domain 1)",
"SubSection": "Governance (G)",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"ec2_instance_older_than_specific_days",
"ec2_elastic_ip_unassgined"
]
},
{
"Id": "d1-rm-ra-b-2",
"Name": "D1.RM.RA.B.2",
"Description": "The risk assessment identifies Internet- based systems and high-risk transactions that warrant additional authentication controls.",
"Attributes": [
{
"ItemId": "d1-rm-ra-b-2",
"Section": "Cyber Risk Management and Oversight (Domain 1)",
"SubSection": "Risk Management (RM)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled"
]
},
{
"Id": "d1-rm-rm-b-1",
"Name": "D1.RM.Rm.B.1",
"Description": "An information security and business continuity risk management function(s) exists within the institution.",
"Attributes": [
{
"ItemId": "d1-rm-rm-b-1",
"Section": "Cyber Risk Management and Oversight (Domain 1)",
"SubSection": "Risk Management (RM)",
"Service": "aws"
}
],
"Checks": [
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_multi_az",
"redshift_cluster_automated_snapshot"
]
},
{
"Id": "d2-is-is-b-1",
"Name": "D2.IS.Is.B.1",
"Description": "Information security threats are gathered and shared with applicable internal employees.",
"Attributes": [
{
"ItemId": "d2-is-is-b-1",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Information Sharing (IS)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_cloudwatch_logging_enabled",
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "d2-ma-ma-b-1",
"Name": "D2.MA.Ma.B.1",
"Description": "Information security threats are gathered and shared with applicable internal employees.",
"Attributes": [
{
"ItemId": "d2-ma-ma-b-1",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Monitoring and Analyzing (MA)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudwatch_log_group_retention_policy_specific_days_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d2-ma-ma-b-2",
"Name": "D2.MA.Ma.B.2",
"Description": "Computer event logs are used for investigations once an event has occurred.",
"Attributes": [
{
"ItemId": "d2-ma-ma-b-2",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Monitoring and Analyzing (MA)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d2-ti-ti-b-1",
"Name": "D2.TI.Ti.B.1",
"Description": "The institution belongs or subscribes to a threat and vulnerability information-sharing source(s) that provides information on threats (e.g., FS-ISAC, US- CERT).",
"Attributes": [
{
"ItemId": "d2-ti-ti-b-1",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Threat Intelligence (TI)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "d2-ti-ti-b-2",
"Name": "D2.TI.Ti.B.2",
"Description": "Threat information is used to monitor threats and vulnerabilities.",
"Attributes": [
{
"ItemId": "d2-ti-ti-b-2",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Threat Intelligence (TI)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled",
"ssm_managed_compliant_patching"
]
},
{
"Id": "d2-ti-ti-b-3",
"Name": "D2.TI.Ti.B.3",
"Description": "Threat information is used to enhance internal risk management and controls.",
"Attributes": [
{
"ItemId": "d2-ti-ti-b-3",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Threat Intelligence (TI)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "d3-cc-pm-b-1",
"Name": "D3.CC.PM.B.1",
"Description": "A patch management program is implemented and ensures that software and firmware patches are applied in a timely manner.",
"Attributes": [
{
"ItemId": "d3-cc-pm-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Corrective Controls (CC)",
"Service": "aws"
}
],
"Checks": [
"rds_instance_minor_version_upgrade_enabled",
"redshift_cluster_automatic_upgrades",
"ssm_managed_compliant_patching"
]
},
{
"Id": "d3-cc-pm-b-3",
"Name": "D3.CC.PM.B.3",
"Description": "Patch management reports are reviewed and reflect missing security patches.",
"Attributes": [
{
"ItemId": "d3-cc-pm-b-3",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Corrective Controls (CC)",
"Service": "aws"
}
],
"Checks": [
"rds_instance_minor_version_upgrade_enabled",
"redshift_cluster_automatic_upgrades",
"ssm_managed_compliant_patching"
]
},
{
"Id": "d3-dc-an-b-1",
"Name": "D3.DC.An.B.1",
"Description": "The institution is able to detect anomalous activities through monitoring across the environment.",
"Attributes": [
{
"ItemId": "d3-dc-an-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"guardduty_no_high_severity_findings",
"securityhub_enabled"
]
},
{
"Id": "d3-dc-an-b-2",
"Name": "D3.DC.An.B.2",
"Description": "Customer transactions generating anomalous activity alerts are monitored and reviewed.",
"Attributes": [
{
"ItemId": "d3-dc-an-b-2",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "d3-dc-an-b-3",
"Name": "D3.DC.An.B.3",
"Description": "Logs of physical and/or logical access are reviewed following events.",
"Attributes": [
{
"ItemId": "d3-dc-an-b-3",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"s3_bucket_server_access_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d3-dc-an-b-4",
"Name": "D3.DC.An.B.4",
"Description": "Access to critical systems by third parties is monitored for unauthorized or unusual activity.",
"Attributes": [
{
"ItemId": "d3-dc-an-b-4",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d3-dc-an-b-5",
"Name": "D3.DC.An.B.5",
"Description": "Elevated privileges are monitored.",
"Attributes": [
{
"ItemId": "d3-dc-an-b-5",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled"
]
},
{
"Id": "d3-dc-ev-b-1",
"Name": "D3.DC.Ev.B.1",
"Description": "A normal network activity baseline is established.",
"Attributes": [
{
"ItemId": "d3-dc-ev-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"redshift_cluster_audit_logging",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d3-dc-ev-b-2",
"Name": "D3.DC.Ev.B.2",
"Description": "Mechanisms (e.g., antivirus alerts, log event alerts) are in place to alert management to potential attacks.",
"Attributes": [
{
"ItemId": "d3-dc-ev-b-2",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled"
]
},
{
"Id": "d3-dc-ev-b-3",
"Name": "D3.DC.Ev.B.3",
"Description": "Processes are in place to monitor for the presence of unauthorized users, devices, connections, and software.",
"Attributes": [
{
"ItemId": "d3-dc-ev-b-3",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"guardduty_is_enabled",
"securityhub_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d3-dc-th-b-1",
"Name": "D3.DC.Th.B.1",
"Description": "Independent testing (including penetration testing and vulnerability scanning) is conducted according to the risk assessment for external-facing systems and the internal network.",
"Attributes": [
{
"ItemId": "d3-dc-th-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled",
"ssm_managed_compliant_patching"
]
},
{
"Id": "d3-pc-am-b-1",
"Name": "D3.PC.Am.B.1",
"Description": "Employee access is granted to systems and confidential data based on job responsibilities and the principles of least privilege.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_profile_attached",
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_no_root_access_key"
]
},
{
"Id": "d3-pc-am-b-10",
"Name": "D3.PC.Am.B.10",
"Description": "Production and non-production environments are segregated to prevent unauthorized access or changes to information assets. (*N/A if no production environment exists at the institution or the institution's third party.)",
"Attributes": [
{
"ItemId": "d3-pc-am-b-10",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "d3-pc-am-b-12",
"Name": "D3.PC.Am.B.12",
"Description": "All passwords are encrypted in storage and in transit.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-12",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_client_certificate_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_default_encryption",
"efs_encryption_at_rest_enabled",
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"rds_instance_storage_encrypted",
"redshift_cluster_audit_logging",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy"
]
},
{
"Id": "d3-pc-am-b-13",
"Name": "D3.PC.Am.B.13",
"Description": "Confidential data is encrypted when transmitted across public or untrusted networks (e.g., Internet).",
"Attributes": [
{
"ItemId": "d3-pc-am-b-13",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_client_certificate_enabled",
"elbv2_insecure_ssl_ciphers",
"elb_ssl_listeners",
"s3_bucket_secure_transport_policy"
]
},
{
"Id": "d3-pc-am-b-15",
"Name": "D3.PC.Am.B.15",
"Description": "Remote access to critical systems by employees, contractors, and third parties uses encrypted connections and multifactor authentication.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-15",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_client_certificate_enabled",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"s3_bucket_secure_transport_policy"
]
},
{
"Id": "d3-pc-am-b-16",
"Name": "D3.PC.Am.B.16",
"Description": "Administrative, physical, or technical controls are in place to prevent users without administrative responsibilities from installing unauthorized software.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-16",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges"
]
},
{
"Id": "d3-pc-am-b-2",
"Name": "D3.PC.Am.B.2",
"Description": "Employee access to systems and confidential data provides for separation of duties.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-2",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges"
]
},
{
"Id": "d3-pc-am-b-3",
"Name": "D3.PC.Am.B.3",
"Description": "Elevated privileges (e.g., administrator privileges) are limited and tightly controlled (e.g., assigned to individuals, not shared, and require stronger password controls",
"Attributes": [
{
"ItemId": "d3-pc-am-b-3",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key"
]
},
{
"Id": "d3-pc-am-b-6",
"Name": "D3.PC.Am.B.6",
"Description": "Identification and authentication are required and managed for access to systems, applications, and hardware.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-6",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_rotate_access_key_90_days",
"iam_user_mfa_enabled_console_access",
"iam_user_mfa_enabled_console_access",
"iam_disable_90_days_credentials"
]
},
{
"Id": "d3-pc-am-b-7",
"Name": "D3.PC.Am.B.7",
"Description": "Access controls include password complexity and limits to password attempts and reuse.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-7",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase"
]
},
{
"Id": "d3-pc-am-b-8",
"Name": "D3.PC.Am.B.8",
"Description": "All default passwords and unnecessary default accounts are changed before system implementation.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-8",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_no_root_access_key"
]
},
{
"Id": "d3-pc-im-b-1",
"Name": "D3.PC.Im.B.1",
"Description": "Network perimeter defense tools (e.g., border router and firewall) are used.",
"Attributes": [
{
"ItemId": "d3-pc-im-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"apigateway_waf_acl_attached",
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"elbv2_waf_acl_attached",
"emr_cluster_master_nodes_no_public_ip",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "d3-pc-im-b-2",
"Name": "D3.PC.Im.B.2",
"Description": "Systems that are accessed from the Internet or by external parties are protected by firewalls or other similar devices.",
"Attributes": [
{
"ItemId": "d3-pc-im-b-2",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_waf_acl_attached",
"elbv2_waf_acl_attached",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "d3-pc-im-b-3",
"Name": "D3.PC.Im.B.3",
"Description": "All ports are monitored.",
"Attributes": [
{
"ItemId": "d3-pc-im-b-3",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d3-pc-im-b-5",
"Name": "D3.PC.Im.B.5",
"Description": "Systems configurations (for servers, desktops, routers, etc.) follow industry standards and are enforced",
"Attributes": [
{
"ItemId": "d3-pc-im-b-5",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"ssm_managed_compliant_patching",
"ssm_managed_compliant_patching"
]
},
{
"Id": "d3-pc-im-b-6",
"Name": "D3.PC.Im.B.6",
"Description": "Ports, functions, protocols and services are prohibited if no longer needed for business purposes.",
"Attributes": [
{
"ItemId": "d3-pc-im-b-6",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "d3-pc-im-b-7",
"Name": "D3.PC.Im.B.7",
"Description": "Access to make changes to systems configurations (including virtual machines and hypervisors) is controlled and monitored.",
"Attributes": [
{
"ItemId": "d3-pc-im-b-7",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges"
]
},
{
"Id": "d3-pc-se-b-1",
"Name": "D3.PC.Se.B.1",
"Description": "Developers working for the institution follow secure program coding practices, as part of a system development life cycle (SDLC), that meet industry standards.",
"Attributes": [
{
"ItemId": "d3-pc-se-b1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": []
},
{
"Id": "d4-c-co-b-2",
"Name": "D4.C.Co.B.2",
"Description": "The institution ensures that third-party connections are authorized.",
"Attributes": [
{
"ItemId": "d4-c-co-b-2",
"Section": "External Dependency Management (Domain 4)",
"SubSection": "Connections (C)",
"Service": "aws"
}
],
"Checks": [
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "d5-dr-de-b-1",
"Name": "D5.DR.De.B.1",
"Description": "Alert parameters are set for detecting information security incidents that prompt mitigating actions.",
"Attributes": [
{
"ItemId": "d5-dr-de-b-1",
"Section": "Cyber Incident Management and Resilience (Domain 5)",
"SubSection": "Detection, Response, & Mitigation (DR)",
"Service": "aws"
}
],
"Checks": [
"cloudwatch_changes_to_network_acls_alarm_configured",
"cloudwatch_changes_to_network_gateways_alarm_configured",
"cloudwatch_changes_to_network_route_tables_alarm_configured",
"cloudwatch_changes_to_vpcs_alarm_configured",
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "d5-dr-de-b-2",
"Name": "D5.DR.De.B.2",
"Description": "System performance reports contain information that can be used as a risk indicator to detect information security incidents.",
"Attributes": [
{
"ItemId": "d5-dr-de-b-2",
"Section": "Cyber Incident Management and Resilience (Domain 5)",
"SubSection": "Detection, Response, & Mitigation (DR)",
"Service": "aws"
}
],
"Checks": []
},
{
"Id": "d5-dr-de-b-3",
"Name": "D5.DR.De.B.3",
"Description": "Tools and processes are in place to detect, alert, and trigger the incident response program.",
"Attributes": [
{
"ItemId": "d5-dr-de-b-3",
"Section": "Cyber Incident Management and Resilience (Domain 5)",
"SubSection": "Detection, Response, & Mitigation (DR)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudwatch_changes_to_network_acls_alarm_configured",
"cloudwatch_changes_to_network_gateways_alarm_configured",
"cloudwatch_changes_to_network_route_tables_alarm_configured",
"cloudwatch_changes_to_vpcs_alarm_configured",
"elbv2_logging_enabled",
"elb_logging_enabled",
"guardduty_is_enabled",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"securityhub_enabled"
]
},
{
"Id": "d5-er-es-b-4",
"Name": "D5.ER.Es.B.4",
"Description": "Incidents are classified, logged and tracked.",
"Attributes": [
{
"ItemId": "d5-er-es-b-4",
"Section": "Cyber Incident Management and Resilience (Domain 5)",
"SubSection": "Escalation and Reporting (ER)",
"Service": "aws"
}
],
"Checks": [
"guardduty_no_high_severity_findings"
]
},
{
"Id": "d5-ir-pl-b-6",
"Name": "D5.IR.Pl.B.6",
"Description": "The institution plans to use business continuity, disaster recovery, and data backup programs to recover operations following an incident.",
"Attributes": [
{
"ItemId": "d5-ir-pl-b-6",
"Section": "Cyber Incident Management and Resilience (Domain 5)",
"SubSection": "Incident Resilience Planning & Strategy (IR)",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"elbv2_deletion_protection",
"rds_instance_enhanced_monitoring_enabled",
"rds_instance_backup_enabled",
"rds_instance_deletion_protection",
"rds_instance_multi_az",
"rds_instance_backup_enabled",
"s3_bucket_object_versioning"
]
}
]
}
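
A quick way to catch the kind of repeated entries deduplicated above is to lint each framework file for duplicate check ids before committing it. A minimal sketch in Python against the structure shown above; the filename ffiec_aws.json is an assumption, so adjust it to wherever these framework files actually live in the repo:

import json
from collections import Counter

def duplicate_checks(path: str) -> dict[str, list[str]]:
    """Return {requirement Id: check ids listed more than once}."""
    with open(path) as f:
        framework = json.load(f)
    dupes = {}
    for req in framework["Requirements"]:
        # Count every check id in this requirement's Checks array.
        repeated = [c for c, n in Counter(req["Checks"]).items() if n > 1]
        if repeated:
            dupes[req["Id"]] = repeated
    return dupes

if __name__ == "__main__":
    for req_id, checks in duplicate_checks("ffiec_aws.json").items():  # hypothetical path
        print(f"{req_id}: {', '.join(checks)}")

Running this over a file before review keeps the Checks arrays free of accidental copy-paste repeats.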


@@ -0,0 +1,127 @@
{
"Framework": "GDPR",
"Version": "",
"Provider": "AWS",
"Description": "The General Data Protection Regulation (GDPR) is a new European privacy law that became enforceable on May 25, 2018. The GDPR replaces the EU Data Protection Directive, also known as Directive 95/46/EC. It's intended to harmonize data protection laws throughout the European Union (EU). It does this by applying a single data protection law that's binding throughout each EU member state.",
"Requirements": [
{
"Id": "article_25",
"Name": "Article 25 Data protection by design and by default",
"Description": "To obtain the latest version of the official guide, please visit https://gdpr-info.eu/art-25-gdpr/. Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects. The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual's intervention to an indefinite number of natural persons. An approved certification mechanism pursuant to Article 42 may be used as an element to demonstrate compliance with the requirements set out in paragraphs 1 and 2 of this Article.",
"Attributes": [
{
"ItemId": "article_25",
"Section": "Article 25 Data protection by design and by default",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_logs_s3_bucket_is_not_publicly_accessible",
"cloudtrail_multi_region_enabled",
"cloudtrail_logs_s3_bucket_access_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"config_recorder_all_regions_enabled",
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase",
"iam_password_policy_reuse_24",
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key",
"iam_support_role_created",
"iam_rotate_access_key_90_days",
"iam_user_mfa_enabled_console_access",
"iam_disable_90_days_credentials",
"kms_cmk_rotation_enabled",
"cloudwatch_log_metric_filter_for_s3_bucket_policy_changes",
"cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_changes_enabled",
"cloudwatch_log_metric_filter_and_alarm_for_aws_config_configuration_changes_enabled",
"cloudwatch_log_metric_filter_authentication_failures",
"cloudwatch_log_metric_filter_sign_in_without_mfa",
"cloudwatch_log_metric_filter_disable_or_scheduled_deletion_of_kms_cmk",
"cloudwatch_log_metric_filter_policy_changes",
"cloudwatch_log_metric_filter_root_usage",
"cloudwatch_log_metric_filter_security_group_changes",
"cloudwatch_log_metric_filter_unauthorized_api_calls",
"vpc_flow_logs_enabled"
]
},
{
"Id": "article_30",
"Name": "Article 30 Records of processing activities",
"Description": " To obtain the latest version of the official guide, please visit https://www.privacy-regulation.eu/en/article-30-records-of-processing-activities-GDPR.htm. Each controller and, where applicable, the controller's representative, shall maintain a record of processing activities under its responsibility. That record shall contain all of the following information like the name and contact details of the controller and where applicable, the joint controller, the controller's representative and the data protection officer, the purposes of the processing etc. Each processor and where applicable, the processor's representative shall maintain a record of all categories of processing activities carried out on behalf of a controller, containing the name and contact details of the processor or processors and of each controller on behalf of which the processor is acting, and, where applicable of the controller's or the processor's representative, and the data protection officer, where applicable, transfers of personal data to a third country or an international organisation, including the identification of that third country or international organisation and, in the case of transfers referred to in the second subparagraph of Article 49(1), the documentation of suitable safeguards. The records referred to in paragraphs 1 and 2 shall be in writing, including in electronic form. The controller or the processor and, where applicable, the controller's or the processor's representative, shall make the record available to the supervisory authority on request. The obligations referred to in paragraphs 1 and 2 shall not apply to an enterprise or an organisation employing fewer than 250 persons unless the processing it carries out is likely to result in a risk to the rights and freedoms of data subjects, the processing is not occasional, or the processing includes special categories of data as referred to in Article 9(1) or personal data relating to criminal convictions and offences referred to in Article 10.",
"Attributes": [
{
"ItemId": "article_30",
"Section": "Article 30 Records of processing activities",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudtrail_kms_encryption_enabled",
"config_recorder_all_regions_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"kms_cmk_rotation_enabled",
"redshift_cluster_audit_logging",
"vpc_flow_logs_enabled"
]
},
{
"Id": "article_32",
"Name": "Article 32 Security of processing",
"Description": " To obtain the latest version of the official guide, please visit https://gdpr-info.eu/art-32-gdpr/. Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate. In assessing the appropriate level of security account shall be taken in particular of the risks that are presented by processing, in particular from accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to personal data transmitted, stored or otherwise processed. Adherence to an approved code of conduct as referred to in Article 40 or an approved certification mechanism as referred to in Article 42 may be used as an element by which to demonstrate compliance with the requirements set out in paragraph 1 of this Article. The controller and processor shall take steps to ensure that any natural person acting under the authority of the controller or the processor who has access to personal data does not process them except on instructions from the controller, unless he or she is required to do so by Union or Member State law.",
"Attributes": [
{
"ItemId": "article_32",
"Section": "Article 32 Security of processing",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"cloudfront_distributions_https_enabled",
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"dynamodb_accelerator_cluster_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_volume_encryption",
"efs_encryption_at_rest_enabled",
"elb_ssl_listeners",
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_storage_encrypted",
"rds_instance_backup_enabled",
"rds_instance_integration_cloudwatch_logs",
"rds_instance_storage_encrypted",
"redshift_cluster_automated_snapshot",
"redshift_cluster_audit_logging",
"s3_bucket_default_encryption",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled"
]
}
]
}
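
Because the GDPR articles above map overlapping sets of checks, an inverse index (check id to requirements) makes it easy to see every article a single failing check touches. A small sketch under the same assumptions as before; the filename gdpr_aws.json is hypothetical:

import json
from collections import defaultdict

def checks_to_requirements(path: str) -> dict[str, list[str]]:
    """Map each check id to the Names of the requirements that cite it."""
    with open(path) as f:
        framework = json.load(f)
    index = defaultdict(list)
    for req in framework["Requirements"]:
        # dict.fromkeys preserves order while dropping accidental repeats.
        for check in dict.fromkeys(req["Checks"]):
            index[check].append(req["Name"])
    return dict(index)

index = checks_to_requirements("gdpr_aws.json")  # hypothetical path
print(index.get("cloudtrail_multi_region_enabled", []))

For this file the lookup above would list Article 25 and Article 30, since both cite cloudtrail_multi_region_enabled.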


@@ -0,0 +1,347 @@
{
"Framework": "GxP-21-CFR-Part-11",
"Version": "",
"Provider": "AWS",
"Description": "GxP refers to the regulations and guidelines that are applicable to life sciences organizations that make food and medical products. Medical products that fall under this include medicines, medical devices, and medical software applications. The overall intent of GxP requirements is to ensure that food and medical products are safe for consumers. It's also to ensure the integrity of data that's used to make product-related safety decisions.",
"Requirements": [
{
"Id": "11.10-a",
"Name": "11.10(a)",
"Description": "Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following: (a) Validation of systems to ensure accuracy, reliability, consistent intended performance, and the ability to discern invalid or altered records.",
"Attributes": [
{
"ItemId": "11.10-a",
"Section": "11.10 Controls for closed systems",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_log_file_validation_enabled",
"dynamodb_tables_pitr_enabled",
"ec2_instance_managed_by_ssm",
"ec2_instance_older_than_specific_days",
"elbv2_deletion_protection",
"rds_instance_backup_enabled",
"rds_instance_deletion_protection",
"rds_instance_backup_enabled",
"rds_instance_multi_az",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning",
"ssm_managed_compliant_patching",
"ssm_managed_compliant_patching"
]
},
{
"Id": "11.10-c",
"Name": "11.10(c)",
"Description": "Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following: (c) Protection of records to enable their accurate and ready retrieval throughout the records retention period.",
"Attributes": [
{
"ItemId": "11.10-c",
"Section": "11.10 Controls for closed systems",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_kms_encryption_enabled",
"cloudwatch_log_group_retention_policy_specific_days_enabled",
"rds_instance_storage_encrypted",
"rds_instance_storage_encrypted",
"rds_snapshots_public_access",
"redshift_cluster_audit_logging",
"redshift_cluster_public_access",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_bucket_object_versioning",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"sagemaker_notebook_instance_encryption_enabled"
]
},
{
"Id": "11.10-d",
"Name": "11.10(d)",
"Description": "Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following: (d) Limiting system access to authorized individuals.",
"Attributes": [
{
"ItemId": "11.10-d",
"Section": "11.10 Controls for closed systems",
"Service": "aws"
}
],
"Checks": [
"ec2_ebs_public_snapshot",
"ec2_instance_profile_attached",
"ec2_instance_public_ip",
"ec2_instance_imdsv2_enabled",
"emr_cluster_master_nodes_no_public_ip",
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase",
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key",
"iam_rotate_access_key_90_days",
"iam_user_mfa_enabled_console_access",
"iam_user_mfa_enabled_console_access",
"iam_disable_90_days_credentials",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"secretsmanager_automatic_rotation_enabled",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "11.10-e",
"Name": "11.10(e)",
"Description": "Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following: (e) Use of secure, computer-generated, time-stamped audit trails to independently record the date and time of operator entries and actions that create, modify, or delete electronic records. Record changes shall not obscure previously recorded information. Such audit trail documentation shall be retained for a period at least as long as that required for the subject electronic records and shall be available for agency review and copying.",
"Attributes": [
{
"ItemId": "11.10-d",
"Section": "11.10 Controls for closed systems",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudwatch_log_group_retention_policy_specific_days_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "11.10-g",
"Name": "11.10(g)",
"Description": "Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following: (g) Use of authority checks to ensure that only authorized individuals can use the system, electronically sign a record, access the operation or computer system input or output device, alter a record, or perform the operation at hand.",
"Attributes": [
{
"ItemId": "11.10-g",
"Section": "11.10 Controls for closed systems",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_kms_cmk_encryption_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_public_snapshot",
"ec2_ebs_default_encryption",
"ec2_instance_profile_attached",
"ec2_instance_public_ip",
"ec2_instance_imdsv2_enabled",
"efs_encryption_at_rest_enabled",
"emr_cluster_master_nodes_no_public_ip",
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase",
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key",
"iam_rotate_access_key_90_days",
"iam_user_mfa_enabled_console_access",
"iam_user_mfa_enabled_console_access",
"iam_disable_90_days_credentials",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"secretsmanager_automatic_rotation_enabled",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "11.10-h",
"Name": "11.10(h)",
"Description": "Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following: (h) Use of device (e.g., terminal) checks to determine, as appropriate, the validity of the source of data input or operational instruction.",
"Attributes": [
{
"ItemId": "11.10-h",
"Section": "11.10 Controls for closed systems",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"ssm_managed_compliant_patching",
"ssm_managed_compliant_patching"
]
},
{
"Id": "11.10-k",
"Name": "11.10(k)",
"Description": "Persons who use closed systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to ensure that the signer cannot readily repudiate the signed record as not genuine. Such procedures and controls shall include the following: (k) Use of appropriate controls over systems documentation including: (1) Adequate controls over the distribution of, access to, and use of documentation for system operation and maintenance. (2) Revision and change control procedures to maintain an audit trail that documents time-sequenced development and modification of systems documentation.",
"Attributes": [
{
"ItemId": "11.10-k",
"Section": "11.10 Controls for closed systems",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"ec2_ebs_public_snapshot",
"emr_cluster_master_nodes_no_public_ip",
"rds_instance_integration_cloudwatch_logs",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_server_access_logging_enabled",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "11.30",
"Name": "11.30 Controls for open systems",
"Description": "Persons who use open systems to create, modify, maintain, or transmit electronic records shall employ procedures and controls designed to ensure the authenticity, integrity, and, as appropriate, the confidentiality of electronic records from the point of their creation to the point of their receipt. Such procedures and controls shall include those identified in 11.10, as appropriate, and additional measures such as document encryption and use of appropriate digital signature standards to ensure, as necessary under the circumstances, record authenticity, integrity, and confidentiality.",
"Attributes": [
{
"ItemId": "11.30",
"Section": "11.30 Controls for open systems",
"Service": "aws"
}
],
"Checks": [
"apigateway_client_certificate_enabled",
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_default_encryption",
"efs_encryption_at_rest_enabled",
"elbv2_insecure_ssl_ciphers",
"elb_ssl_listeners",
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"kms_cmk_rotation_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_storage_encrypted",
"rds_instance_storage_encrypted",
"redshift_cluster_audit_logging",
"s3_bucket_default_encryption",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled"
]
},
{
"Id": "11.200",
"Name": "11.200 Electronic signature components and controls",
"Description": "(a) Electronic signatures that are not based upon biometrics shall: (1) Employ at least two distinct identification components such as an identification code and password. (i) When an individual executes a series of signings during a single, continuous period of controlled system access, the first signing shall be executed using all electronic signature components; subsequent signings shall be executed using at least one electronic signature component that is only executable by, and designed to be used only by, the individual. (ii) When an individual executes one or more signings not performed during a single, continuous period of controlled system access, each signing shall be executed using all of the electronic signature components. (2) Be used only by their genuine owners; and (3) Be administered and executed to ensure that attempted use of an individual's electronic signature by anyone other than its genuine owner requires collaboration of two or more individuals.",
"Attributes": [
{
"ItemId": "11.200",
"Section": "11.200 Electronic signature components and controls",
"Service": "aws"
}
],
"Checks": [
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key",
"iam_rotate_access_key_90_days",
"iam_user_mfa_enabled_console_access",
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "11.300-b",
"Name": "11.300(b)",
"Description": "Persons who use electronic signatures based upon use of identification codes in combination with passwords shall employ controls to ensure their security and integrity. Such controls shall include: (b) Ensuring that identification code and password issuances are periodically checked, recalled, or revised (e.g., to cover such events as password aging).",
"Attributes": [
{
"ItemId": "11.300-b",
"Section": "11.300 Controls for identification codes/passwords",
"Service": "aws"
}
],
"Checks": [
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase",
"iam_rotate_access_key_90_days",
"iam_disable_90_days_credentials",
"secretsmanager_automatic_rotation_enabled"
]
},
{
"Id": "11.300-d",
"Name": "11.300(d)",
"Description": "Persons who use electronic signatures based upon use of identification codes in combination with passwords shall employ controls to ensure their security and integrity. Such controls shall include: (d) Use of transaction safeguards to prevent unauthorized use of passwords and/or identification codes, and to detect and report in an immediate and urgent manner any attempts at their unauthorized use to the system security unit, and, as appropriate, to organizational management.",
"Attributes": [
{
"ItemId": "11.300-d",
"Section": "11.300 Controls for identification codes/passwords",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"guardduty_is_enabled",
"securityhub_enabled"
]
}
]
}
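
The 11.10(e) entry above originally carried the ItemId of 11.10(d), which is exactly the kind of drift a structural lint catches. A sketch, assuming the convention visible throughout these files that each Attribute's ItemId echoes the requirement's Id; the filename is hypothetical:

import json

REQUIRED_KEYS = {"Id", "Name", "Description", "Attributes", "Checks"}

def lint_framework(path: str) -> list[str]:
    """Flag missing keys and Attributes whose ItemId does not echo the Id."""
    with open(path) as f:
        framework = json.load(f)
    problems = []
    for req in framework["Requirements"]:
        missing = REQUIRED_KEYS - req.keys()
        if missing:
            problems.append(f"{req.get('Id', '?')}: missing {sorted(missing)}")
        for attr in req.get("Attributes", []):
            if attr.get("ItemId") != req.get("Id"):
                problems.append(f"{req.get('Id')}: ItemId {attr.get('ItemId')!r} does not match Id")
    return problems

print("\n".join(lint_framework("gxp_21_cfr_part_11_aws.json")))  # hypothetical path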


@@ -0,0 +1,281 @@
{
"Framework": "GxP-EU-Annex-11",
"Version": "",
"Provider": "AWS",
"Description": "The GxP EU Annex 11 framework is the European equivalent to the FDA 21 CFR part 11 framework in the United States. This annex applies to all forms of computerized systems that are used as part of Good Manufacturing Practices (GMP) regulated activities. A computerized system is a set of software and hardware components that together fulfill certain functionalities. The application should be validated and IT infrastructure should be qualified. Where a computerized system replaces a manual operation, there should be no resultant decrease in product quality, process control, or quality assurance. There should be no increase in the overall risk of the process.",
"Requirements": [
{
"Id": "1-risk-management",
"Name": "1 Risk Management",
"Description": "Risk management should be applied throughout the lifecycle of the computerised system taking into account patient safety, data integrity and product quality. As part of a risk management system, decisions on the extent of validation and data integrity controls should be based on a justified and documented risk assessment of the computerised system.",
"Attributes": [
{
"ItemId": "1-risk-management",
"Section": "General",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"securityhub_enabled"
]
},
{
"Id": "5-data",
"Name": "5 Data",
"Description": "Computerised systems exchanging data electronically with other systems should include appropriate built-in checks for the correct and secure entry and processing of data, in order to minimize the risks.",
"Attributes": [
{
"ItemId": "5-data",
"Section": "Operational Phase",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "7.1-data-storage-damage-protection",
"Name": "7.1 Data Storage - Damage Protection",
"Description": "Data should be secured by both physical and electronic means against damage. Stored data should be checked for accessibility, readability and accuracy. Access to data should be ensured throughout the retention period.",
"Attributes": [
{
"ItemId": "7.1-data-storage-damage-protection",
"Section": "Operational Phase",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_kms_encryption_enabled",
"dynamodb_accelerator_cluster_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"dynamodb_tables_pitr_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_default_encryption",
"efs_encryption_at_rest_enabled",
"eks_cluster_kms_cmk_encryption_in_secrets_enabled",
"opensearch_service_domains_encryption_at_rest_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_backup_enabled",
"rds_instance_storage_encrypted",
"rds_instance_backup_enabled",
"rds_instance_storage_encrypted",
"redshift_cluster_automated_snapshot",
"redshift_cluster_audit_logging",
"s3_bucket_default_encryption",
"s3_bucket_default_encryption",
"s3_bucket_object_versioning",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled"
]
},
{
"Id": "7.2-data-storage-backups",
"Name": "7.2 Data Storage - Backups",
"Description": "Regular back-ups of all relevant data should be done. Integrity and accuracy of backup data and the ability to restore the data should be checked during validation and monitored periodically.",
"Attributes": [
{
"ItemId": "7.2-data-storage-backups",
"Section": "Operational Phase",
"Service": "aws"
}
],
"Checks": [
"rds_instance_backup_enabled",
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "8.2-printouts-data-changes",
"Name": "8.2 Printouts - Data Changes",
"Description": "For records supporting batch release it should be possible to generate printouts indicating if any of the data has been changed since the original entry.",
"Attributes": [
{
"ItemId": "8.2-printouts-data-changes",
"Section": "Operational Phase",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled"
]
},
{
"Id": "9-audit-trails",
"Name": "9 Audit Trails",
"Description": "Consideration should be given, based on a risk assessment, to building into the system the creation of a record of all GMP-relevant changes and deletions (a system generated 'audit trail'). For change or deletion of GMP-relevant data the reason should be documented. Audit trails need to be available and convertible to a generally intelligible form and regularly reviewed.",
"Attributes": [
{
"ItemId": "9-audit-trails",
"Section": "Operational Phase",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled"
]
},
{
"Id": "10-change-and-configuration-management",
"Name": "10 Change and Configuration Management",
"Description": "Any changes to a computerised system including system configurations should only be made in a controlled manner in accordance with a defined procedure.",
"Attributes": [
{
"ItemId": "10-change-and-configuration-management",
"Section": "Operational Phase",
"Service": "aws"
}
],
"Checks": [
"config_recorder_all_regions_enabled"
]
},
{
"Id": "12.4-security-audit-trail",
"Name": "12.4 Security - Audit Trail",
"Description": "Management systems for data and for documents should be designed to record the identity of operators entering, changing, confirming or deleting data including date and time.",
"Attributes": [
{
"ItemId": "12.4-security-audit-trail",
"Section": "Operational Phase",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled"
]
},
{
"Id": "16-business-continuity",
"Name": "16 Business Continuity",
"Description": "For the availability of computerised systems supporting critical processes, provisions should be made to ensure continuity of support for those processes in the event of a system breakdown (e.g. a manual or alternative system). The time required to bring the alternative arrangements into use should be based on risk and appropriate for a particular system and the business process it supports. These arrangements should be adequately documented and tested.",
"Attributes": [
{
"ItemId": "16-business-continuity",
"Section": "Operational Phase",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "17-archiving",
"Name": "17 Archiving",
"Description": "Data may be archived. This data should be checked for accessibility, readability and integrity. If relevant changes are to be made to the system (e.g. computer equipment or programs), then the ability to retrieve the data should be ensured and tested.",
"Attributes": [
{
"ItemId": "17-archiving",
"Section": "Operational Phase",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "4.2-validation-documentation-change-control",
"Name": "4.2 Validation - Documentation Change Control",
"Description": "Validation documentation should include change control records (if applicable) and reports on any deviations observed during the validation process.",
"Attributes": [
{
"ItemId": "4.2-validation-documentation-change-control",
"Section": "Project Phase",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled"
]
},
{
"Id": "4.5-validation-development-quality",
"Name": "4.5 Validation - Development Quality",
"Description": "The regulated user should take all reasonable steps, to ensure that the system has been developed in accordance with an appropriate quality management system. The supplier should be assessed appropriately.",
"Attributes": [
{
"ItemId": "4.5-validation-development-quality",
"Section": "Project Phase",
"Service": "aws"
}
],
"Checks": [
"config_recorder_all_regions_enabled"
]
},
{
"Id": "4.6-validation-quality-performance",
"Name": "4.6 Validation - Quality and Performance",
"Description": "For the validation of bespoke or customised computerised systems there should be a process in place that ensures the formal assessment and reporting of quality and performance measures for all the life-cycle stages of the system.",
"Attributes": [
{
"ItemId": "4.6-validation-quality-performance",
"Section": "Project Phase",
"Service": "aws"
}
],
"Checks": [
"config_recorder_all_regions_enabled"
]
},
{
"Id": "4.8-validation-data-transfer",
"Name": "4.8 Validation - Data Transfer",
"Description": "If data are transferred to another data format or system, validation should include checks that data are not altered in value and/or meaning during this migration process.",
"Attributes": [
{
"ItemId": "4.8-validation-data-transfer",
"Section": "Project Phase",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
}
]
}
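
Not every requirement maps to an automated check (the FFIEC file above, for instance, has entries with an empty Checks array, such as D3.PC.Se.B.1 and D5.DR.De.B.2), so a coverage count is a useful summary when reviewing one of these frameworks. A minimal sketch; the filename is hypothetical:

import json

def coverage(path: str) -> tuple[int, int]:
    """Count how many requirements carry at least one automated check."""
    with open(path) as f:
        reqs = json.load(f)["Requirements"]
    return sum(1 for r in reqs if r["Checks"]), len(reqs)

covered, total = coverage("gxp_eu_annex_11_aws.json")  # hypothetical path
print(f"{covered}/{total} requirements have at least one automated check")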


@@ -0,0 +1,780 @@
{
"Framework": "HIPAA",
"Version": "",
"Provider": "AWS",
"Description": "The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is legislation that helps US workers to retain health insurance coverage when they change or lose jobs. The legislation also seeks to encourage electronic health records to improve the efficiency and quality of the US healthcare system through improved information sharing.",
"Requirements": [
{
"Id": "164_308_a_1_ii_a",
"Name": "164.308(a)(1)(ii)(A) Risk analysis",
"Description": "Conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity or business associate.",
"Attributes": [
{
"ItemId": "164_308_a_1_ii_a",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"config_recorder_all_regions_enabled",
"guardduty_is_enabled"
]
},
{
"Id": "164_308_a_1_ii_b",
"Name": "164.308(a)(1)(ii)(B) Risk Management",
"Description": "Implement security measures sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level to comply with 164.306(a): Ensure the confidentiality, integrity, and availability of all electronic protected health information the covered entity or business associate creates, receives, maintains, or transmits.",
"Attributes": [
{
"ItemId": "164_308_a_1_ii_b",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"dynamodb_tables_pitr_enabled",
"ec2_ebs_public_snapshot",
"ec2_ebs_volume_encryption",
"ec2_ebs_default_encryption",
"ec2_instance_public_ip",
"ec2_instance_older_than_specific_days",
"efs_encryption_at_rest_enabled",
"elbv2_deletion_protection",
"elb_ssl_listeners",
"emr_cluster_master_nodes_no_public_ip",
"opensearch_service_domains_encryption_at_rest_enabled",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_no_root_access_key",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_backup_enabled",
"rds_instance_storage_encrypted",
"rds_instance_multi_az",
"rds_instance_storage_encrypted",
"rds_snapshots_public_access",
"redshift_cluster_audit_logging",
"redshift_cluster_public_access",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_bucket_object_versioning",
"s3_account_level_public_access_blocks",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "164_308_a_1_ii_d",
"Name": "164.308(a)(1)(ii)(D) Information system activity review",
"Description": "Implement procedures to regularly review records of information system activity, such as audit logs, access reports, and security incident tracking reports.",
"Attributes": [
{
"ItemId": "164_308_a_1_ii_d",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"guardduty_is_enabled",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"securityhub_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "164_308_a_3_i",
"Name": "164.308(a)(3)(i) Workforce security",
"Description": "Implement policies and procedures to ensure that all members of its workforce have appropriate access to electronic protected health information, as provided under paragraph (a)(4) of this section, and to prevent those workforce members who do not have access under paragraph (a)(4) of this section from obtaining access to electronic protected health information.",
"Attributes": [
{
"ItemId": "164_308_a_3_i",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_no_root_access_key",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"sagemaker_notebook_instance_without_direct_internet_access_configured"
]
},
{
"Id": "164_308_a_3_ii_a",
"Name": "164.308(a)(3)(ii)(A) Authorization and/or supervision",
"Description": "Implement procedures for the authorization and/or supervision of workforce members who work with electronic protected health information or in locations where it might be accessed.",
"Attributes": [
{
"ItemId": "164_308_a_3_ii_a",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"guardduty_is_enabled",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"iam_user_mfa_enabled_console_access",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"securityhub_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "164_308_a_3_ii_b",
"Name": "164.308(a)(3)(ii)(B) Workforce clearance procedure",
"Description": "Implement procedures to determine that the access of a workforce member to electronic protected health information is appropriate.",
"Attributes": [
{
"ItemId": "164_308_a_3_ii_b",
"Section": "164.308 Administrative Safeguards",
"Service": "iam"
}
],
"Checks": [
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_no_root_access_key",
"iam_disable_90_days_credentials"
]
},
{
"Id": "164_308_a_3_ii_c",
"Name": "164.308(a)(3)(ii)(C) Termination procedures",
"Description": "Implement procedures for terminating access to electronic protected health information when the employment of, or other arrangement with, a workforce member ends or as required by determinations made as specified in paragraph (a)(3)(ii)(b).",
"Attributes": [
{
"ItemId": "164_308_a_3_ii_c",
"Section": "164.308 Administrative Safeguards",
"Service": "iam"
}
],
"Checks": [
"iam_rotate_access_key_90_days"
]
},
{
"Id": "164_308_a_4_i",
"Name": "164.308(a)(4)(i) Information access management",
"Description": "Implement policies and procedures for authorizing access to electronic protected health information that are consistent with the applicable requirements of subpart E of this part.",
"Attributes": [
{
"ItemId": "164_308_a_4_i",
"Section": "164.308 Administrative Safeguards",
"Service": "iam"
}
],
"Checks": [
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges"
]
},
{
"Id": "164_308_a_4_ii_a",
"Name": "164.308(a)(4)(ii)(A) Isolating health care clearinghouse functions",
"Description": "If a health care clearinghouse is part of a larger organization, the clearinghouse must implement policies and procedures that protect the electronic protected health information of the clearinghouse from unauthorized access by the larger organization.",
"Attributes": [
{
"ItemId": "164_308_a_4_ii_a",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"cloudfront_distributions_https_enabled",
"cloudtrail_kms_encryption_enabled",
"dynamodb_accelerator_cluster_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_volume_encryption",
"ec2_ebs_default_encryption",
"efs_encryption_at_rest_enabled",
"eks_cluster_kms_cmk_encryption_in_secrets_enabled",
"elb_ssl_listeners",
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_storage_encrypted",
"rds_instance_backup_enabled",
"rds_instance_integration_cloudwatch_logs",
"rds_instance_storage_encrypted",
"redshift_cluster_automated_snapshot",
"redshift_cluster_audit_logging",
"s3_bucket_default_encryption",
"s3_bucket_default_encryption",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled"
]
},
{
"Id": "164_308_a_4_ii_b",
"Name": "164.308(a)(4)(ii)(B) Access authorization",
"Description": "Implement policies and procedures for granting access to electronic protected health information, As one illustrative example, through access to a workstation, transaction, program, process, or other mechanism.",
"Attributes": [
{
"ItemId": "164_308_a_4_ii_b",
"Section": "164.308 Administrative Safeguards",
"Service": "iam"
}
],
"Checks": [
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges"
]
},
{
"Id": "164_308_a_4_ii_c",
"Name": "164.308(a)(4)(ii)(B) Access authorization",
"Description": "Implement policies and procedures that, based upon the covered entity's or the business associate's access authorization policies, establish, document, review, and modify a user's right of access to a workstation, transaction, program, or process.",
"Attributes": [
{
"ItemId": "164_308_a_4_ii_c",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"iam_password_policy_reuse_24",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_no_root_access_key",
"iam_rotate_access_key_90_days",
"iam_disable_90_days_credentials",
"secretsmanager_automatic_rotation_enabled"
]
},
{
"Id": "164_308_a_5_ii_b",
"Name": "164.308(a)(5)(ii)(B) Protection from malicious software",
"Description": "Procedures for guarding against, detecting, and reporting malicious software.",
"Attributes": [
{
"ItemId": "164_308_a_5_ii_b",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"ssm_managed_compliant_patching",
"ssm_managed_compliant_patching"
]
},
{
"Id": "164_308_a_5_ii_c",
"Name": "164.308(a)(5)(ii)(C) Log-in monitoring",
"Description": "Procedures for monitoring log-in attempts and reporting discrepancies.",
"Attributes": [
{
"ItemId": "164_308_a_5_ii_c",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"cloudwatch_log_metric_filter_authentication_failures",
"securityhub_enabled"
]
},
{
"Id": "164_308_a_5_ii_d",
"Name": "164.308(a)(5)(ii)(D) Password management",
"Description": "Procedures for creating, changing, and safeguarding passwords.",
"Attributes": [
{
"ItemId": "164_308_a_5_ii_d",
"Section": "164.308 Administrative Safeguards",
"Service": "iam"
}
],
"Checks": [
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase",
"iam_password_policy_reuse_24",
"iam_rotate_access_key_90_days",
"iam_disable_90_days_credentials"
]
},
{
"Id": "164_308_a_6_i",
"Name": "164.308(a)(6)(i) Security incident procedures",
"Description": "Implement policies and procedures to address security incidents.",
"Attributes": [
{
"ItemId": "164_308_a_6_i",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"cloudwatch_changes_to_network_acls_alarm_configured",
"cloudwatch_changes_to_network_gateways_alarm_configured",
"cloudwatch_changes_to_network_route_tables_alarm_configured",
"cloudwatch_changes_to_vpcs_alarm_configured",
"guardduty_is_enabled",
"cloudwatch_log_metric_filter_authentication_failures",
"cloudwatch_log_metric_filter_root_usage",
"securityhub_enabled"
]
},
{
"Id": "164_308_a_6_ii",
"Name": "164.308(a)(6)(ii) Response and reporting",
"Description": "Identify and respond to suspected or known security incidents; mitigate, to the extent practicable, harmful effects of security incidents that are known to the covered entity or business associate; and document security incidents and their outcomes.",
"Attributes": [
{
"ItemId": "164_308_a_6_ii",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"guardduty_is_enabled",
"guardduty_no_high_severity_findings",
"cloudwatch_log_metric_filter_authentication_failures",
"cloudwatch_log_metric_filter_root_usage",
"s3_bucket_server_access_logging_enabled",
"securityhub_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "164_308_a_7_i",
"Name": "164.308(a)(7)(i) Contingency plan",
"Description": "Establish (and implement as needed) policies and procedures for responding to an emergency or other occurrence (for example, fire, vandalism, system failure, and natural disaster) that damages systems that contain electronic protected health information.",
"Attributes": [
{
"ItemId": "164_308_a_7_i",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_multi_az",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "164_308_a_7_ii_a",
"Name": "164.308(a)(7)(ii)(A) Data backup plan",
"Description": "Establish and implement procedures to create and maintain retrievable exact copies of electronic protected health information.",
"Attributes": [
{
"ItemId": "164_308_a_7_ii_a",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_multi_az",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "164_308_a_7_ii_b",
"Name": "164.308(a)(7)(ii)(B) Disaster recovery plan",
"Description": "Establish (and implement as needed) procedures to restore any loss of data.",
"Attributes": [
{
"ItemId": "164_308_a_7_ii_b",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_multi_az",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "164_308_a_7_ii_c",
"Name": "164.308(a)(7)(ii)(C) Emergency mode operation plan",
"Description": "Establish (and implement as needed) procedures to enable continuation of critical business processes for protection of the security of electronic protected health information while operating in emergency mode.",
"Attributes": [
{
"ItemId": "164_308_a_7_ii_c",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_multi_az",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "164_308_a_8",
"Name": "164.308(a)(8) Evaluation",
"Description": "Perform a periodic technical and nontechnical evaluation, based initially upon the standards implemented under this rule and subsequently, in response to environmental or operational changes affecting the security of electronic protected health information, that establishes the extent to which an entity's security policies and procedures meet the requirements of this subpart.",
"Attributes": [
{
"ItemId": "164_308_a_8",
"Section": "164.308 Administrative Safeguards",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "164_312_a_1",
"Name": "164.312(a)(1) Access control",
"Description": "Implement technical policies and procedures for electronic information systems that maintain electronic protected health information to allow access only to those persons or software programs that have been granted access rights as specified in 164.308(a)(4).",
"Attributes": [
{
"ItemId": "164_312_a_1",
"Section": "164.312 Technical Safeguards",
"Service": "aws"
}
],
"Checks": [
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"emr_cluster_master_nodes_no_public_ip",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_user_mfa_enabled_console_access",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured"
]
},
{
"Id": "164_312_a_2_i",
"Name": "164.312(a)(2)(i) Unique user identification",
"Description": "Assign a unique name and/or number for identifying and tracking user identity.",
"Attributes": [
{
"ItemId": "164_312_a_2_i",
"Section": "164.312 Technical Safeguards",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"iam_no_root_access_key",
"s3_bucket_public_access"
]
},
{
"Id": "164_312_a_2_ii",
"Name": "164.312(a)(2)(ii) Emergency access procedure",
"Description": "Establish (and implement as needed) procedures for obtaining necessary electronic protected health information during an emergency.",
"Attributes": [
{
"ItemId": "164_312_a_2_ii",
"Section": "164.312 Technical Safeguards",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "164_312_a_2_iv",
"Name": "164.312(a)(2)(iv) Encryption and decryption",
"Description": "Implement a mechanism to encrypt and decrypt electronic protected health information.",
"Attributes": [
{
"ItemId": "164_312_a_2_iv",
"Section": "164.312 Technical Safeguards",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_kms_encryption_enabled",
"dynamodb_accelerator_cluster_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_default_encryption",
"efs_encryption_at_rest_enabled",
"eks_cluster_kms_cmk_encryption_in_secrets_enabled",
"opensearch_service_domains_encryption_at_rest_enabled",
"kms_cmk_rotation_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_storage_encrypted",
"rds_instance_storage_encrypted",
"redshift_cluster_audit_logging",
"s3_bucket_default_encryption",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled"
]
},
{
"Id": "164_312_b",
"Name": "164.312(b) Audit controls",
"Description": "Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information.",
"Attributes": [
{
"ItemId": "164_312_b",
"Section": "164.312 Technical Safeguards",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudtrail_log_file_validation_enabled",
"cloudwatch_log_group_retention_policy_specific_days_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"guardduty_is_enabled",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"securityhub_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "164_312_c_1",
"Name": "164.312(c)(1) Integrity",
"Description": "Implement policies and procedures to protect electronic protected health information from improper alteration or destruction.",
"Attributes": [
{
"ItemId": "164_312_c_1",
"Section": "164.312 Technical Safeguards",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"ec2_ebs_volume_encryption",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"s3_bucket_object_versioning"
]
},
{
"Id": "164_312_c_2",
"Name": "164.312(c)(2) Mechanism to authenticate electronic protected health information",
"Description": "Implement electronic mechanisms to corroborate that electronic protected health information has not been altered or destroyed in an unauthorized manner.",
"Attributes": [
{
"ItemId": "164_312_c_2",
"Section": "164.312 Technical Safeguards",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"ec2_ebs_volume_encryption",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"s3_bucket_object_versioning",
"vpc_flow_logs_enabled"
]
},
{
"Id": "164_312_d",
"Name": "164.312(d) Person or entity authentication",
"Description": "Implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed.",
"Attributes": [
{
"ItemId": "164_312_d",
"Section": "164.312 Technical Safeguards",
"Service": "iam"
}
],
"Checks": [
"iam_password_policy_reuse_24",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "164_312_e_1",
"Name": "164.312(e)(1) Transmission security",
"Description": "Implement technical security measures to guard against unauthorized access to electronic protected health information that is being transmitted over an electronic communications network.",
"Attributes": [
{
"ItemId": "164_312_e_1",
"Section": "164.312 Technical Safeguards",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"cloudfront_distributions_https_enabled",
"elb_ssl_listeners",
"opensearch_service_domains_node_to_node_encryption_enabled",
"awslambda_function_not_publicly_accessible",
"s3_bucket_secure_transport_policy",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "164_312_e_2_i",
"Name": "164.312(e)(2)(i) Integrity controls",
"Description": "Implement security measures to ensure that electronically transmitted electronic protected health information is not improperly modified without detection until disposed of.",
"Attributes": [
{
"ItemId": "164_312_e_2_i",
"Section": "164.312 Technical Safeguards",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elb_ssl_listeners",
"guardduty_is_enabled",
"s3_bucket_secure_transport_policy",
"s3_bucket_server_access_logging_enabled",
"securityhub_enabled"
]
},
{
"Id": "164_312_e_2_ii",
"Name": "164.312(e)(2)(ii) Encryption",
"Description": "Implement a mechanism to encrypt electronic protected health information whenever deemed appropriate.",
"Attributes": [
{
"ItemId": "164_312_e_2_ii",
"Section": "164.312 Technical Safeguards",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_kms_encryption_enabled",
"dynamodb_accelerator_cluster_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_default_encryption",
"efs_encryption_at_rest_enabled",
"eks_cluster_kms_cmk_encryption_in_secrets_enabled",
"elb_ssl_listeners",
"opensearch_service_domains_encryption_at_rest_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_storage_encrypted",
"rds_instance_storage_encrypted",
"redshift_cluster_audit_logging",
"s3_bucket_default_encryption",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled"
]
}
]
}
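Each requirement in the file above maps a HIPAA safeguard to a list of Prowler check IDs. The same check may legitimately appear under many requirements, but it should appear only once inside a single requirement's Checks array; because the arrays are plain JSON lists, duplicates are easy to introduce. A minimal validation sketch in Python, assuming the framework is saved as aws_hipaa.json (a hypothetical path):

import json

# Load the framework file (path is an assumption for illustration).
with open("aws_hipaa.json") as f:
    framework = json.load(f)

# Report any requirement whose Checks array lists the same check twice.
for req in framework["Requirements"]:
    checks = req["Checks"]
    if len(checks) != len(set(checks)):
        dupes = sorted({c for c in checks if checks.count(c) > 1})
        print(f"{req['Id']}: duplicate checks {dupes}")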

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,294 @@
{
"Framework": "PCI",
"Version": "3.2.1",
"Provider": "AWS",
"Description": "The Payment Card Industry Data Security Standard (PCI DSS) is a proprietary information security standard. It's administered by the PCI Security Standards Council, which was founded by American Express, Discover Financial Services, JCB International, MasterCard Worldwide, and Visa Inc. PCI DSS applies to entities that store, process, or transmit cardholder data (CHD) or sensitive authentication data (SAD). This includes, but isn't limited to, merchants, processors, acquirers, issuers, and service providers. The PCI DSS is mandated by the card brands and administered by the Payment Card Industry Security Standards Council.",
"Requirements": [
{
"Id": "autoscaling",
"Name": "Auto Scaling",
"Description": "This control checks whether your Auto Scaling groups that are associated with a load balancer are using Elastic Load Balancing health checks. PCI DSS does not require load balancing or highly available configurations. However, this check aligns with AWS best practices.",
"Attributes": [
{
"ItemId": "autoscaling",
"Service": "autoscaling"
}
],
"Checks": []
},
{
"Id": "cloudtrail",
"Name": "CloudTrail",
"Description": "This section contains recommendations for configuring CloudTrail resources and options.",
"Attributes": [
{
"ItemId": "cloudtrail",
"Service": "cloudtrail"
}
],
"Checks": [
"cloudtrail_kms_encryption_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_log_file_validation_enabled",
"cloudtrail_cloudwatch_logging_enabled"
]
},
{
"Id": "codebuild",
"Name": "CodeBuild",
"Description": "This section contains recommendations for configuring CodeBuild resources and options.",
"Attributes": [
{
"ItemId": "codebuild",
"Service": "codebuild"
}
],
"Checks": []
},
{
"Id": "config",
"Name": "Config",
"Description": "This section contains recommendations for configuring AWS Config.",
"Attributes": [
{
"ItemId": "config",
"Service": "config"
}
],
"Checks": [
"config_recorder_all_regions_enabled"
]
},
{
"Id": "cw",
"Name": "CloudWatch",
"Description": "This section contains recommendations for configuring CloudWatch resources and options.",
"Attributes": [
{
"ItemId": "cw",
"Service": "cloudwatch"
}
],
"Checks": [
"cloudwatch_log_metric_filter_root_usage"
]
},
{
"Id": "dms",
"Name": "DMS",
"Description": "This section contains recommendations for configuring AWS DMS resources and options.",
"Attributes": [
{
"ItemId": "dms",
"Service": "dms"
}
],
"Checks": []
},
{
"Id": "ec2",
"Name": "EC2",
"Description": "This section contains recommendations for configuring EC2 resources and options.",
"Attributes": [
{
"ItemId": "ec2",
"Service": "ec2"
}
],
"Checks": [
"ec2_ebs_public_snapshot",
"ec2_securitygroup_default_restrict_traffic",
"ec2_elastic_ip_unassgined",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389",
"vpc_flow_logs_enabled"
]
},
{
"Id": "elbv2",
"Name": "ELBV2",
"Description": "This section contains recommendations for configuring Elastic Load Balancer resources and options.",
"Attributes": [
{
"ItemId": "elbv2",
"Service": "elbv2"
}
],
"Checks": []
},
{
"Id": "elasticsearch",
"Name": "Elasticsearch",
"Description": "This section contains recommendations for configuring Elasticsearch resources and options.",
"Attributes": [
{
"ItemId": "elasticsearch",
"Service": "elasticsearch"
}
],
"Checks": [
"opensearch_service_domains_encryption_at_rest_enabled"
]
},
{
"Id": "guardduty",
"Name": "GuardDuty",
"Description": "This section contains recommendations for configuring AWS GuardDuty resources and options.",
"Attributes": [
{
"ItemId": "guardduty",
"Service": "guardduty"
}
],
"Checks": [
"guardduty_is_enabled"
]
},
{
"Id": "iam",
"Name": "IAM",
"Description": "This section contains recommendations for configuring AWS IAM resources and options.",
"Attributes": [
{
"ItemId": "iam",
"Service": "iam"
}
],
"Checks": [
"iam_no_root_access_key",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"iam_disable_90_days_credentials",
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase"
]
},
{
"Id": "kms",
"Name": "KMS",
"Description": "This section contains recommendations for configuring AWS KMS resources and options.",
"Attributes": [
{
"ItemId": "kms",
"Service": "kms"
}
],
"Checks": [
"kms_cmk_rotation_enabled"
]
},
{
"Id": "lambda",
"Name": "Lambda",
"Description": "This section contains recommendations for configuring Lambda resources and options.",
"Attributes": [
{
"ItemId": "lambda",
"Service": "lambda"
}
],
"Checks": [
"awslambda_function_url_public",
"awslambda_function_not_publicly_accessible"
]
},
{
"Id": "opensearch",
"Name": "OpenSearch",
"Description": "This section contains recommendations for configuring OpenSearch resources and options.",
"Attributes": [
{
"ItemId": "opensearch",
"Service": "opensearch"
}
],
"Checks": [
"opensearch_service_domains_encryption_at_rest_enabled"
]
},
{
"Id": "rds",
"Name": "RDS",
"Description": "This section contains recommendations for configuring AWS RDS resources and options.",
"Attributes": [
{
"ItemId": "rds",
"Service": "rds"
}
],
"Checks": [
"rds_snapshots_public_access",
"rds_instance_no_public_access"
]
},
{
"Id": "redshift",
"Name": "Redshift",
"Description": "This section contains recommendations for configuring AWS Redshift resources and options.",
"Attributes": [
{
"ItemId": "redshift",
"Service": "redshift"
}
],
"Checks": [
"redshift_cluster_public_access"
]
},
{
"Id": "s3",
"Name": "S3",
"Description": "This section contains recommendations for configuring AWS S3 resources and options.",
"Attributes": [
{
"ItemId": "s3",
"Service": "s3"
}
],
"Checks": [
"s3_bucket_policy_public_write_access",
"s3_bucket_public_access",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"s3_bucket_public_access"
]
},
{
"Id": "sagemaker",
"Name": "SageMaker",
"Description": "This section contains recommendations for configuring AWS Sagemaker resources and options.",
"Attributes": [
{
"ItemId": "sagemaker",
"Service": "sagemaker"
}
],
"Checks": [
"sagemaker_notebook_instance_without_direct_internet_access_configured"
]
},
{
"Id": "ssm",
"Name": "SSM",
"Description": "This section contains recommendations for configuring AWS SSM resources and options.",
"Attributes": [
{
"ItemId": "ssm",
"Service": "ssm"
}
],
"Checks": [
"ssm_managed_compliant_patching",
"ssm_managed_compliant_patching",
"ec2_instance_managed_by_ssm"
]
}
]
}
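The PCI file groups checks by service section rather than by numbered DSS requirement, so the useful traversal runs the other way: given a failed check, find every section it affects. A short sketch that inverts the file into a check-to-sections index, again assuming a hypothetical filename:

import json
from collections import defaultdict

# Path is an assumption for illustration.
with open("aws_pci_3.2.1.json") as f:
    framework = json.load(f)

# Build check ID -> [section Ids]; one check can serve several sections.
index = defaultdict(list)
for req in framework["Requirements"]:
    for check in req["Checks"]:
        index[check].append(req["Id"])

print(index["vpc_flow_logs_enabled"])  # e.g. ['ec2']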

View File

@@ -0,0 +1,200 @@
{
"Framework": "RBI-Cyber-Security-Framework",
"Version": "",
"Provider": "AWS",
"Description": "The Reserve Bank had prescribed a set of baseline cyber security controls for primary (Urban) cooperative banks (UCBs) in October 2018. On further examination, it has been decided to prescribe a comprehensive cyber security framework for the UCBs, as a graded approach, based on their digital depth and interconnectedness with the payment systems landscape, digital products offered by them and assessment of cyber security risk. The framework would mandate implementation of progressively stronger security measures based on the nature, variety and scale of digital product offerings of banks.",
"Requirements": [
{
"Id": "annex_i_1_1",
"Name": "Annex I (1.1)",
"Description": "UCBs should maintain an up-to-date business IT Asset Inventory Register containing the following fields, as a minimum: a) Details of the IT Asset (viz., hardware/software/network devices, key personnel, services, etc.), b. Details of systems where customer data are stored, c. Associated business applications, if any, d. Criticality of the IT asset (For example, High/Medium/Low).",
"Attributes": [
{
"ItemId": "annex_i_1_1",
"Service": "ec2"
}
],
"Checks": [
"ec2_instance_managed_by_ssm"
]
},
{
"Id": "annex_i_1_3",
"Name": "Annex I (1.3)",
"Description": "Appropriately manage and provide protection within and outside UCB/network, keeping in mind how the data/information is stored, transmitted, processed, accessed and put to use within/outside the UCBs network, and level of risk they are exposed to depending on the sensitivity of the data/information.",
"Attributes": [
{
"ItemId": "annex_i_1_3",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"apigateway_client_certificate_enabled",
"cloudtrail_kms_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_public_snapshot",
"ec2_ebs_volume_encryption",
"ec2_instance_public_ip",
"efs_encryption_at_rest_enabled",
"elbv2_insecure_ssl_ciphers",
"elb_ssl_listeners",
"emr_cluster_master_nodes_no_public_ip",
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"kms_cmk_rotation_enabled",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_storage_encrypted",
"rds_instance_no_public_access",
"rds_instance_storage_encrypted",
"rds_snapshots_public_access",
"redshift_cluster_audit_logging",
"redshift_cluster_public_access",
"s3_bucket_default_encryption",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "annex_i_5_1",
"Name": "Annex I (5.1)",
"Description": "The firewall configurations should be set to the highest security level and evaluation of critical device (such as firewall, network switches, security devices, etc.) configurations should be done periodically.",
"Attributes": [
{
"ItemId": "annex_i_5_1",
"Service": "aws"
}
],
"Checks": [
"apigateway_waf_acl_attached",
"elbv2_waf_acl_attached",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "annex_i_6",
"Name": "Annex I (6)",
"Description": "Put in place systems and processes to identify, track, manage and monitor the status of patches to servers, operating system and application software running at the systems used by the UCB officials (end-users). Implement and update antivirus protection for all servers and applicable end points preferably through a centralised system.",
"Attributes": [
{
"ItemId": "annex_i_6",
"Service": "aws"
}
],
"Checks": [
"guardduty_no_high_severity_findings",
"rds_instance_minor_version_upgrade_enabled",
"redshift_cluster_automatic_upgrades",
"ssm_managed_compliant_patching",
"ssm_managed_compliant_patching"
]
},
{
"Id": "annex_i_7_1",
"Name": "Annex I (7.1)",
"Description": "Disallow administrative rights on end-user workstations/PCs/laptops and provide access rights on a need to know and need to do basis.",
"Attributes": [
{
"ItemId": "annex_i_7_1",
"Service": "iam"
}
],
"Checks": [
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_no_root_access_key"
]
},
{
"Id": "annex_i_7_2",
"Name": "Annex I (7.2)",
"Description": "Passwords should be set as complex and lengthy and users should not use same passwords for all the applications/systems/devices.",
"Attributes": [
{
"ItemId": "annex_i_7_2",
"Service": "iam"
}
],
"Checks": [
"iam_password_policy_reuse_24"
]
},
{
"Id": "annex_i_7_3",
"Name": "Annex I (7.3)",
"Description": "Remote Desktop Protocol (RDP) which allows others to access the computer remotely over a network or over the internet should be always disabled and should be enabled only with the approval of the authorised officer of the UCB. Logs for such remote access shall be enabled and monitored for suspicious activities.",
"Attributes": [
{
"ItemId": "annex_i_7_3",
"Service": "vpc"
}
],
"Checks": [
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22"
]
},
{
"Id": "annex_i_7_4",
"Name": "Annex I (7.4)",
"Description": "Implement appropriate (e.g. centralised) systems and controls to allow, manage, log and monitor privileged/super user/administrative access to critical systems (servers/databases, applications, network devices etc.)",
"Attributes": [
{
"ItemId": "annex_i_7_4",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudwatch_log_group_retention_policy_specific_days_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"securityhub_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "annex_i_12",
"Name": "Annex I (12)",
"Description": "Take periodic back up of the important data and store this data off line (i.e., transferring important files to a storage device that can be detached from a computer/system after copying all the files).",
"Attributes": [
{
"ItemId": "annex_i_12",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
}
]
}
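A quick way to gauge how much of a framework is automatable is to count the requirements that carry at least one check; entries with an empty Checks array (and, in the SOC2 file below, those whose Soc_Type attribute is "manual") need human review instead. A sketch under the same assumptions, with a hypothetical filename:

import json

# Path is an assumption for illustration.
with open("aws_rbi_cyber_security_framework.json") as f:
    framework = json.load(f)

total = len(framework["Requirements"])
automated = sum(1 for r in framework["Requirements"] if r["Checks"])
print(f"{automated}/{total} requirements have at least one automated check")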

View File

@@ -0,0 +1,916 @@
{
"Framework": "SOC2",
"Version": "",
"Provider": "AWS",
"Description": "System and Organization Controls (SOC), defined by the American Institute of Certified Public Accountants (AICPA), is the name of a set of reports that's produced during an audit. It's intended for use by service organizations (organizations that provide information systems as a service to other organizations) to issue validated reports of internal controls over those information systems to the users of those services. The reports focus on controls grouped into five categories known as Trust Service Principles.",
"Requirements": [
{
"Id": "cc_1_1",
"Name": "CC1.1 COSO Principle 1: The entity demonstrates a commitment to integrity and ethical values",
"Description": "Sets the Tone at the Top - The board of directors and management, at all levels, demonstrate through their directives, actions, and behavior the importance of integrity and ethical values to support the functioning of the system of internal control.Establishes Standards of Conduct - The expectations of the board of directors and senior management concerning integrity and ethical values are defined in the entitys standards of conduct and understood at all levels of the entity and by outsourced service providers and business partners. Evaluates Adherence to Standards of Conduct - Processes are in place to evaluate the performance of individuals and teams against the entitys expected standards of conduct. Addresses Deviations in a Timely Manner - Deviations from the entitys expected standards of conduct are identified and remedied in a timely and consistent manner.",
"Attributes": [
{
"ItemId": "cc_1_1",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_1_2",
"Name": "CC1.2 COSO Principle 2: The board of directors demonstrates independence from management and exercises oversight of the development and performance of internal control",
"Description": "Establishes Oversight Responsibilities - The board of directors identifies and accepts its oversight responsibilities in relation to established requirements and expectations. Applies Relevant Expertise - The board of directors defines, maintains, and periodically evaluates the skills and expertise needed among its members to enable them to ask probing questions of senior management and take commensurate action. Operates Independently - The board of directors has sufficient members who are independent from management and objective in evaluations and decision making. Additional point of focus specifically related to all engagements using the trust services criteria: Supplements Board Expertise - The board of directors supplements its expertise relevant to security, availability, processing integrity, confidentiality, and privacy, as needed, through the use of a subcommittee or consultants.",
"Attributes": [
{
"ItemId": "cc_1_2",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_1_3",
"Name": "CC1.3 COSO Principle 3: Management establishes, with board oversight, structures, reporting lines, and appropriate authorities and responsibilities in the pursuit of objectives",
"Description": "Considers All Structures of the Entity - Management and the board of directors consider the multiple structures used (including operating units, legal entities, geographic distribution, and outsourced service providers) to support the achievement of objectives. Establishes Reporting Lines - Management designs and evaluates lines of reporting for each entity structure to enable execution of authorities and responsibilities and flow of information to manage the activities of the entity. Defines, Assigns, and Limits Authorities and Responsibilities - Management and the board of directors delegate authority, define responsibilities, and use appropriate processes and technology to assign responsibility and segregate duties as necessary at the various levels of the organization. Additional points of focus specifically related to all engagements using the trust services criteria: Addresses Specific Requirements When Defining Authorities and Responsibilities—Management and the board of directors consider requirements relevant to security, availability, processing integrity, confidentiality, and privacy when defining authorities and responsibilities. Considers Interactions With External Parties When Establishing Structures, Reporting Lines, Authorities, and Responsibilities — Management and the board of directors consider the need for the entity to interact with and monitor the activities of external parties when establishing structures, reporting lines, authorities, and responsibilities.",
"Attributes": [
{
"ItemId": "cc_1_3",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_disable_90_days_credentials"
]
},
{
"Id": "cc_1_4",
"Name": "CC1.4 COSO Principle 4: The entity demonstrates a commitment to attract, develop, and retain competent individuals in alignment with objectives",
"Description": "Establishes Policies and Practices - Policies and practices reflect expectations of competence necessary to support the achievement of objectives. Evaluates Competence and Addresses Shortcomings - The board of directors and management evaluate competence across the entity and in outsourced service providers in relation to established policies and practices and act as necessary to address shortcomings.Attracts, Develops, and Retains Individuals - The entity provides the mentoring and training needed to attract, develop, and retain sufficient and competent personnel and outsourced service providers to support the achievement of objectives.Plans and Prepares for Succession - Senior management and the board of directors develop contingency plans for assignments of responsibility important for internal control.Additional point of focus specifically related to all engagements using the trust services criteria:Considers the Background of Individuals - The entity considers the background of potential and existing personnel, contractors, and vendor employees when determining whether to employ and retain the individuals.Considers the Technical Competency of Individuals - The entity considers the technical competency of potential and existing personnel, contractors, and vendor employees when determining whether to employ and retain the individuals.Provides Training to Maintain Technical Competencies - The entity provides training programs, including continuing education and training, to ensure skill sets and technical competency of existing personnel, contractors, and vendor employees are developed and maintained.",
"Attributes": [
{
"ItemId": "cc_1_4",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_1_5",
"Name": "CC1.5 COSO Principle 5: The entity holds individuals accountable for their internal control responsibilities in the pursuit of objectives",
"Description": "Enforces Accountability Through Structures, Authorities, and Responsibilities - Management and the board of directors establish the mechanisms to communicate and hold individuals accountable for performance of internal control responsibilities across the entity and implement corrective action as necessary. Establishes Performance Measures, Incentives, and Rewards - Management and the board of directors establish performance measures, incentives, and other rewards appropriate for responsibilities at all levels of the entity, reflecting appropriate dimensions of performance and expected standards of conduct, and considering the achievement of both short-term and longer-term objectives.Evaluates Performance Measures, Incentives, and Rewards for Ongoing Relevance - Management and the board of directors align incentives and rewards with the fulfillment of internal control responsibilities in the achievement of objectives.Considers Excessive Pressures - Management and the board of directors evaluate and adjust pressures associated with the achievement of objectives as they assign responsibilities, develop performance measures, and evaluate performance. Evaluates Performance and Rewards or Disciplines Individuals - Management and the board of directors evaluate performance of internal control responsibilities, including adherence to standards of conduct and expected levels of competence, and provide rewards or exercise disciplinary action, as appropriate.",
"Attributes": [
{
"ItemId": "cc_1_5",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_2_1",
"Name": "CC2.1 COSO Principle 13: The entity obtains or generates and uses relevant, quality information to support the functioning of internal control",
"Description": "Identifies Information Requirements - A process is in place to identify the information required and expected to support the functioning of the other components of internal control and the achievement of the entitys objectives. Captures Internal and External Sources of Data - Information systems capture internal and external sources of data. Processes Relevant Data Into Information - Information systems process and transform relevant data into information. Maintains Quality Throughout Processing - Information systems produce information that is timely, current, accurate, complete, accessible, protected, verifiable, and retained. Information is reviewed to assess its relevance in supporting the internal control components.",
"Attributes": [
{
"ItemId": "cc_2_1",
"Section": "CC2.0 - Common Criteria Related to Communication and Information",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"config_recorder_all_regions_enabled"
]
},
{
"Id": "cc_2_2",
"Name": "CC2.2 COSO Principle 14: The entity internally communicates information, including objectives and responsibilities for internal control, necessary to support the functioning of internal control",
"Description": "Communicates Internal Control Information - A process is in place to communicate required information to enable all personnel to understand and carry out their internal control responsibilities. Communicates With the Board of Directors - Communication exists between management and the board of directors so that both have information needed to fulfill their roles with respect to the entitys objectives. Provides Separate Communication Lines - Separate communication channels, such as whistle-blower hotlines, are in place and serve as fail-safe mechanisms to enable anonymous or confidential communication when normal channels are inoperative or ineffective. Selects Relevant Method of Communication - The method of communication considers the timing, audience, and nature of the information. Additional point of focus specifically related to all engagements using the trust services criteria: Communicates Responsibilities - Entity personnel with responsibility for designing, developing, implementing,operating, maintaining, or monitoring system controls receive communications about their responsibilities, including changes in their responsibilities, and have the information necessary to carry out those responsibilities. Communicates Information on Reporting Failures, Incidents, Concerns, and Other Matters—Entity personnel are provided with information on how to report systems failures, incidents, concerns, and other complaints to personnel. Communicates Objectives and Changes to Objectives - The entity communicates its objectives and changes to those objectives to personnel in a timely manner. Communicates Information to Improve Security Knowledge and Awareness - The entity communicates information to improve security knowledge and awareness and to model appropriate security behaviors to personnel through a security awareness training program. Additional points of focus that apply only when an engagement using the trust services criteria is performed at the system level: Communicates Information About System Operation and Boundaries - The entity prepares and communicates information about the design and operation of the system and its boundaries to authorized personnel to enable them to understand their role in the system and the results of system operation. Communicates System Objectives - The entity communicates its objectives to personnel to enable them to carry out their responsibilities. Communicates System Changes - System changes that affect responsibilities or the achievement of the entity's objectives are communicated in a timely manner.",
"Attributes": [
{
"ItemId": "cc_2_2",
"Section": "CC2.0 - Common Criteria Related to Communication and Information",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_2_3",
"Name": "CC2.3 COSO Principle 15: The entity communicates with external parties regarding matters affecting the functioning of internal control",
"Description": "Communicates to External Parties - Processes are in place to communicate relevant and timely information to external parties, including shareholders, partners, owners, regulators, customers, financial analysts, and other external parties. Enables Inbound Communications - Open communication channels allow input from customers, consumers, suppliers, external auditors, regulators, financial analysts, and others, providing management and the board of directors with relevant information. Communicates With the Board of Directors - Relevant information resulting from assessments conducted by external parties is communicated to the board of directors. Provides Separate Communication Lines - Separate communication channels, such as whistle-blower hotlines, are in place and serve as fail-safe mechanisms to enable anonymous or confidential communication when normal channels are inoperative or ineffective. Selects Relevant Method of Communication - The method of communication considers the timing, audience, and nature of the communication and legal, regulatory, and fiduciary requirements and expectations. Communicates Objectives Related to Confidentiality and Changes to Objectives - The entity communicates, to external users, vendors, business partners and others whose products and services are part of the system, objectives and changes to objectives related to confidentiality. Additional point of focus that applies only to an engagement using the trust services criteria for privacy: Communicates Objectives Related to Privacy and Changes to Objectives - The entity communicates, to external users, vendors, business partners and others whose products and services are part of the system, objectives related to privacy and changes to those objectives. Additional points of focus that apply only when an engagement using the trust services criteria is performed at the system level: Communicates Information About System Operation and Boundaries - The entity prepares and communicates information about the design and operation of the system and its boundaries to authorized external users to permit users to understand their role in the system and the results of system operation. Communicates System Objectives - The entity communicates its system objectives to appropriate external users. Communicates System Responsibilities - External users with responsibility for designing, developing, implementing, operating, maintaining, and monitoring system controls receive communications about their responsibilities and have the information necessary to carry out those responsibilities. Communicates Information on Reporting System Failures, Incidents, Concerns, and Other Matters - External users are provided with information on how to report systems failures, incidents, concerns, and other complaints to appropriate personnel.",
"Attributes": [
{
"ItemId": "cc_2_3",
"Section": "CC2.0 - Common Criteria Related to Communication and Information",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_3_1",
"Name": "CC3.1 COSO Principle 6: The entity specifies objectives with sufficient clarity to enable the identification and assessment of risks relating to objectives",
"Description": "Operations Ojectives:Reflects Management's Choices - Operations objectives reflect management's choices about structure, industry considerations, and performance of the entity.Considers Tolerances for Risk - Management considers the acceptable levels of variation relative to the achievement of operations objectives.External Financial Reporting Objectives:Complies With Applicable Accounting Standards - Financial reporting objectives are consistent with accounting principles suitable and available for that entity. The accounting principles selected are appropriate in the circumstances.External Nonfinancial Reporting Objectives:Complies With Externally Established Frameworks - Management establishes objectives consistent with laws and regulations or standards and frameworks of recognized external organizations.Reflects Entity Activities - External reporting reflects the underlying transactions and events within a range of acceptable limits.Considers the Required Level of Precision—Management reflects the required level of precision and accuracy suitable for user needs and based on criteria established by third parties in nonfinancial reporting.Internal Reporting Objectives:Reflects Management's Choices - Internal reporting provides management with accurate and complete information regarding management's choices and information needed in managing the entity.Considers the Required Level of Precision—Management reflects the required level of precision and accuracy suitable for user needs in nonfinancial reporting objectives and materiality within financial reporting objectives.Reflects Entity Activities—Internal reporting reflects the underlying transactions and events within a range of acceptable limits.Compliance Objectives:Reflects External Laws and Regulations - Laws and regulations establish minimum standards of conduct, which the entity integrates into compliance objectives.Considers Tolerances for Risk - Management considers the acceptable levels of variation relative to the achievement of operations objectives.Additional point of focus specifically related to all engagements using the trust services criteria: Establishes Sub-objectives to Support Objectives—Management identifies sub-objectives related to security, availability, processing integrity, confidentiality, and privacy to support the achievement of the entitys objectives related to reporting, operations, and compliance.",
"Attributes": [
{
"ItemId": "cc_3_1",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled",
"config_recorder_all_regions_enabled"
]
},
{
"Id": "cc_3_2",
"Name": "CC3.2 COSO Principle 7: The entity identifies risks to the achievement of its objectives across the entity and analyzes risks as a basis for determining how the risks should be managed",
"Description": "Includes Entity, Subsidiary, Division, Operating Unit, and Functional Levels - The entity identifies and assesses risk at the entity, subsidiary, division, operating unit, and functional levels relevant to the achievement of objectives.Analyzes Internal and External Factors - Risk identification considers both internal and external factors and their impact on the achievement of objectives.Involves Appropriate Levels of Management - The entity puts into place effective risk assessment mechanisms that involve appropriate levels of management.Estimates Significance of Risks Identified - Identified risks are analyzed through a process that includes estimating the potential significance of the risk.Determines How to Respond to Risks - Risk assessment includes considering how the risk should be managed and whether to accept, avoid, reduce, or share the risk.Additional points of focus specifically related to all engagements using the trust services criteria:Identifies and Assesses Criticality of Information Assets and Identifies Threats and Vulnerabilities - The entity's risk identification and assessment process includes (1) identifying information assets, including physical devices and systems, virtual devices, software, data and data flows, external information systems, and organizational roles; (2) assessing the criticality of those information assets; (3) identifying the threats to the assets from intentional (including malicious) and unintentional acts and environmental events; and (4) identifying the vulnerabilities of the identified assets.",
"Attributes": [
{
"ItemId": "cc_3_2",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"ssm_managed_compliant_patching",
"guardduty_no_high_severity_findings",
"guardduty_is_enabled",
"ssm_managed_compliant_patching"
]
},
{
"Id": "cc_3_3",
"Name": "CC3.3 COSO Principle 8: The entity considers the potential for fraud in assessing risks to the achievement of objectives",
"Description": "Considers Various Types of Fraud - The assessment of fraud considers fraudulent reporting, possible loss of assets, and corruption resulting from the various ways that fraud and misconduct can occur.Assesses Incentives and Pressures - The assessment of fraud risks considers incentives and pressures.Assesses Opportunities - The assessment of fraud risk considers opportunities for unauthorized acquisition,use, or disposal of assets, altering the entitys reporting records, or committing other inappropriate acts.Assesses Attitudes and Rationalizations - The assessment of fraud risk considers how management and other personnel might engage in or justify inappropriate actions.Additional point of focus specifically related to all engagements using the trust services criteria: Considers the Risks Related to the Use of IT and Access to Information - The assessment of fraud risks includes consideration of threats and vulnerabilities that arise specifically from the use of IT and access to information.",
"Attributes": [
{
"ItemId": "cc_3_3",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_3_4",
"Name": "CC3.4 COSO Principle 9: The entity identifies and assesses changes that could significantly impact the system of internal control",
"Description": "Assesses Changes in the External Environment - The risk identification process considers changes to the regulatory, economic, and physical environment in which the entity operates.Assesses Changes in the Business Model - The entity considers the potential impacts of new business lines, dramatically altered compositions of existing business lines, acquired or divested business operations on the system of internal control, rapid growth, changing reliance on foreign geographies, and new technologies.Assesses Changes in Leadership - The entity considers changes in management and respective attitudes and philosophies on the system of internal control.Assess Changes in Systems and Technology - The risk identification process considers changes arising from changes in the entitys systems and changes in the technology environment.Assess Changes in Vendor and Business Partner Relationships - The risk identification process considers changes in vendor and business partner relationships.",
"Attributes": [
{
"ItemId": "cc_3_4",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "config",
"Soc_Type": "automated"
}
],
"Checks": [
"config_recorder_all_regions_enabled"
]
},
{
"Id": "cc_4_1",
"Name": "CC4.1 COSO Principle 16: The entity selects, develops, and performs ongoing and/or separate evaluations to ascertain whether the components of internal control are present and functioning",
"Description": "Considers a Mix of Ongoing and Separate Evaluations - Management includes a balance of ongoing and separate evaluations.Considers Rate of Change - Management considers the rate of change in business and business processes when selecting and developing ongoing and separate evaluations.Establishes Baseline Understanding - The design and current state of an internal control system are used to establish a baseline for ongoing and separate evaluations.Uses Knowledgeable Personnel - Evaluators performing ongoing and separate evaluations have sufficient knowledge to understand what is being evaluated.Integrates With Business Processes - Ongoing evaluations are built into the business processes and adjust to changing conditions.Adjusts Scope and Frequency—Management varies the scope and frequency of separate evaluations depending on risk.Objectively Evaluates - Separate evaluations are performed periodically to provide objective feedback.Considers Different Types of Ongoing and Separate Evaluations - Management uses a variety of different types of ongoing and separate evaluations, including penetration testing, independent certification made against established specifications (for example, ISO certifications), and internal audit assessments.",
"Attributes": [
{
"ItemId": "cc_4_1",
"Section": "CC4.0 - Monitoring Activities",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_4_2",
"Name": "CC4.2 COSO Principle 17: The entity evaluates and communicates internal control deficiencies in a timely manner to those parties responsible for taking corrective action, including senior management and the board of directors, as appropriate",
"Description": "Assesses Results - Management and the board of directors, as appropriate, assess results of ongoing and separate evaluations.Communicates Deficiencies - Deficiencies are communicated to parties responsible for taking corrective action and to senior management and the board of directors, as appropriate.Monitors Corrective Action - Management tracks whether deficiencies are remedied on a timely basis.",
"Attributes": [
{
"ItemId": "cc_4_2",
"Section": "CC4.0 - Monitoring Activities",
"Service": "guardduty",
"Soc_Type": "automated"
}
],
"Checks": [
"guardduty_is_enabled",
"guardduty_no_high_severity_findings"
]
},
{
"Id": "cc_5_1",
"Name": "CC5.1 COSO Principle 10: The entity selects and develops control activities that contribute to the mitigation of risks to the achievement of objectives to acceptable levels",
"Description": "Integrates With Risk Assessment - Control activities help ensure that risk responses that address and mitigate risks are carried out.Considers Entity-Specific Factors - Management considers how the environment, complexity, nature, and scope of its operations, as well as the specific characteristics of its organization, affect the selection and development of control activities.Determines Relevant Business Processes - Management determines which relevant business processes require control activities.Evaluates a Mix of 2017 Data Submitted Types - Control activities include a range and variety of controls and may include a balance of approaches to mitigate risks, considering both manual and automated controls, and preventive and detective controls.Considers at What Level Activities Are Applied - Management considers control activities at various levels in the entity.Addresses Segregation of Duties - Management segregates incompatible duties, and where such segregation is not practical, management selects and develops alternative control activities.",
"Attributes": [
{
"ItemId": "cc_5_1",
"Section": "CC5.0 - Control Activities",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_5_2",
"Name": "CC5.2 COSO Principle 11: The entity also selects and develops general control activities over technology to support the achievement of objectives",
"Description": "Determines Dependency Between the Use of Technology in Business Processes and Technology General Controls - Management understands and determines the dependency and linkage between business processes, automated control activities, and technology general controls.Establishes Relevant Technology Infrastructure Control Activities - Management selects and develops control activities over the technology infrastructure, which are designed and implemented to help ensure the completeness, accuracy, and availability of technology processing.Establishes Relevant Security Management Process Controls Activities - Management selects and develops control activities that are designed and implemented to restrict technology access rights to authorized users commensurate with their job responsibilities and to protect the entitys assets from external threats.Establishes Relevant Technology Acquisition, Development, and Maintenance Process Control Activities - Management selects and develops control activities over the acquisition, development, and maintenance of technology and its infrastructure to achieve managements objectives.",
"Attributes": [
{
"ItemId": "cc_5_2",
"Section": "CC5.0 - Control Activities",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_5_3",
"Name": "CCC5.3 COSO Principle 12: The entity deploys control activities through policies that establish what is expected and in procedures that put policies into action",
"Description": "Establishes Policies and Procedures to Support Deployment of Management s Directives - Management establishes control activities that are built into business processes and employees day-to-day activities through policies establishing what is expected and relevant procedures specifying actions.Establishes Responsibility and Accountability for Executing Policies and Procedures - Management establishes responsibility and accountability for control activities with management (or other designated personnel) of the business unit or function in which the relevant risks reside.Performs in a Timely Manner - Responsible personnel perform control activities in a timely manner as defined by the policies and procedures.Takes Corrective Action - Responsible personnel investigate and act on matters identified as a result of executing control activities.Performs Using Competent Personnel - Competent personnel with sufficient authority perform control activities with diligence and continuing focus.Reassesses Policies and Procedures - Management periodically reviews control activities to determine their continued relevance and refreshes them when necessary.",
"Attributes": [
{
"ItemId": "cc_5_3",
"Section": "CC5.0 - Control Activities",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_6_1",
"Name": "CC6.1 The entity implements logical access security software, infrastructure, and architectures over protected information assets to protect them from security events to meet the entity's objectives",
"Description": "Identifies and Manages the Inventory of Information Assets - The entity identifies, inventories, classifies, and manages information assets.Restricts Logical Access - Logical access to information assets, including hardware, data (at-rest, during processing, or in transmission), software, administrative authorities, mobile devices, output, and offline system components is restricted through the use of access control software and rule sets.Identifies and Authenticates Users - Persons, infrastructure and software are identified and authenticated prior to accessing information assets, whether locally or remotely.Considers Network Segmentation - Network segmentation permits unrelated portions of the entity's information system to be isolated from each other.Manages Points of Access - Points of access by outside entities and the types of data that flow through the points of access are identified, inventoried, and managed. The types of individuals and systems using each point of access are identified, documented, and managed.Restricts Access to Information Assets - Combinations of data classification, separate data structures, port restrictions, access protocol restrictions, user identification, and digital certificates are used to establish access control rules for information assets.Manages Identification and Authentication - Identification and authentication requirements are established, documented, and managed for individuals and systems accessing entity information, infrastructure and software.Manages Credentials for Infrastructure and Software - New internal and external infrastructure and software are registered, authorized, and documented prior to being granted access credentials and implemented on the network or access point. Credentials are removed and access is disabled when access is no longer required or the infrastructure and software are no longer in use.Uses Encryption to Protect Data - The entity uses encryption to supplement other measures used to protect data-at-rest, when such protections are deemed appropriate based on assessed risk.Protects Encryption Keys - Processes are in place to protect encryption keys during generation, storage, use, and destruction.",
"Attributes": [
{
"ItemId": "cc_6_1",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "s3",
"Soc_Type": "automated"
}
],
"Checks": [
"s3_bucket_public_access"
]
},
{
"Id": "cc_6_2",
"Name": "CC6.2 Prior to issuing system credentials and granting system access, the entity registers and authorizes new internal and external users whose access is administered by the entity",
"Description": "Prior to issuing system credentials and granting system access, the entity registers and authorizes new internal and external users whose access is administered by the entity. For those users whose access is administered by the entity, user system credentials are removed when user access is no longer authorized.Controls Access Credentials to Protected Assets - Information asset access credentials are created based on an authorization from the system's asset owner or authorized custodian.Removes Access to Protected Assets When Appropriate - Processes are in place to remove credential access when an individual no longer requires such access.Reviews Appropriateness of Access Credentials - The appropriateness of access credentials is reviewed on a periodic basis for unnecessary and inappropriate individuals with credentials.",
"Attributes": [
{
"ItemId": "cc_6_2",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "rds",
"Soc_Type": "automated"
}
],
"Checks": [
"rds_instance_no_public_access"
]
},
{
"Id": "cc_6_3",
"Name": "CC6.3 The entity authorizes, modifies, or removes access to data, software, functions, and other protected information assets based on roles, responsibilities, or the system design and changes, giving consideration to the concepts of least privilege and segregation of duties, to meet the entitys objectives",
"Description": "Creates or Modifies Access to Protected Information Assets - Processes are in place to create or modify access to protected information assets based on authorization from the assets owner.Removes Access to Protected Information Assets - Processes are in place to remove access to protected information assets when an individual no longer requires access.Uses Role-Based Access Controls - Role-based access control is utilized to support segregation of incompatible functions.",
"Attributes": [
{
"ItemId": "cc_6_3",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "iam",
"Soc_Type": "automated"
}
],
"Checks": [
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges"
]
},
{
"Id": "cc_6_4",
"Name": "CC6.4 The entity restricts physical access to facilities and protected information assets to authorized personnel to meet the entitys objectives",
"Description": "Creates or Modifies Physical Access - Processes are in place to create or modify physical access to facilities such as data centers, office spaces, and work areas, based on authorization from the system's asset owner.Removes Physical Access - Processes are in place to remove access to physical resources when an individual no longer requires access.Reviews Physical Access - Processes are in place to periodically review physical access to ensure consistency with job responsibilities.",
"Attributes": [
{
"ItemId": "cc_6_4",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_6_5",
"Name": "CC6.5 The entity discontinues logical and physical protections over physical assets only after the ability to read or recover data and software from those assets has been diminished and is no longer required to meet the entitys objectives",
"Description": "Identifies Data and Software for Disposal - Procedures are in place to identify data and software stored on equipment to be disposed and to render such data and software unreadable.Removes Data and Software From Entity Control - Procedures are in place to remove data and software stored on equipment to be removed from the physical control of the entity and to render such data and software unreadable.",
"Attributes": [
{
"ItemId": "cc_6_5",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_6_6",
"Name": "CC6.6 The entity implements logical access security measures to protect against threats from sources outside its system boundaries",
"Description": "Restricts Access — The types of activities that can occur through a communication channel (for example, FTP site, router port) are restricted.Protects Identification and Authentication Credentials — Identification and authentication credentials are protected during transmission outside its system boundaries.Requires Additional Authentication or Credentials — Additional authentication information or credentials are required when accessing the system from outside its boundaries.Implements Boundary Protection Systems — Boundary protection systems (for example, firewalls, demilitarized zones, and intrusion detection systems) are implemented to protect external access points from attempts and unauthorized access and are monitored to detect such attempts.",
"Attributes": [
{
"ItemId": "cc_6_6",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "ec2",
"Soc_Type": "automated"
}
],
"Checks": [
"ec2_instance_public_ip"
]
},
{
"Id": "cc_6_7",
"Name": "CC6.7 The entity restricts the transmission, movement, and removal of information to authorized internal and external users and processes, and protects it during transmission, movement, or removal to meet the entitys objectives",
"Description": "Restricts the Ability to Perform Transmission - Data loss prevention processes and technologies are used to restrict ability to authorize and execute transmission, movement and removal of information.Uses Encryption Technologies or Secure Communication Channels to Protect Data - Encryption technologies or secured communication channels are used to protect transmission of data and other communications beyond connectivity access points.Protects Removal Media - Encryption technologies and physical asset protections are used for removable media (such as USB drives and back-up tapes), as appropriate.Protects Mobile Devices - Processes are in place to protect mobile devices (such as laptops, smart phones and tablets) that serve as information assets.",
"Attributes": [
{
"ItemId": "cc_6_7",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "acm",
"Soc_Type": "automated"
}
],
"Checks": [
"acm_certificates_expiration_check"
]
},
{
"Id": "cc_6_8",
"Name": "CC6.8 The entity implements controls to prevent or detect and act upon the introduction of unauthorized or malicious software to meet the entitys objectives",
"Description": "Restricts Application and Software Installation - The ability to install applications and software is restricted to authorized individuals.Detects Unauthorized Changes to Software and Configuration Parameters - Processes are in place to detect changes to software and configuration parameters that may be indicative of unauthorized or malicious software.Uses a Defined Change Control Process - A management-defined change control process is used for the implementation of software.Uses Antivirus and Anti-Malware Software - Antivirus and anti-malware software is implemented and maintained to provide for the interception or detection and remediation of malware.Scans Information Assets from Outside the Entity for Malware and Other Unauthorized Software - Procedures are in place to scan information assets that have been transferred or returned to the entitys custody for malware and other unauthorized software and to remove any items detected prior to its implementation on the network.",
"Attributes": [
{
"ItemId": "cc_6_8",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "cc_7_1",
"Name": "CC7.1 To meet its objectives, the entity uses detection and monitoring procedures to identify (1) changes to configurations that result in the introduction of new vulnerabilities, and (2) susceptibilities to newly discovered vulnerabilities",
"Description": "Uses Defined Configuration Standards - Management has defined configuration standards.Monitors Infrastructure and Software - The entity monitors infrastructure and software for noncompliance with the standards, which could threaten the achievement of the entity's objectives.Implements Change-Detection Mechanisms - The IT system includes a change-detection mechanism (for example, file integrity monitoring tools) to alert personnel to unauthorized modifications of critical system files, configuration files, or content files.Detects Unknown or Unauthorized Components - Procedures are in place to detect the introduction of unknown or unauthorized components.Conducts Vulnerability Scans - The entity conducts vulnerability scans designed to identify potential vulnerabilities or misconfigurations on a periodic basis and after any significant change in the environment and takes action to remediate identified deficiencies on a timely basis.",
"Attributes": [
{
"ItemId": "cc_7_1",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled",
"ec2_instance_managed_by_ssm",
"ssm_managed_compliant_patching"
]
},
{
"Id": "cc_7_2",
"Name": "CC7.2 The entity monitors system components and the operation of those components for anomalies that are indicative of malicious acts, natural disasters, and errors affecting the entity's ability to meet its objectives; anomalies are analyzed to determine whether they represent security events",
"Description": "Implements Detection Policies, Procedures, and Tools - Detection policies and procedures are defined and implemented, and detection tools are implemented on Infrastructure and software to identify anomalies in the operation or unusual activity on systems. Procedures may include (1) a defined governance process for security event detection and management that includes provision of resources; (2) use of intelligence sources to identify newly discovered threats and vulnerabilities; and (3) logging of unusual system activities.Designs Detection Measures - Detection measures are designed to identify anomalies that could result from actual or attempted (1) compromise of physical barriers; (2) unauthorized actions of authorized personnel; (3) use of compromised identification and authentication credentials; (4) unauthorized access from outside the system boundaries; (5) compromise of authorized external parties; and (6) implementation or connection of unauthorized hardware and software.Implements Filters to Analyze Anomalies - Management has implemented procedures to filter, summarize, and analyze anomalies to identify security events.Monitors Detection Tools for Effective Operation - Management has implemented processes to monitor the effectiveness of detection tools.",
"Attributes": [
{
"ItemId": "cc_7_2",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"cloudtrail_cloudwatch_logging_enabled",
"cloudwatch_changes_to_network_acls_alarm_configured",
"cloudwatch_changes_to_network_gateways_alarm_configured",
"cloudwatch_changes_to_network_route_tables_alarm_configured",
"cloudwatch_changes_to_vpcs_alarm_configured",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"s3_bucket_server_access_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"cloudtrail_multi_region_enabled",
"securityhub_enabled",
"cloudwatch_log_group_retention_policy_specific_days_enabled",
"cloudtrail_multi_region_enabled",
"redshift_cluster_audit_logging",
"vpc_flow_logs_enabled",
"ec2_instance_imdsv2_enabled",
"guardduty_is_enabled",
"apigateway_logging_enabled",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22"
]
},
{
"Id": "cc_7_3",
"Name": "CC7.3 The entity evaluates security events to determine whether they could or have resulted in a failure of the entity to meet its objectives (security incidents) and, if so, takes actions to prevent or address such failures",
"Description": "Responds to Security Incidents - Procedures are in place for responding to security incidents and evaluating the effectiveness of those policies and procedures on a periodic basis.Communicates and Reviews Detected Security Events - Detected security events are communicated to and reviewed by the individuals responsible for the management of the security program and actions are taken, if necessary.Develops and Implements Procedures to Analyze Security Incidents - Procedures are in place to analyze security incidents and determine system impact.Assesses the Impact on Personal Information - Detected security events are evaluated to determine whether they could or did result in the unauthorized disclosure or use of personal information and whether there has been a failure to comply with applicable laws or regulations.Determines Personal Information Used or Disclosed - When an unauthorized use or disclosure of personal information has occurred, the affected information is identified.",
"Attributes": [
{
"ItemId": "cc_7_3",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"cloudwatch_log_group_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"guardduty_is_enabled",
"apigateway_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"securityhub_enabled",
"cloudwatch_changes_to_network_acls_alarm_configured",
"cloudwatch_changes_to_network_gateways_alarm_configured",
"cloudwatch_changes_to_network_route_tables_alarm_configured",
"cloudwatch_changes_to_vpcs_alarm_configured",
"elbv2_logging_enabled",
"elb_logging_enabled",
"s3_bucket_server_access_logging_enabled",
"cloudwatch_log_group_retention_policy_specific_days_enabled",
"vpc_flow_logs_enabled",
"guardduty_no_high_severity_findings"
]
},
{
"Id": "cc_7_4",
"Name": "CC7.4 The entity responds to identified security incidents by executing a defined incident response program to understand, contain, remediate, and communicate security incidents, as appropriate",
"Description": "Assigns Roles and Responsibilities - Roles and responsibilities for the design, implementation, maintenance, and execution of the incident response program are assigned, including the use of external resources when necessary.Contains Security Incidents - Procedures are in place to contain security incidents that actively threaten entity objectives.Mitigates Ongoing Security Incidents - Procedures are in place to mitigate the effects of ongoing security incidents.Ends Threats Posed by Security Incidents - Procedures are in place to end the threats posed by security incidents through closure of the vulnerability, removal of unauthorized access, and other remediation actions.Restores Operations - Procedures are in place to restore data and business operations to an interim state that permits the achievement of entity objectives. Develops and Implements Communication Protocols for Security Incidents - Protocols for communicating security incidents and actions taken to affected parties are developed and implemented to meet the entity's objectives.Obtains Understanding of Nature of Incident and Determines Containment Strategy - An understanding of the nature (for example, the method by which the incident occurred and the affected system resources) and severity of the security incident is obtained to determine the appropriate containment strategy, including (1) a determination of the appropriate response time frame, and (2) the determination and execution of the containment approach.Remediates Identified Vulnerabilities - Identified vulnerabilities are remediated through the development and execution of remediation activities.Communicates Remediation Activities - Remediation activities are documented and communicated in accordance with the incident response program.Evaluates the Effectiveness of Incident Response - The design of incident response activities is evaluated for effectiveness on a periodic basis.Periodically Evaluates Incidents - Periodically, management reviews incidents related to security, availability, processing integrity, confidentiality, and privacy and identifies the need for system changes based on incident patterns and root causes. Communicates Unauthorized Use and Disclosure - Events that resulted in unauthorized use or disclosure of personal information are communicated to the data subjects, legal and regulatory authorities, and others as required.Application of Sanctions - The conduct of individuals and organizations operating under the authority of the entity and involved in the unauthorized use or disclosure of personal information is evaluated and, if appropriate, sanctioned in accordance with entity policies and legal and regulatory requirements.",
"Attributes": [
{
"ItemId": "cc_7_4",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"cloudwatch_changes_to_network_acls_alarm_configured",
"cloudwatch_changes_to_network_gateways_alarm_configured",
"cloudwatch_changes_to_network_route_tables_alarm_configured",
"cloudwatch_changes_to_vpcs_alarm_configured",
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"efs_have_backup_enabled",
"guardduty_is_enabled",
"guardduty_no_high_severity_findings",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning",
"securityhub_enabled"
]
},
{
"Id": "cc_7_5",
"Name": "CC7.5 The entity identifies, develops, and implements activities to recover from identified security incidents",
"Description": "Restores the Affected Environment - The activities restore the affected environment to functional operation by rebuilding systems, updating software, installing patches, and changing configurations, as needed.Communicates Information About the Event - Communications about the nature of the incident, recovery actions taken, and activities required for the prevention of future security events are made to management and others as appropriate (internal and external).Determines Root Cause of the Event - The root cause of the event is determined.Implements Changes to Prevent and Detect Recurrences - Additional architecture or changes to preventive and detective controls, or both, are implemented to prevent and detect recurrences on a timely basis.Improves Response and Recovery Procedures - Lessons learned are analyzed, and the incident response plan and recovery procedures are improved.Implements Incident Recovery Plan Testing - Incident recovery plan testing is performed on a periodic basis. The testing includes (1) development of testing scenarios based on threat likelihood and magnitude; (2) consideration of relevant system components from across the entity that can impair availability; (3) scenarios that consider the potential for the lack of availability of key personnel; and (4) revision of continuity plans and systems based on test results.",
"Attributes": [
{
"ItemId": "cc_7_5",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_8_1",
"Name": "CC8.1 The entity authorizes, designs, develops or acquires, configures, documents, tests, approves, and implements changes to infrastructure, data, software, and procedures to meet its objectives",
"Description": "Manages Changes Throughout the System Lifecycle - A process for managing system changes throughout the lifecycle of the system and its components (infrastructure, data, software and procedures) is used to support system availability and processing integrity.Authorizes Changes - A process is in place to authorize system changes prior to development.Designs and Develops Changes - A process is in place to design and develop system changes.Documents Changes - A process is in place to document system changes to support ongoing maintenance of the system and to support system users in performing their responsibilities.Tracks System Changes - A process is in place to track system changes prior to implementation.Configures Software - A process is in place to select and implement the configuration parameters used to control the functionality of software.Tests System Changes - A process is in place to test system changes prior to implementation.Approves System Changes - A process is in place to approve system changes prior to implementation.Deploys System Changes - A process is in place to implement system changes.Identifies and Evaluates System Changes - Objectives affected by system changes are identified, and the ability of the modified system to meet the objectives is evaluated throughout the system development life cycle.Identifies Changes in Infrastructure, Data, Software, and Procedures Required to Remediate Incidents - Changes in infrastructure, data, software, and procedures required to remediate incidents to continue to meet objectives are identified, and the change process is initiated upon identification.Creates Baseline Configuration of IT Technology - A baseline configuration of IT and control systems is created and maintained.Provides for Changes Necessary in Emergency Situations - A process is in place for authorizing, designing, testing, approving and implementing changes necessary in emergency situations (that is, changes that need to be implemented in an urgent timeframe).Protects Confidential Information - The entity protects confidential information during system design, development, testing, implementation, and change processes to meet the entitys objectives related to confidentiality.Protects Personal Information - The entity protects personal information during system design, development, testing, implementation, and change processes to meet the entitys objectives related to privacy.",
"Attributes": [
{
"ItemId": "cc_8_1",
"Section": "CC8.0 - Change Management",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"config_recorder_all_regions_enabled"
]
},
{
"Id": "cc_9_1",
"Name": "CC9.1 The entity identifies, selects, and develops risk mitigation activities for risks arising from potential business disruptions",
"Description": "Considers Mitigation of Risks of Business Disruption - Risk mitigation activities include the development of planned policies, procedures, communications, and alternative processing solutions to respond to, mitigate, and recover from security events that disrupt business operations. Those policies and procedures include monitoring processes and information and communications to meet the entity's objectives during response, mitigation, and recovery efforts.Considers the Use of Insurance to Mitigate Financial Impact Risks - The risk management activities consider the use of insurance to offset the financial impact of loss events that would otherwise impair the ability of the entity to meet its objectives.",
"Attributes": [
{
"ItemId": "cc_9_1",
"Section": "CC9.0 - Risk Mitigation",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_9_2",
"Name": "CC9.2 The entity assesses and manages risks associated with vendors and business partners",
"Description": "Establishes Requirements for Vendor and Business Partner Engagements - The entity establishes specific requirements for a vendor and business partner engagement that includes (1) scope of services and product specifications, (2) roles and responsibilities, (3) compliance requirements, and (4) service levels.Assesses Vendor and Business Partner Risks - The entity assesses, on a periodic basis, the risks that vendors and business partners (and those entities vendors and business partners) represent to the achievement of the entity's objectives.Assigns Responsibility and Accountability for Managing Vendors and Business Partners - The entity assigns responsibility and accountability for the management of risks associated with vendors and business partners.Establishes Communication Protocols for Vendors and Business Partners - The entity establishes communication and resolution protocols for service or product issues related to vendors and business partners.Establishes Exception Handling Procedures From Vendors and Business Partners - The entity establishes exception handling procedures for service or product issues related to vendors and business partners.Assesses Vendor and Business Partner Performance - The entity periodically assesses the performance of vendors and business partners.Implements Procedures for Addressing Issues Identified During Vendor and Business Partner Assessments - The entity implements procedures for addressing issues identified with vendor and business partner relationships.Implements Procedures for Terminating Vendor and Business Partner Relationships - The entity implements procedures for terminating vendor and business partner relationships.Obtains Confidentiality Commitments from Vendors and Business Partners - The entity obtains confidentiality commitments that are consistent with the entitys confidentiality commitments and requirements from vendors and business partners who have access to confidential information.Assesses Compliance With Confidentiality Commitments of Vendors and Business Partners - On a periodic and as-needed basis, the entity assesses compliance by vendors and business partners with the entitys confidentiality commitments and requirements.Obtains Privacy Commitments from Vendors and Business Partners - The entity obtains privacy commitments, consistent with the entitys privacy commitments and requirements, from vendors and business partners who have access to personal information.Assesses Compliance with Privacy Commitments of Vendors and Business Partners - On a periodic and as-needed basis, the entity assesses compliance by vendors and business partners with the entitys privacy commitments and requirements and takes corrective action as necessary.",
"Attributes": [
{
"ItemId": "cc_9_2",
"Section": "CC9.0 - Risk Mitigation",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_a_1_1",
"Name": "A1.1 The entity maintains, monitors, and evaluates current processing capacity and use of system components (infrastructure, data, and software) to manage capacity demand and to enable the implementation of additional capacity to help meet its objectives",
"Description": "Measures Current Usage - The use of the system components is measured to establish a baseline for capacity management and to use when evaluating the risk of impaired availability due to capacity constraints.Forecasts Capacity - The expected average and peak use of system components is forecasted and compared to system capacity and associated tolerances. Forecasting considers capacity in the event of the failure of system components that constrain capacity.Makes Changes Based on Forecasts - The system change management process is initiated when forecasted usage exceeds capacity tolerances.",
"Attributes": [
{
"ItemId": "cc_a_1_1",
"Section": "CCA1.0 - Additional Criterial for Availability",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_a_1_2",
"Name": "A1.2 The entity authorizes, designs, develops or acquires, implements, operates, approves, maintains, and monitors environmental protections, software, data back-up processes, and recovery infrastructure to meet its objectives",
"Description": "Measures Current Usage - The use of the system components is measured to establish a baseline for capacity management and to use when evaluating the risk of impaired availability due to capacity constraints.Forecasts Capacity - The expected average and peak use of system components is forecasted and compared to system capacity and associated tolerances. Forecasting considers capacity in the event of the failure of system components that constrain capacity.Makes Changes Based on Forecasts - The system change management process is initiated when forecasted usage exceeds capacity tolerances.",
"Attributes": [
{
"ItemId": "cc_a_1_2",
"Section": "CCA1.0 - Additional Criterial for Availability",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"efs_have_backup_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_integration_cloudwatch_logs",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "cc_a_1_3",
"Name": "A1.3 The entity tests recovery plan procedures supporting system recovery to meet its objectives",
"Description": "Implements Business Continuity Plan Testing - Business continuity plan testing is performed on a periodic basis. The testing includes (1) development of testing scenarios based on threat likelihood and magnitude; (2) consideration of system components from across the entity that can impair the availability; (3) scenarios that consider the potential for the lack of availability of key personnel; and (4) revision of continuity plans and systems based on test results.Tests Integrity and Completeness of Back-Up Data - The integrity and completeness of back-up information is tested on a periodic basis.",
"Attributes": [
{
"ItemId": "cc_a_1_3",
"Section": "CCA1.0 - Additional Criterial for Availability",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "cc_c_1_1",
"Name": "C1.1 The entity identifies and maintains confidential information to meet the entitys objectives related to confidentiality",
"Description": "Identifies Confidential information - Procedures are in place to identify and designate confidential information when it is received or created and to determine the period over which the confidential information is to be retained.Protects Confidential Information from Destruction - Procedures are in place to protect confidential information from erasure or destruction during the specified retention period of the information",
"Attributes": [
{
"ItemId": "cc_c_1_1",
"Section": "CCC1.0 - Additional Criterial for Confidentiality",
"Service": "aws",
"Soc_Type": "automated"
}
],
"Checks": [
"rds_instance_deletion_protection"
]
},
{
"Id": "cc_c_1_2",
"Name": "C1.2 The entity disposes of confidential information to meet the entitys objectives related to confidentiality",
"Description": "Identifies Confidential Information for Destruction - Procedures are in place to identify confidential information requiring destruction when the end of the retention period is reached.Destroys Confidential Information - Procedures are in place to erase or otherwise destroy confidential information that has been identified for destruction.",
"Attributes": [
{
"ItemId": "cc_c_1_2",
"Section": "CCC1.0 - Additional Criterial for Confidentiality",
"Service": "s3",
"Soc_Type": "automated"
}
],
"Checks": [
"s3_bucket_object_versioning"
]
},
{
"Id": "p_1_1",
"Name": "P1.1 The entity provides notice to data subjects about its privacy practices to meet the entitys objectives related to privacy",
"Description": "The entity provides notice to data subjects about its privacy practices to meet the entitys objectives related to privacy. The notice is updated and communicated to data subjects in a timely manner for changes to the entitys privacy practices, including changes in the use of personal information, to meet the entitys objectives related to privacy.Communicates to Data Subjects - Notice is provided to data subjects regarding the following:Purpose for collecting personal informationChoice and consentTypes of personal information collectedMethods of collection (for example, use of cookies or other tracking techniques)Use, retention, and disposalAccessDisclosure to third partiesSecurity for privacyQuality, including data subjects responsibilities for qualityMonitoring and enforcementIf personal information is collected from sources other than the individual, such sources are described in the privacy notice.Provides Notice to Data Subjects - Notice is provided to data subjects (1) at or before the time personal information is collected or as soon as practical thereafter, (2) at or before the entity changes its privacy notice or as soon as practical thereafter, or (3) before personal information is used for new purposes not previously identified.Covers Entities and Activities in Notice - An objective description of the entities and activities covered is included in the entitys privacy notice.Uses Clear and Conspicuous Language - The entitys privacy notice is conspicuous and uses clear language.",
"Attributes": [
{
"ItemId": "p_1_1",
"Section": "P1.0 - Privacy Criteria Related to Notice and Communication of Objectives Related to Privacy",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_2_1",
"Name": "P2.1 The entity communicates choices available regarding the collection, use, retention, disclosure, and disposal of personal information to the data subjects and the consequences, if any, of each choice",
"Description": "The entity communicates choices available regarding the collection, use, retention, disclosure, and disposal of personal information to the data subjects and the consequences, if any, of each choice. Explicit consent for the collection, use, retention, disclosure, and disposal of personal information is obtained from data subjects or other authorized persons, if required. Such consent is obtained only for the intended purpose of the information to meet the entitys objectives related to privacy. The entitys basis for determining implicit consent for the collection, use, retention, disclosure, and disposal of personal information is documented.Communicates to Data Subjects - Data subjects are informed (a) about the choices available to them with respect to the collection, use, and disclosure of personal information and (b) that implicit or explicit consent is required to collect, use, and disclose personal information, unless a law or regulation specifically requires or allows otherwise.Communicates Consequences of Denying or Withdrawing Consent - When personal information is collected, data subjects are informed of the consequences of refusing to provide personal information or denying or withdrawing consent to use personal information for purposes identified in the notice.Obtains Implicit or Explicit Consent - Implicit or explicit consent is obtained from data subjects at or before the time personal information is collected or soon thereafter. The individuals preferences expressed in his or her consent are confirmed and implemented.Documents and Obtains Consent for New Purposes and Uses - If information that was previously collected is to be used for purposes not previously identified in the privacy notice, the new purpose is documented, the data subject is notified, and implicit or explicit consent is obtained prior to such new use or purpose.Obtains Explicit Consent for Sensitive Information - Explicit consent is obtained directly from the data subject when sensitive personal information is collected, used, or disclosed, unless a law or regulation specifically requires otherwise.",
"Attributes": [
{
"ItemId": "p_2_1",
"Section": "P2.0 - Privacy Criteria Related to Choice and Consent",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_3_1",
"Name": "P3.1 Personal information is collected consistent with the entitys objectives related to privacy",
"Description": "Limits the Collection of Personal Information - The collection of personal information is limited to that necessary to meet the entitys objectives.Collects Information by Fair and Lawful Means - Methods of collecting personal information are reviewed by management before they are implemented to confirm that personal information is obtained (a) fairly, without intimidation or deception, and (b) lawfully, adhering to all relevant rules of law, whether derived from statute or common law, relating to the collection of personal information.Collects Information From Reliable Sources - Management confirms that third parties from whom personal information is collected (that is, sources other than the individual) are reliable sources that collect information fairly and lawfully.Informs Data Subjects When Additional Information Is Acquired - Data subjects are informed if the entity develops or acquires additional information about them for its use.",
"Attributes": [
{
"ItemId": "p_3_1",
"Section": "P3.0 - Privacy Criteria Related to Collection",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_3_2",
"Name": "P3.2 For information requiring explicit consent, the entity communicates the need for such consent, as well as the consequences of a failure to provide consent for the request for personal information, and obtains the consent prior to the collection of the information to meet the entitys objectives related to privacy",
"Description": "Obtains Explicit Consent for Sensitive Information - Explicit consent is obtained directly from the data subject when sensitive personal information is collected, used, or disclosed, unless a law or regulation specifically requires otherwise.Documents Explicit Consent to Retain Information - Documentation of explicit consent for the collection, use, or disclosure of sensitive personal information is retained in accordance with objectives related to privacy.",
"Attributes": [
{
"ItemId": "p_3_2",
"Section": "P3.0 - Privacy Criteria Related to Collection",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_4_1",
"Name": "P4.1 The entity limits the use of personal information to the purposes identified in the entitys objectives related to privacy",
"Description": "Uses Personal Information for Intended Purposes - Personal information is used only for the intended purposes for which it was collected and only when implicit or explicit consent has been obtained unless a law or regulation specifically requires otherwise.",
"Attributes": [
{
"ItemId": "p_4_1",
"Section": "P4.0 - Privacy Criteria Related to Use, Retention, and Disposal",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_4_2",
"Name": "P4.2 The entity retains personal information consistent with the entitys objectives related to privacy",
"Description": "Retains Personal Information - Personal information is retained for no longer than necessary to fulfill the stated purposes, unless a law or regulation specifically requires otherwise.Protects Personal Information - Policies and procedures have been implemented to protect personal information from erasure or destruction during the specified retention period of the information.",
"Attributes": [
{
"ItemId": "p_4_2",
"Section": "P4.0 - Privacy Criteria Related to Use, Retention, and Disposal",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_4_3",
"Name": "P4.3 The entity securely disposes of personal information to meet the entitys objectives related to privacy",
"Description": "Captures, Identifies, and Flags Requests for Deletion - Requests for deletion of personal information are captured, and information related to the requests is identified and flagged for destruction to meet the entitys objectives related to privacy.Disposes of, Destroys, and Redacts Personal Information - Personal information no longer retained is anonymized, disposed of, or destroyed in a manner that prevents loss, theft, misuse, or unauthorized access.Destroys Personal Information - Policies and procedures are implemented to erase or otherwise destroy personal information that has been identified for destruction.",
"Attributes": [
{
"ItemId": "p_4_3",
"Section": "P4.0 - Privacy Criteria Related to Use, Retention, and Disposal",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_5_1",
"Name": "P5.1 The entity grants identified and authenticated data subjects the ability to access their stored personal information for review and, upon request, provides physical or electronic copies of that information to data subjects to meet the entitys objectives related to privacy",
"Description": "The entity grants identified and authenticated data subjects the ability to access their stored personal information for review and, upon request, provides physical or electronic copies of that information to data subjects to meet the entitys objectives related to privacy. If access is denied, data subjects are informed of the denial and reason for such denial, as required, to meet the entitys objectives related to privacy.Authenticates Data Subjects Identity - The identity of data subjects who request access to their personal information is authenticated before they are given access to that information.Permits Data Subjects Access to Their Personal Information - Data subjects are able to determine whether the entity maintains personal information about them and, upon request, may obtain access to their personal information.Provides Understandable Personal Information Within Reasonable Time - Personal information is provided to data subjects in an understandable form, in a reasonable time frame, and at a reasonable cost, if any.Informs Data Subjects If Access Is Denied - When data subjects are denied access to their personal information, the entity informs them of the denial and the reason for the denial in a timely manner, unless prohibited by law or regulation.",
"Attributes": [
{
"ItemId": "p_5_1",
"Section": "P5.0 - Privacy Criteria Related to Access",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_5_2",
"Name": "P5.2 The entity corrects, amends, or appends personal information based on information provided by data subjects and communicates such information to third parties, as committed or required, to meet the entitys objectives related to privacy",
"Description": "The entity corrects, amends, or appends personal information based on information provided by data subjects and communicates such information to third parties, as committed or required, to meet the entitys objectives related to privacy. If a request for correction is denied, data subjects are informed of the denial and reason for such denial to meet the entitys objectives related to privacy.Communicates Denial of Access Requests - Data subjects are informed, in writing, of the reason a request for access to their personal information was denied, the source of the entitys legal right to deny such access, if applicable, and the individuals right, if any, to challenge such denial, as specifically permitted or required by law or regulation.Permits Data Subjects to Update or Correct Personal Information - Data subjects are able to update or correct personal information held by the entity. The entity provides such updated or corrected information to third parties that were previously provided with the data subjects personal information consistent with the entitys objective related to privacy.Communicates Denial of Correction Requests - Data subjects are informed, in writing, about the reason a request for correction of personal information was denied and how they may appeal.",
"Attributes": [
{
"ItemId": "p_5_2",
"Section": "P5.0 - Privacy Criteria Related to Access",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_6_1",
"Name": "P6.1 The entity discloses personal information to third parties with the explicit consent of data subjects, and such consent is obtained prior to disclosure to meet the entitys objectives related to privacy",
"Description": "Communicates Privacy Policies to Third Parties - Privacy policies or other specific instructions or requirements for handling personal information are communicated to third parties to whom personal information is disclosed.Discloses Personal Information Only When Appropriate - Personal information is disclosed to third parties only for the purposes for which it was collected or created and only when implicit or explicit consent has been obtained from the data subject, unless a law or regulation specifically requires otherwise.Discloses Personal Information Only to Appropriate Third Parties - Personal information is disclosed only to third parties who have agreements with the entity to protect personal information in a manner consistent with the relevant aspects of the entitys privacy notice or other specific instructions or requirements. The entity has procedures in place to evaluate that the third parties have effective controls to meet the terms of the agreement, instructions, or requirements.",
"Attributes": [
{
"ItemId": "p_6_1",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_6_2",
"Name": "P6.2 The entity creates and retains a complete, accurate, and timely record of authorized disclosures of personal information to meet the entitys objectives related to privacy",
"Description": "Creates and Retains Record of Authorized Disclosures - The entity creates and maintains a record of authorized disclosures of personal information that is complete, accurate, and timely.",
"Attributes": [
{
"ItemId": "p_6_2",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_6_3",
"Name": "P6.3 The entity creates and retains a complete, accurate, and timely record of detected or reported unauthorized disclosures (including breaches) of personal information to meet the entitys objectives related to privacy",
"Description": "Creates and Retains Record of Detected or Reported Unauthorized Disclosures - The entity creates and maintains a record of detected or reported unauthorized disclosures of personal information that is complete, accurate, and timely.",
"Attributes": [
{
"ItemId": "p_6_3",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_6_4",
"Name": "P6.4 The entity obtains privacy commitments from vendors and other third parties who have access to personal information to meet the entitys objectives related to privacy",
"Description": "The entity obtains privacy commitments from vendors and other third parties who have access to personal information to meet the entitys objectives related to privacy. The entity assesses those parties compliance on a periodic and as-needed basis and takes corrective action, if necessary.Discloses Personal Information Only to Appropriate Third Parties - Personal information is disclosed only to third parties who have agreements with the entity to protect personal information in a manner consistent with the relevant aspects of the entitys privacy notice or other specific instructions or requirements. The entity has procedures in place to evaluate that the third parties have effective controls to meet the terms of the agreement, instructions, or requirements.Remediates Misuse of Personal Information by a Third Party - The entity takes remedial action in response to misuse of personal information by a third party to whom the entity has transferred such information.",
"Attributes": [
{
"ItemId": "p_6_4",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_6_5",
"Name": "P6.5 The entity obtains commitments from vendors and other third parties with access to personal information to notify the entity in the event of actual or suspected unauthorized disclosures of personal information",
"Description": "The entity obtains commitments from vendors and other third parties with access to personal information to notify the entity in the event of actual or suspected unauthorized disclosures of personal information. Such notifications are reported to appropriate personnel and acted on in accordance with established incident response procedures to meet the entitys objectives related to privacy.Remediates Misuse of Personal Information by a Third Party - The entity takes remedial action in response to misuse of personal information by a third party to whom the entity has transferred such information.Reports Actual or Suspected Unauthorized Disclosures - A process exists for obtaining commitments from vendors and other third parties to report to the entity actual or suspected unauthorized disclosures of personal information.",
"Attributes": [
{
"ItemId": "p_6_5",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_6_6",
"Name": "P6.6 The entity provides notification of breaches and incidents to affected data subjects, regulators, and others to meet the entitys objectives related to privacy",
"Description": "Remediates Misuse of Personal Information by a Third Party - The entity takes remedial action in response to misuse of personal information by a third party to whom the entity has transferred such information. Reports Actual or Suspected Unauthorized Disclosures - A process exists for obtaining commitments from vendors and other third parties to report to the entity actual or suspected unauthorized disclosures of personal information.",
"Attributes": [
{
"ItemId": "p_6_6",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_6_7",
"Name": "P6.7 The entity provides data subjects with an accounting of the personal information held and disclosure of the data subjects personal information, upon the data subjects request, to meet the entitys objectives related to privacy",
"Description": "Identifies Types of Personal Information and Handling Process - The types of personal information and sensitive personal information and the related processes, systems, and third parties involved in the handling of such information are identified. Captures, Identifies, and Communicates Requests for Information - Requests for an accounting of personal information held and disclosures of the data subjects personal information are captured, and information related to the requests is identified and communicated to data subjects to meet the entitys objectives related to privacy.",
"Attributes": [
{
"ItemId": "p_6_7",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_7_1",
"Name": "P7.1 The entity collects and maintains accurate, up-to-date, complete, and relevant personal information to meet the entitys objectives related to privacy",
"Description": "Ensures Accuracy and Completeness of Personal Information - Personal information is accurate and complete for the purposes for which it is to be used. Ensures Relevance of Personal Information - Personal information is relevant to the purposes for which it is to be used.",
"Attributes": [
{
"ItemId": "p_7_1",
"Section": "P7.0 - Privacy Criteria Related to Quality",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
},
{
"Id": "p_8_1",
"Name": "P8.1 The entity implements a process for receiving, addressing, resolving, and communicating the resolution of inquiries, complaints, and disputes from data subjects and others and periodically monitors compliance to meet the entitys objectives related to privacy",
"Description": "The entity implements a process for receiving, addressing, resolving, and communicating the resolution of inquiries, complaints, and disputes from data subjects and others and periodically monitors compliance to meet the entitys objectives related to privacy. Corrections and other necessary actions related to identified deficiencies are made or taken in a timely manner.Communicates to Data Subjects—Data subjects are informed about how to contact the entity with inquiries, complaints, and disputes.Addresses Inquiries, Complaints, and Disputes - A process is in place to address inquiries, complaints, and disputes.Documents and Communicates Dispute Resolution and Recourse - Each complaint is addressed, and the resolution is documented and communicated to the individual.Documents and Reports Compliance Review Results - Compliance with objectives related to privacy are reviewed and documented, and the results of such reviews are reported to management. If problems are identified, remediation plans are developed and implemented.Documents and Reports Instances of Noncompliance - Instances of noncompliance with objectives related to privacy are documented and reported and, if needed, corrective and disciplinary measures are taken on a timely basis.Performs Ongoing Monitoring - Ongoing procedures are performed for monitoring the effectiveness of controls over personal information and for taking timely corrective actions when necessary.",
"Attributes": [
{
"ItemId": "p_8_1",
"Section": "P8.0 - Privacy Criteria Related to Monitoring and Enforcement",
"Service": "aws",
"Soc_Type": "manual"
}
],
"Checks": []
}
]
}

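The SOC2 entries above share one shape: each requirement carries an Id, a Name, a Description, an Attributes list whose Soc_Type is "manual" or "automated", and a Checks list that is empty for manual criteria and names concrete Prowler checks for automated ones. A minimal Python sketch of consuming that mapping follows; the soc2_aws.json filename and the top-level "Requirements" key are assumptions inferred from the entries rendered here, not confirmed by this diff:

import json

# Load the compliance spec and print each automated requirement with the
# Prowler checks that back it; manual requirements have empty Checks lists.
with open("soc2_aws.json") as spec_file:  # assumed path
    spec = json.load(spec_file)

for requirement in spec["Requirements"]:  # assumed top-level key
    if requirement["Attributes"][0]["Soc_Type"] == "automated":
        print(f'{requirement["Id"]}: {", ".join(requirement["Checks"])}')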

@@ -1,5 +1,7 @@
### Account, Check and/or Region can be * to apply for all the cases
### Resources is a list that can have either Regex or Keywords:
### Account, Check and/or Region can be * to apply for all the cases.
### Resources and tags are lists that can have either Regex or Keywords.
### Tags is an optional list that matches on tuples of 'key=value' and are "ANDed" together.
### Use an alternation Regex to match one of multiple tags with "ORed" logic.
########################### ALLOWLIST EXAMPLE ###########################
Allowlist:
Accounts:
@@ -11,11 +13,19 @@ Allowlist:
Resources:
- "user-1" # Will ignore user-1 in check iam_user_hardware_mfa_enabled
- "user-2" # Will ignore user-2 in check iam_user_hardware_mfa_enabled
"ec2_*":
Regions:
- "*"
Resources:
- "*" # Will ignore every EC2 check in every account and region
"*":
Regions:
- "*"
Resources:
- "test" # Will ignore every resource containing the string "test" in every account and region
- "test"
Tags:
- "test=test" # Will ignore every resource containing the string "test" and the tags 'test=test' and
- "project=test|project=stage" # either of ('project=test' OR project=stage) in account 123456789012 and every region
"*":
Checks:
@@ -27,6 +37,14 @@ Allowlist:
- "ci-logs" # Will ignore bucket "ci-logs" AND ALSO bucket "ci-logs-replica" in specified check and regions
- "logs" # Will ignore EVERY BUCKET containing the string "logs" in specified check and regions
- "[[:alnum:]]+-logs" # Will ignore all buckets containing the terms ci-logs, qa-logs, etc. in specified check and regions
"*":
Regions:
- "*"
Resources:
- "*"
Tags:
- "environment=dev" # Will ignore every resource containing the tag 'environment=dev' in every account and region
# EXAMPLE: CONTROL TOWER (to migrate)
# When using Control Tower, guardrails prevent access to certain protected resources. The allowlist
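A hypothetical sketch of the tag semantics described at the top of this file: entries under Tags must all match (AND), while alternation inside a single entry provides OR. The tags_match helper below is illustrative only, not Prowler's implementation.

import re

def tags_match(allowed_tags, resource_tags):
    # Every allowlist entry must match at least one "key=value" pair of the
    # resource (AND across entries); "a=1|b=2" ORs within a single entry.
    pairs = [f"{key}={value}" for key, value in resource_tags.items()]
    return all(any(re.search(expr, pair) for pair in pairs) for expr in allowed_tags)

# tags_match(["test=test", "project=test|project=stage"],
#            {"test": "test", "project": "stage"})  -> True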

View File

@@ -3,25 +3,42 @@ import pathlib
from datetime import datetime, timezone
from os import getcwd
import requests
import yaml
from prowler.lib.logger import logger
timestamp = datetime.today()
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
prowler_version = "3.1.4"
prowler_version = "3.5.2"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
html_logo_img = "https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png"
square_logo_img = "https://user-images.githubusercontent.com/38561120/235905862-9ece5bd7-9aa3-4e48-807a-3a9035eb8bfb.png"
aws_logo = "https://user-images.githubusercontent.com/38561120/235953920-3e3fba08-0795-41dc-b480-9bea57db9f2e.png"
azure_logo = "https://user-images.githubusercontent.com/38561120/235927375-b23e2e0f-8932-49ec-b59c-d89f61c8041d.png"
gcp_logo = "https://user-images.githubusercontent.com/38561120/235928332-eb4accdc-c226-4391-8e97-6ca86a91cf50.png"
orange_color = "\033[38;5;208m"
banner_color = "\033[1;92m"
# Compliance
compliance_specification_dir = "./compliance"
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
compliance_aws_dir = f"{actual_directory}/../compliance/aws"
available_compliance_frameworks = []
with os.scandir(compliance_aws_dir) as files:
files = [
file.name
for file in files
if file.is_file()
and file.name.endswith(".json")
and available_compliance_frameworks.append(file.name.removesuffix(".json"))
]
# AWS services-regions matrix json
aws_services_json_file = "aws_regions_by_service.json"
# gcp_zones_json_file = "gcp_zones.json"
default_output_directory = getcwd() + "/output"
output_file_timestamp = timestamp.strftime("%Y%m%d%H%M%S")
@@ -33,6 +50,22 @@ html_file_suffix = ".html"
config_yaml = f"{pathlib.Path(os.path.dirname(os.path.realpath(__file__)))}/config.yaml"
def check_current_version():
try:
prowler_version_string = f"Prowler {prowler_version}"
release_response = requests.get(
"https://api.github.com/repos/prowler-cloud/prowler/tags"
)
latest_version = release_response.json()[0]["name"]
if latest_version != prowler_version:
return f"{prowler_version_string} (latest is {latest_version}, upgrade for the latest features)"
else:
return f"{prowler_version_string} (it is the latest version, yay!)"
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
return f"{prowler_version_string}"
def change_config_var(variable, value):
try:
with open(config_yaml) as f:
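A note on the directory scan above: the final and clause of the list comprehension calls available_compliance_frameworks.append(...) purely for its side effect (list.append returns None, so the throwaway files list stays empty). A side-effect-free sketch of the same result, assuming the same compliance_aws_dir layout:

from pathlib import Path

available_compliance_frameworks = [
    file.stem  # file name without the ".json" suffix
    for file in Path(compliance_aws_dir).glob("*.json")
    if file.is_file()
]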

View File

@@ -41,3 +41,17 @@ obsolete_lambda_runtimes:
"dotnetcore2.1",
"ruby2.5",
]
# AWS Organizations
# organizations_scp_check_deny_regions
# organizations_enabled_regions: [
# 'eu-central-1',
# 'eu-west-1',
# "us-east-1"
# ]
organizations_enabled_regions: []
# organizations_delegated_administrators
# organizations_trusted_delegated_administrators: [
# "12345678901"
# ]
organizations_trusted_delegated_administrators: []

View File

@@ -1,6 +1,8 @@
import functools
import importlib
import os
import re
import shutil
import sys
import traceback
from pkgutil import walk_packages
@@ -24,6 +26,7 @@ except KeyError:
except Exception:
sys.exit(1)
import prowler
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.common.models import Audit_Metadata
from prowler.providers.common.outputs import Provider_Output_Options
@@ -108,8 +111,8 @@ def exclude_services_to_run(
# Load checks from checklist.json
def parse_checks_from_file(input_file: str, provider: str) -> set:
checks_to_execute = set()
f = open_file(input_file)
json_file = parse_json_file(f)
with open_file(input_file) as f:
json_file = parse_json_file(f)
for check_name in json_file[provider]:
checks_to_execute.add(check_name)
@@ -117,16 +120,87 @@ def parse_checks_from_file(input_file: str, provider: str) -> set:
return checks_to_execute
# Load checks from custom folder
def parse_checks_from_folder(audit_info, input_folder: str, provider: str) -> int:
try:
imported_checks = 0
# Check if input folder is a S3 URI
if provider == "aws" and re.search(
"^s3://([^/]+)/(.*?([^/]+))/$", input_folder
):
bucket = input_folder.split("/")[2]
key = ("/").join(input_folder.split("/")[3:])
s3_resource = audit_info.audit_session.resource("s3")
bucket = s3_resource.Bucket(bucket)
for obj in bucket.objects.filter(Prefix=key):
if not os.path.exists(os.path.dirname(obj.key)):
os.makedirs(os.path.dirname(obj.key))
bucket.download_file(obj.key, obj.key)
input_folder = key
# Import custom checks by moving the checks folders to the corresponding services
with os.scandir(input_folder) as checks:
for check in checks:
if check.is_dir():
check_module = input_folder + "/" + check.name
# Copy checks to specific provider/service folder
check_service = check.name.split("_")[0]
prowler_dir = prowler.__path__
prowler_module = f"{prowler_dir[0]}/providers/{provider}/services/{check_service}/{check.name}"
if os.path.exists(prowler_module):
shutil.rmtree(prowler_module)
shutil.copytree(check_module, prowler_module)
imported_checks += 1
return imported_checks
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
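The S3 branch above hinges on the URI regex; a small, hypothetical helper showing how the bucket and key are derived (same split logic as the function):

import re

def split_s3_uri(uri: str):
    # "s3://bucket/prefix/" -> ("bucket", "prefix/"); None if it is not a
    # trailing-slash S3 URI of the shape parse_checks_from_folder expects.
    if re.search(r"^s3://([^/]+)/(.*?([^/]+))/$", uri):
        parts = uri.split("/")
        return parts[2], "/".join(parts[3:])
    return None

# split_s3_uri("s3://my-bucket/custom/checks/") -> ("my-bucket", "custom/checks/")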
# Load checks from custom folder
def remove_custom_checks_module(input_folder: str, provider: str):
# Check if input folder is a S3 URI
s3_uri = False
if provider == "aws" and re.search("^s3://([^/]+)/(.*?([^/]+))/$", input_folder):
input_folder = ("/").join(input_folder.split("/")[3:])
s3_uri = True
with os.scandir(input_folder) as checks:
for check in checks:
if check.is_dir():
# Remove imported checks
check_service = check.name.split("_")[0]
prowler_dir = prowler.__path__
prowler_module = f"{prowler_dir[0]}/providers/{provider}/services/{check_service}/{check.name}"
if os.path.exists(prowler_module):
shutil.rmtree(prowler_module)
# If S3 URI, remove the downloaded folders
if s3_uri and os.path.exists(input_folder):
shutil.rmtree(input_folder)
def list_services(provider: str) -> set:
available_services = set()
checks_tuple = recover_checks_from_provider(provider)
for _, check_path in checks_tuple:
# Format: /absolute_path/prowler/providers/{provider}/services/{service_name}/{check_name}
service_name = check_path.split("/")[-2]
if os.name == "nt":
service_name = check_path.split("\\")[-2]
else:
service_name = check_path.split("/")[-2]
available_services.add(service_name)
return sorted(available_services)
def list_checks(provider: str) -> set:
available_checks = set()
checks_tuple = recover_checks_from_provider(provider)
for check_name, _ in checks_tuple:
available_checks.add(check_name)
return sorted(available_checks)
def list_categories(provider: str, bulk_checks_metadata: dict) -> set:
available_categories = set()
for check in bulk_checks_metadata.values():
@@ -136,17 +210,28 @@ def list_categories(provider: str, bulk_checks_metadata: dict) -> set():
def print_categories(categories: set):
print(
f"There are {Fore.YELLOW}{len(categories)}{Style.RESET_ALL} available categories: \n"
)
categories_num = len(categories)
plural_string = f"There are {Fore.YELLOW}{categories_num}{Style.RESET_ALL} available categories: \n"
singular_string = f"There is {Fore.YELLOW}{categories_num}{Style.RESET_ALL} available category: \n"
message = plural_string if categories_num > 1 else singular_string
print(message)
for category in categories:
print(f"- {category}")
def print_services(service_list: set):
print(
f"There are {Fore.YELLOW}{len(service_list)}{Style.RESET_ALL} available services: \n"
services_num = len(service_list)
plural_string = (
f"There are {Fore.YELLOW}{services_num}{Style.RESET_ALL} available services: \n"
)
singular_string = (
f"There is {Fore.YELLOW}{services_num}{Style.RESET_ALL} available service: \n"
)
message = plural_string if services_num > 1 else singular_string
print(message)
for service in service_list:
print(f"- {service}")
@@ -154,9 +239,12 @@ def print_services(service_list: set):
def print_compliance_frameworks(
bulk_compliance_frameworks: dict,
):
print(
f"There are {Fore.YELLOW}{len(bulk_compliance_frameworks.keys())}{Style.RESET_ALL} available Compliance Frameworks: \n"
)
frameworks_num = len(bulk_compliance_frameworks.keys())
plural_string = f"There are {Fore.YELLOW}{frameworks_num}{Style.RESET_ALL} available Compliance Frameworks: \n"
singular_string = f"There is {Fore.YELLOW}{frameworks_num}{Style.RESET_ALL} available Compliance Framework: \n"
message = plural_string if frameworks_num > 1 else singular_string
print(message)
for framework in bulk_compliance_frameworks.keys():
print(f"\t- {Fore.YELLOW}{framework}{Style.RESET_ALL}")
@@ -165,17 +253,18 @@ def print_compliance_requirements(
bulk_compliance_frameworks: dict, compliance_frameworks: list
):
for compliance_framework in compliance_frameworks:
for compliance in bulk_compliance_frameworks.values():
# Workaround until we have more Compliance Frameworks
split_compliance = compliance_framework.split("_")
framework = split_compliance[0].upper()
version = split_compliance[1].upper()
provider = split_compliance[2].upper()
if framework in compliance.Framework and compliance.Version == version:
for key in bulk_compliance_frameworks.keys():
framework = bulk_compliance_frameworks[key].Framework
provider = bulk_compliance_frameworks[key].Provider
version = bulk_compliance_frameworks[key].Version
requirements = bulk_compliance_frameworks[key].Requirements
# We can list the compliance requirements for a given framework using the
# bulk_compliance_frameworks keys since they are the compliance specification file names
if compliance_framework == key:
print(
f"Listing {framework} {version} {provider} Compliance Requirements:\n"
)
for requirement in compliance.Requirements:
for requirement in requirements:
checks = ""
for check in requirement.Checks:
checks += f" {Fore.YELLOW}\t\t{check}\n{Style.RESET_ALL}"
@@ -200,9 +289,16 @@ def print_checks(
)
sys.exit(1)
print(
f"\nThere are {Fore.YELLOW}{len(check_list)}{Style.RESET_ALL} available checks.\n"
checks_num = len(check_list)
plural_string = (
f"\nThere are {Fore.YELLOW}{checks_num}{Style.RESET_ALL} available checks.\n"
)
singular_string = (
f"\nThere is {Fore.YELLOW}{checks_num}{Style.RESET_ALL} available check.\n"
)
message = plural_string if checks_num > 1 else singular_string
print(message)
# Parse checks from compliance frameworks specification
@@ -331,6 +427,22 @@ def execute_checks(
audit_progress=0,
)
if os.name != "nt":
try:
from resource import RLIMIT_NOFILE, getrlimit
# Check ulimit for the maximum system open files
soft, _ = getrlimit(RLIMIT_NOFILE)
if soft < 4096:
logger.warning(
f"Your session file descriptors limit ({soft} open files) is below 4096. We recommend to increase it to avoid errors. Solve it running this command `ulimit -n 4096`. For more info visit https://docs.prowler.cloud/en/latest/troubleshooting/"
)
except Exception as error:
logger.error("Unable to retrieve ulimit default settings")
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# Execution with the --only-logs flag
if audit_output_options.only_logs:
for check_name in checks_to_execute:
@@ -350,7 +462,6 @@ def execute_checks(
# If check does not exists in the provider or is from another provider
except ModuleNotFoundError:
logger.critical(
f"Check '{check_name}' was not found for the {provider.upper()} provider"
)
@@ -361,8 +472,13 @@ def execute_checks(
)
else:
# Default execution
checks_num = len(checks_to_execute)
plural_string = "checks"
singular_string = "check"
check_noun = plural_string if checks_num > 1 else singular_string
print(
f"{Style.BRIGHT}Executing {len(checks_to_execute)} checks, please wait...{Style.RESET_ALL}\n"
f"{Style.BRIGHT}Executing {checks_num} {check_noun}, please wait...{Style.RESET_ALL}\n"
)
with alive_bar(
total=len(checks_to_execute),
@@ -460,3 +576,21 @@ def update_audit_metadata(
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def recover_checks_from_service(service_list: list, provider: str) -> list:
checks = set()
for service in service_list:
modules = recover_checks_from_provider(provider, service)
if not modules:
logger.error(f"Service '{service}' does not have checks.")
else:
for check_module in modules:
# Recover check name and module name from import path
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check_module[0].split(".")[-1]
# If the service is present in the group list passed as parameters
# if service_name in group_list: checks_from_arn.add(check_name)
checks.add(check_name)
return checks
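The singular/plural message logic above repeats in print_categories, print_services, print_compliance_frameworks, and print_checks; a hypothetical helper that would collapse the pattern (note it also treats a count of 0 as plural, which the `> 1` checks above do not):

from colorama import Fore, Style

def count_message(noun: str, count: int) -> str:
    # "There is 1 available check" / "There are 2 available checks"
    verb, form = ("is", noun) if count == 1 else ("are", noun + "s")
    return f"There {verb} {Fore.YELLOW}{count}{Style.RESET_ALL} available {form}: \n"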

View File

@@ -2,9 +2,9 @@ from prowler.lib.check.check import (
parse_checks_from_compliance_framework,
parse_checks_from_file,
recover_checks_from_provider,
recover_checks_from_service,
)
from prowler.lib.logger import logger
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
# Generate the list of checks to execute
@@ -19,25 +19,10 @@ def load_checks_to_execute(
compliance_frameworks: list,
categories: set,
provider: str,
audit_info: AWS_Audit_Info,
) -> set:
"""Generate the list of checks to execute based on the cloud provider and input arguments specified"""
checks_to_execute = set()
# Handle if there are audit resources so only their services are executed
if audit_info.audit_resources:
service_list = []
for resource in audit_info.audit_resources:
service = resource.split(":")[2]
# Parse services when they are different in the ARNs
if service == "lambda":
service = "awslambda"
if service == "elasticloadbalancing":
service = "elb"
elif service == "logs":
service = "cloudwatch"
service_list.append(service)
# Handle if there are checks passed using -c/--checks
if check_list:
for check_name in check_list:
@@ -59,19 +44,7 @@ def load_checks_to_execute(
# Handle if there are services passed using -s/--services
elif service_list:
# Loaded dynamically from modules within provider/services
for service in service_list:
modules = recover_checks_from_provider(provider, service)
if not modules:
logger.error(f"Service '{service}' does not have checks.")
else:
for check_module in modules:
# Recover check name and module name from import path
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check_module[0].split(".")[-1]
# If the service is present in the group list passed as parameters
# if service_name in group_list: checks_to_execute.add(check_name)
checks_to_execute.add(check_name)
checks_to_execute = recover_checks_from_service(service_list, provider)
# Handle if there are compliance frameworks passed using --compliance
elif compliance_frameworks:
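The ARN-to-service renaming removed above mixes a bare if with an elif chain; a table-driven sketch expresses the same mapping more directly (hypothetical refactor, not the code Prowler adopted):

ARN_SERVICE_ALIASES = {
    "lambda": "awslambda",
    "elasticloadbalancing": "elb",
    "logs": "cloudwatch",
}

def service_from_arn(arn: str) -> str:
    # ARN format: arn:partition:service:region:account-id:resource
    service = arn.split(":")[2]
    return ARN_SERVICE_ALIASES.get(service, service)

# service_from_arn("arn:aws:lambda:us-east-1:123456789012:function:foo") -> "awslambda"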

View File

@@ -1,9 +1,12 @@
import sys
from pydantic import parse_obj_as
from prowler.lib.check.compliance_models import (
Compliance_Base_Model,
Compliance_Requirement,
)
from prowler.lib.check.models import Check_Metadata_Model
from prowler.lib.logger import logger
@@ -17,6 +20,7 @@ def update_checks_metadata_with_compliance(
for framework in bulk_compliance_frameworks.values():
for requirement in framework.Requirements:
compliance_requirements = []
# Verify if check is in the requirement
if check in requirement.Checks:
# Create the Compliance_Requirement
requirement = Compliance_Requirement(
@@ -34,12 +38,60 @@ def update_checks_metadata_with_compliance(
Framework=framework.Framework,
Provider=framework.Provider,
Version=framework.Version,
Description=framework.Description,
Requirements=compliance_requirements,
)
# Include the compliance framework for the check
check_compliance.append(compliance)
# Save it into the check's metadata
bulk_checks_metadata[check].Compliance = check_compliance
# Add requirements of Manual Controls
for framework in bulk_compliance_frameworks.values():
for requirement in framework.Requirements:
compliance_requirements = []
# Verify if requirement is Manual
if not requirement.Checks:
compliance_requirements.append(requirement)
# Create the Compliance_Model
compliance = Compliance_Base_Model(
Framework=framework.Framework,
Provider=framework.Provider,
Version=framework.Version,
Description=framework.Description,
Requirements=compliance_requirements,
)
# Include the compliance framework for the check
check_compliance.append(compliance)
# Create metadata for Manual Control
manual_check_metadata = {
"Provider": "aws",
"CheckID": "manual_check",
"CheckTitle": "Manual Check",
"CheckType": [],
"ServiceName": "",
"SubServiceName": "",
"ResourceIdTemplate": "",
"Severity": "",
"ResourceType": "",
"Description": "",
"Risk": "",
"RelatedUrl": "",
"Remediation": {
"Code": {"CLI": "", "NativeIaC": "", "Other": "", "Terraform": ""},
"Recommendation": {"Text": "", "Url": ""},
},
"Categories": [],
"Tags": {},
"DependsOn": [],
"RelatedTo": [],
"Notes": "",
}
manual_check = parse_obj_as(Check_Metadata_Model, manual_check_metadata)
# Save it into the check's metadata
bulk_checks_metadata["manual_check"] = manual_check
bulk_checks_metadata["manual_check"].Compliance = check_compliance
return bulk_checks_metadata
except Exception as e:
logger.critical(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
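parse_obj_as above validates the hand-written manual-control dict against Check_Metadata_Model; a self-contained miniature of that step, with a hypothetical stand-in model:

from pydantic import BaseModel, parse_obj_as

class _DemoMetadata(BaseModel):  # stand-in for Check_Metadata_Model
    Provider: str
    CheckID: str

demo = parse_obj_as(_DemoMetadata, {"Provider": "aws", "CheckID": "manual_check"})
# Missing or mistyped fields raise pydantic.ValidationError here.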

View File

@@ -1,8 +1,8 @@
import sys
from enum import Enum
from typing import Any, List, Optional, Union
from typing import Optional, Union
from pydantic import BaseModel, ValidationError
from pydantic import BaseModel, ValidationError, root_validator
from prowler.lib.logger import logger
@@ -11,10 +11,10 @@ from prowler.lib.logger import logger
class ENS_Requirements_Nivel(str, Enum):
"""ENS V3 Requirements Level"""
opcional = "opcional"
bajo = "bajo"
medio = "medio"
alto = "alto"
pytec = "pytec"
class ENS_Requirements_Dimensiones(str, Enum):
@@ -27,35 +27,101 @@ class ENS_Requirements_Dimensiones(str, Enum):
disponibilidad = "disponibilidad"
class ENS_Requirements_Tipos(str, Enum):
"""ENS Requirements Tipos"""
refuerzo = "refuerzo"
requisito = "requisito"
recomendacion = "recomendacion"
medida = "medida"
class ENS_Requirements(BaseModel):
"""ENS V3 Framework Requirements"""
IdGrupoControl: str
Marco: str
Categoria: str
Descripcion_Control: str
Nivel: list[ENS_Requirements_Nivel]
DescripcionControl: str
Tipo: ENS_Requirements_Tipos
Nivel: ENS_Requirements_Nivel
Dimensiones: list[ENS_Requirements_Dimensiones]
# Generic Compliance Requirements
class Generic_Compliance_Requirements(BaseModel):
"""Generic Compliance Requirements"""
ItemId: str
Section: Optional[str]
SubSection: Optional[str]
SubGroup: Optional[str]
Service: str
Soc_Type: Optional[str]
class CIS_Requirements_Profile(str):
"""CIS Requirements Profile"""
Level_1 = "Level 1"
Level_2 = "Level 2"
class CIS_Requirements_AssessmentStatus(str):
"""CIS Requirements Assessment Status"""
Manual = "Manual"
Automated = "Automated"
# CIS Requirements
class CIS_Requirements(BaseModel):
"""CIS Requirements"""
Section: str
Profile: CIS_Requirements_Profile
AssessmentStatus: CIS_Requirements_AssessmentStatus
Description: str
RationaleStatement: str
ImpactStatement: str
RemediationProcedure: str
AuditProcedure: str
AdditionalInformation: str
References: str
# Base Compliance Model
class Compliance_Requirement(BaseModel):
"""Compliance_Requirement holds the base model for every requirement within a compliance framework"""
Id: str
Description: str
Attributes: list[Union[ENS_Requirements, Any]]
Checks: List[str]
Attributes: list[
Union[CIS_Requirements, ENS_Requirements, Generic_Compliance_Requirements]
]
Checks: list[str]
class Compliance_Base_Model(BaseModel):
"""Compliance_Base_Model holds the base model for every compliance framework"""
Framework: str
Provider: Optional[str]
Version: str
Provider: str
Version: Optional[str]
Description: str
Requirements: list[Compliance_Requirement]
@root_validator(pre=True)
# noqa: F841 - since vulture raises unused variable 'cls'
def framework_and_provider_must_not_be_empty(cls, values): # noqa: F841
framework, provider = (
values.get("Framework"),
values.get("Provider"),
)
if framework == "" or provider == "":
raise ValueError("Framework or Provider must not be empty")
return values
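A runnable miniature of the root_validator above, using a stripped-down stand-in for Compliance_Base_Model; pydantic wraps the ValueError into a ValidationError:

from pydantic import BaseModel, ValidationError, root_validator

class _Demo(BaseModel):
    Framework: str
    Provider: str

    @root_validator(pre=True)
    def not_empty(cls, values):
        if values.get("Framework") == "" or values.get("Provider") == "":
            raise ValueError("Framework or Provider must not be empty")
        return values

try:
    _Demo(Framework="", Provider="AWS")
except ValidationError as error:
    print(error)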
# Testing Pending
def load_compliance_framework(

View File

@@ -48,7 +48,6 @@ class Check_Metadata_Model(BaseModel):
RelatedUrl: str
Remediation: Remediation
Categories: list[str]
Tags: dict
DependsOn: list[str]
RelatedTo: list[str]
Notes: str
@@ -129,6 +128,23 @@ class Check_Report_Azure(Check_Report):
self.subscription = ""
@dataclass
class Check_Report_GCP(Check_Report):
"""Contains the GCP Check's finding information."""
resource_name: str
resource_id: str
project_id: str
location: str
def __init__(self, metadata):
super().__init__(metadata)
self.resource_name = ""
self.resource_id = ""
self.project_id = ""
self.location = ""
# Testing Pending
def load_check_metadata(metadata_file: str) -> Check_Metadata_Model:
"""load_check_metadata loads and parse a Check's metadata file"""

View File

@@ -2,7 +2,11 @@ import argparse
import sys
from argparse import RawTextHelpFormatter
from prowler.config.config import default_output_directory, prowler_version
from prowler.config.config import (
available_compliance_frameworks,
check_current_version,
default_output_directory,
)
from prowler.providers.aws.aws_provider import get_aws_available_regions
from prowler.providers.aws.lib.arn.arn import is_valid_arn
@@ -24,7 +28,6 @@ class ProwlerArgumentParser:
epilog="""
To see the different available options on a specific provider, run:
prowler {provider} -h|--help
Detailed documentation at https://docs.prowler.cloud
""",
)
@@ -32,8 +35,7 @@ Detailed documentation at https://docs.prowler.cloud
self.parser.add_argument(
"-v",
"--version",
action="version",
version=f"Prowler {prowler_version}",
action="store_true",
help="show Prowler version",
)
# Common arguments parser
@@ -54,6 +56,7 @@ Detailed documentation at https://docs.prowler.cloud
# Init Providers Arguments
self.__init_aws_parser__()
self.__init_azure_parser__()
self.__init_gcp_parser__()
def parse(self, args=None) -> argparse.Namespace:
"""
@@ -63,6 +66,10 @@ Detailed documentation at https://docs.prowler.cloud
if args:
sys.argv = args
if len(sys.argv) == 2 and sys.argv[1] in ("-v", "--version"):
print(check_current_version())
sys.exit(0)
# Set AWS as the default provider if no provider is supplied
if len(sys.argv) == 1:
sys.argv = self.__set_default_provider__(sys.argv)
@@ -147,6 +154,11 @@ Detailed documentation at https://docs.prowler.cloud
common_outputs_parser.add_argument(
"-b", "--no-banner", action="store_true", help="Hide Prowler banner"
)
common_outputs_parser.add_argument(
"--slack",
action="store_true",
help="Send a summary of the execution with a Slack APP in your channel. Environment variables SLACK_API_TOKEN and SLACK_CHANNEL_ID are required (see more in https://docs.prowler.cloud/en/latest/tutorials/integrations/#slack).",
)
def __init_logging_parser__(self):
# Logging Options
@@ -212,7 +224,7 @@ Detailed documentation at https://docs.prowler.cloud
"--compliance",
nargs="+",
help="Compliance Framework to check against for. The format should be the following: framework_version_provider (e.g.: ens_rd2022_aws)",
choices=["ens_rd2022_aws", "cis_1.4_aws", "cis_1.5_aws"],
choices=available_compliance_frameworks,
)
group.add_argument(
"--categories",
@@ -221,6 +233,12 @@ Detailed documentation at https://docs.prowler.cloud
default=[],
# Pending validate choices
)
common_checks_parser.add_argument(
"-x",
"--checks-folder",
nargs="?",
help="Specify external directory with custom checks (each check must have a folder with the required files, see more in https://docs.prowler.cloud/en/latest/tutorials/misc/#custom-checks).",
)
def __init_list_checks_parser__(self):
# List checks options
@@ -241,7 +259,7 @@ Detailed documentation at https://docs.prowler.cloud
"--list-compliance-requirements",
nargs="+",
help="List compliance requirements for a given requirement",
choices=["ens_rd2022_aws", "cis_1.4_aws", "cis_1.5_aws"],
choices=available_compliance_frameworks,
)
list_group.add_argument(
"--list-categories",
@@ -313,6 +331,11 @@ Detailed documentation at https://docs.prowler.cloud
action="store_true",
help="Send check output to AWS Security Hub",
)
aws_security_hub_subparser.add_argument(
"--skip-sh-update",
action="store_true",
help="Skip updating previous findings of Prowler in Security Hub",
)
# AWS Quick Inventory
aws_quick_inventory_subparser = aws_parser.add_argument_group("Quick Inventory")
aws_quick_inventory_subparser.add_argument(
@@ -376,6 +399,16 @@ Detailed documentation at https://docs.prowler.cloud
help="Scan only resources with specific AWS Resource ARNs, e.g., arn:aws:iam::012345678910:user/test arn:aws:ec2:us-east-1:123456789012:vpc/vpc-12345678",
)
# Boto3 Config
boto3_config_subparser = aws_parser.add_argument_group("Boto3 Config")
boto3_config_subparser.add_argument(
"--aws-retries-max-attempts",
nargs="?",
default=None,
type=int,
help="Set the maximum attemps for the Boto3 standard retrier config (Default: 3)",
)
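A hedged sketch of how such a flag is typically consumed; this is the standard Boto3 retry configuration, not necessarily Prowler's exact wiring:

from botocore.config import Config

retries_config = Config(retries={"max_attempts": 5, "mode": "standard"})
# session.client("ec2", config=retries_config)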
def __init_azure_parser__(self):
"""Init the Azure Provider CLI parser"""
azure_parser = self.subparsers.add_parser(
@@ -412,3 +445,18 @@ Detailed documentation at https://docs.prowler.cloud
default=[],
help="Azure subscription ids to be scanned by prowler",
)
def __init_gcp_parser__(self):
"""Init the GCP Provider CLI parser"""
gcp_parser = self.subparsers.add_parser(
"gcp", parents=[self.common_providers_parser], help="GCP Provider"
)
# Authentication Modes
gcp_auth_subparser = gcp_parser.add_argument_group("Authentication Modes")
gcp_auth_modes_group = gcp_auth_subparser.add_mutually_exclusive_group()
gcp_auth_modes_group.add_argument(
"--credentials-file",
nargs="?",
metavar="FILE_PATH",
help="Authenticate using a Google Service Account Application Credentials JSON file",
)

View File

@@ -4,102 +4,72 @@ from csv import DictWriter
from colorama import Fore, Style
from tabulate import tabulate
from prowler.config.config import timestamp
from prowler.config.config import orange_color, timestamp
from prowler.lib.check.models import Check_Report
from prowler.lib.logger import logger
from prowler.lib.outputs.models import (
Check_Output_CSV_CIS,
Check_Output_CSV_ENS_RD2022,
Check_Output_CSV_Generic_Compliance,
generate_csv_fields,
)
def add_manual_controls(output_options, audit_info, file_descriptors):
try:
# Check if MANUAL control was already added to output
if "manual_check" in output_options.bulk_checks_metadata:
manual_finding = Check_Report(
output_options.bulk_checks_metadata["manual_check"].json()
)
manual_finding.status = "INFO"
manual_finding.status_extended = "Manual check"
manual_finding.resource_id = "manual_check"
manual_finding.region = ""
fill_compliance(
output_options, manual_finding, audit_info, file_descriptors
)
del output_options.bulk_checks_metadata["manual_check"]
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def fill_compliance(output_options, finding, audit_info, file_descriptors):
# We have to retrieve all the check's compliance requirements
check_compliance = output_options.bulk_checks_metadata[
finding.check_metadata.CheckID
].Compliance
csv_header = compliance_row = compliance_output = None
for compliance in check_compliance:
if (
compliance.Framework == "ENS"
and compliance.Version == "RD2022"
and "ens_rd2022_aws" in output_options.output_modes
):
compliance_output = "ens_rd2022_aws"
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_ENS_RD2022(
Provider=finding.check_metadata.Provider,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=timestamp.isoformat(),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_IdGrupoControl=attribute.get(
"IdGrupoControl"
),
Requirements_Attributes_Marco=attribute.get("Marco"),
Requirements_Attributes_Categoria=attribute.get("Categoria"),
Requirements_Attributes_DescripcionControl=attribute.get(
"DescripcionControl"
),
Requirements_Attributes_Nivel=attribute.get("Nivel"),
Requirements_Attributes_Tipo=attribute.get("Tipo"),
Requirements_Attributes_Dimensiones=",".join(
attribute.get("Dimensiones")
),
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(Check_Output_CSV_ENS_RD2022)
elif compliance.Framework == "CIS-AWS" and "cis" in str(
output_options.output_modes
):
# Only with the version of CIS that was selected
if "cis_" + compliance.Version + "_aws" in str(output_options.output_modes):
compliance_output = "cis_" + compliance.Version + "_aws"
try:
# We have to retrieve all the check's compliance requirements
check_compliance = output_options.bulk_checks_metadata[
finding.check_metadata.CheckID
].Compliance
for compliance in check_compliance:
csv_header = compliance_row = compliance_output = None
if (
compliance.Framework == "ENS"
and compliance.Version == "RD2022"
and "ens_rd2022_aws" in output_options.output_modes
):
compliance_output = "ens_rd2022_aws"
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_CIS(
compliance_row = Check_Output_CSV_ENS_RD2022(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=timestamp.isoformat(),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Section=attribute.get("Section"),
Requirements_Attributes_Profile=attribute.get("Profile"),
Requirements_Attributes_AssessmentStatus=attribute.get(
"AssessmentStatus"
),
Requirements_Attributes_Description=attribute.get(
"Description"
),
Requirements_Attributes_RationaleStatement=attribute.get(
"RationaleStatement"
),
Requirements_Attributes_ImpactStatement=attribute.get(
"ImpactStatement"
),
Requirements_Attributes_RemediationProcedure=attribute.get(
"RemediationProcedure"
),
Requirements_Attributes_AuditProcedure=attribute.get(
"AuditProcedure"
),
Requirements_Attributes_AdditionalInformation=attribute.get(
"AdditionalInformation"
),
Requirements_Attributes_References=attribute.get(
"References"
Requirements_Attributes_IdGrupoControl=attribute.IdGrupoControl,
Requirements_Attributes_Marco=attribute.Marco,
Requirements_Attributes_Categoria=attribute.Categoria,
Requirements_Attributes_DescripcionControl=attribute.DescripcionControl,
Requirements_Attributes_Nivel=attribute.Nivel,
Requirements_Attributes_Tipo=attribute.Tipo,
Requirements_Attributes_Dimensiones=",".join(
attribute.Dimensiones
),
Status=finding.status,
StatusExtended=finding.status_extended,
@@ -107,15 +77,93 @@ def fill_compliance(output_options, finding, audit_info, file_descriptors):
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(Check_Output_CSV_CIS)
csv_header = generate_csv_fields(Check_Output_CSV_ENS_RD2022)
if compliance_row:
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
csv_writer.writerow(compliance_row.__dict__)
elif compliance.Framework == "CIS" and "cis_" in str(
output_options.output_modes
):
# Only with the version of CIS that was selected
if "cis_" + compliance.Version + "_aws" in str(
output_options.output_modes
):
compliance_output = "cis_" + compliance.Version + "_aws"
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_CIS(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=timestamp.isoformat(),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_Profile=attribute.Profile,
Requirements_Attributes_AssessmentStatus=attribute.AssessmentStatus,
Requirements_Attributes_Description=attribute.Description,
Requirements_Attributes_RationaleStatement=attribute.RationaleStatement,
Requirements_Attributes_ImpactStatement=attribute.ImpactStatement,
Requirements_Attributes_RemediationProcedure=attribute.RemediationProcedure,
Requirements_Attributes_AuditProcedure=attribute.AuditProcedure,
Requirements_Attributes_AdditionalInformation=attribute.AdditionalInformation,
Requirements_Attributes_References=attribute.References,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(Check_Output_CSV_CIS)
else:
compliance_output = compliance.Framework
if compliance.Version != "":
compliance_output += "_" + compliance.Version
if compliance.Provider != "":
compliance_output += "_" + compliance.Provider
compliance_output = compliance_output.lower().replace("-", "_")
if compliance_output in output_options.output_modes:
for requirement in compliance.Requirements:
requirement_description = requirement.Description
requirement_id = requirement.Id
for attribute in requirement.Attributes:
compliance_row = Check_Output_CSV_Generic_Compliance(
Provider=finding.check_metadata.Provider,
Description=compliance.Description,
AccountId=audit_info.audited_account,
Region=finding.region,
AssessmentDate=timestamp.isoformat(),
Requirements_Id=requirement_id,
Requirements_Description=requirement_description,
Requirements_Attributes_Section=attribute.Section,
Requirements_Attributes_SubSection=attribute.SubSection,
Requirements_Attributes_SubGroup=attribute.SubGroup,
Requirements_Attributes_Service=attribute.Service,
Requirements_Attributes_Soc_Type=attribute.Soc_Type,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
CheckId=finding.check_metadata.CheckID,
)
csv_header = generate_csv_fields(
Check_Output_CSV_Generic_Compliance
)
if compliance_row:
csv_writer = DictWriter(
file_descriptors[compliance_output],
fieldnames=csv_header,
delimiter=";",
)
csv_writer.writerow(compliance_row.__dict__)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
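The generic branch above derives the output name from the framework triplet; a compact sketch of the same naming rule:

def compliance_output_name(framework: str, version: str, provider: str) -> str:
    # ("CIS", "1.5", "AWS") -> "cis_1.5_aws"; ("SOC2", "", "AWS") -> "soc2_aws"
    name = framework
    if version != "":
        name += "_" + version
    if provider != "":
        name += "_" + provider
    return name.lower().replace("-", "_")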
def display_compliance_table(
@@ -126,16 +174,16 @@ def display_compliance_table(
output_directory: str,
):
try:
if "ens_rd2022_aws" in compliance_framework:
if "ens_rd2022_aws" == compliance_framework:
marcos = {}
ens_compliance_table = {
"Proveedor": [],
"Marco/Categoria": [],
"Estado": [],
"PYTEC": [],
"Alto": [],
"Medio": [],
"Bajo": [],
"Opcional": [],
}
pass_count = fail_count = 0
for finding in findings:
@@ -153,13 +201,13 @@ def display_compliance_table(
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
marco_categoria = (
f"{attribute['Marco']}/{attribute['Categoria']}"
f"{attribute.Marco}/{attribute.Categoria}"
)
# Check if Marco/Categoria exists
if marco_categoria not in marcos:
marcos[marco_categoria] = {
"Estado": f"{Fore.GREEN}CUMPLE{Style.RESET_ALL}",
"Pytec": 0,
"Opcional": 0,
"Alto": 0,
"Medio": 0,
"Bajo": 0,
@@ -171,13 +219,13 @@ def display_compliance_table(
] = f"{Fore.RED}NO CUMPLE{Style.RESET_ALL}"
elif finding.status == "PASS":
pass_count += 1
if attribute["Nivel"] == "pytec":
marcos[marco_categoria]["Pytec"] += 1
elif attribute["Nivel"] == "alto":
if attribute.Nivel == "opcional":
marcos[marco_categoria]["Opcional"] += 1
elif attribute.Nivel == "alto":
marcos[marco_categoria]["Alto"] += 1
elif attribute["Nivel"] == "medio":
elif attribute.Nivel == "medio":
marcos[marco_categoria]["Medio"] += 1
elif attribute["Nivel"] == "bajo":
elif attribute.Nivel == "bajo":
marcos[marco_categoria]["Bajo"] += 1
# Add results to table
@@ -185,17 +233,17 @@ def display_compliance_table(
ens_compliance_table["Proveedor"].append("aws")
ens_compliance_table["Marco/Categoria"].append(marco)
ens_compliance_table["Estado"].append(marcos[marco]["Estado"])
ens_compliance_table["PYTEC"].append(
f"{Fore.LIGHTRED_EX}{marcos[marco]['Pytec']}{Style.RESET_ALL}"
ens_compliance_table["Opcional"].append(
f"{Fore.BLUE}{marcos[marco]['Opcional']}{Style.RESET_ALL}"
)
ens_compliance_table["Alto"].append(
f"{Fore.RED}{marcos[marco]['Alto']}{Style.RESET_ALL}"
f"{Fore.LIGHTRED_EX}{marcos[marco]['Alto']}{Style.RESET_ALL}"
)
ens_compliance_table["Medio"].append(
f"{Fore.YELLOW}{marcos[marco]['Medio']}{Style.RESET_ALL}"
f"{orange_color}{marcos[marco]['Medio']}{Style.RESET_ALL}"
)
ens_compliance_table["Bajo"].append(
f"{Fore.BLUE}{marcos[marco]['Bajo']}{Style.RESET_ALL}"
f"{Fore.YELLOW}{marcos[marco]['Bajo']}{Style.RESET_ALL}"
)
if fail_count + pass_count < 1:
print(
@@ -223,11 +271,11 @@ def display_compliance_table(
print(
f"{Style.BRIGHT}* Solo aparece el Marco/Categoria que contiene resultados.{Style.RESET_ALL}"
)
print("\nResultados detallados en:")
print(f"\nResultados detallados de {compliance_fm} en:")
print(
f" - CSV: {output_directory}/{output_filename}_{compliance_framework[0]}.csv\n"
f" - CSV: {output_directory}/{output_filename}_{compliance_framework}.csv\n"
)
if "cis" in str(compliance_framework):
elif "cis_1." in compliance_framework:
sections = {}
cis_compliance_table = {
"Provider": [],
@@ -240,14 +288,15 @@ def display_compliance_table(
check = bulk_checks_metadata[finding.check_metadata.CheckID]
check_compliances = check.Compliance
for compliance in check_compliances:
if compliance.Framework == "CIS-AWS" and compliance.Version in str(
compliance_framework
if (
compliance.Framework == "CIS"
and compliance.Version in compliance_framework
):
compliance_version = compliance.Version
compliance_fm = compliance.Framework
for requirement in compliance.Requirements:
for attribute in requirement.Attributes:
section = attribute["Section"]
section = attribute.Section
# Check if Section exists
if section not in sections:
sections[section] = {
@@ -259,12 +308,12 @@ def display_compliance_table(
fail_count += 1
elif finding.status == "PASS":
pass_count += 1
if attribute["Profile"] == "Level 1":
if attribute.Profile == "Level 1":
if finding.status == "FAIL":
sections[section]["Level 1"]["FAIL"] += 1
else:
sections[section]["Level 1"]["PASS"] += 1
elif attribute["Profile"] == "Level 2":
elif attribute.Profile == "Level 2":
if finding.status == "FAIL":
sections[section]["Level 2"]["FAIL"] += 1
else:
@@ -291,7 +340,7 @@ def display_compliance_table(
cis_compliance_table["Level 2"].append(
f"{Fore.GREEN}PASS({sections[section]['Level 2']['PASS']}){Style.RESET_ALL}"
)
if fail_count + pass_count < 0:
if fail_count + pass_count < 1:
print(
f"\n {Style.BRIGHT}There are no resources for {Fore.YELLOW}{compliance_fm}-{compliance_version}{Style.RESET_ALL}.\n"
)
@@ -317,10 +366,15 @@ def display_compliance_table(
print(
f"{Style.BRIGHT}* Only sections containing results appear.{Style.RESET_ALL}"
)
print("\nDetailed Results in:")
print(f"\nDetailed results of {compliance_fm} are in:")
print(
f" - CSV: {output_directory}/{output_filename}_{compliance_framework[0]}.csv\n"
f" - CSV: {output_directory}/{output_filename}_{compliance_framework}.csv\n"
)
else:
print(f"\nDetailed results of {compliance_framework.upper()} are in:")
print(
f" - CSV: {output_directory}/{output_filename}_{compliance_framework}.csv\n"
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}:{error.__traceback__.tb_lineno} -- {error}"

View File

@@ -15,11 +15,14 @@ from prowler.lib.outputs.models import (
Azure_Check_Output_CSV,
Check_Output_CSV_CIS,
Check_Output_CSV_ENS_RD2022,
Check_Output_CSV_Generic_Compliance,
Gcp_Check_Output_CSV,
generate_csv_fields,
)
from prowler.lib.utils.utils import file_exists, open_file
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info
def initialize_file_descriptor(
@@ -41,18 +44,17 @@ def initialize_file_descriptor(
"a",
)
if output_mode in ("csv", "ens_rd2022_aws", "cis_1.5_aws", "cis_1.4_aws"):
if output_mode in ("json", "json-asff"):
file_descriptor.write("[")
elif "html" in output_mode:
add_html_header(file_descriptor, audit_info)
else:
# Format is the class model of the CSV format to print the headers
csv_header = [x.upper() for x in generate_csv_fields(format)]
csv_writer = DictWriter(
file_descriptor, fieldnames=csv_header, delimiter=";"
)
csv_writer.writeheader()
if output_mode in ("json", "json-asff"):
file_descriptor.write("[")
if "html" in output_mode:
add_html_header(file_descriptor, audit_info)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
@@ -82,17 +84,30 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
audit_info,
Azure_Check_Output_CSV,
)
if isinstance(audit_info, GCP_Audit_Info):
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Gcp_Check_Output_CSV,
)
file_descriptors.update({output_mode: file_descriptor})
if output_mode == "json":
elif output_mode == "json":
filename = f"{output_directory}/{output_filename}{json_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename, output_mode, audit_info
)
file_descriptors.update({output_mode: file_descriptor})
if isinstance(audit_info, AWS_Audit_Info):
elif output_mode == "html":
filename = f"{output_directory}/{output_filename}{html_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename, output_mode, audit_info
)
file_descriptors.update({output_mode: file_descriptor})
elif isinstance(audit_info, AWS_Audit_Info):
if output_mode == "json-asff":
filename = f"{output_directory}/{output_filename}{json_asff_file_suffix}"
file_descriptor = initialize_file_descriptor(
@@ -100,16 +115,7 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
)
file_descriptors.update({output_mode: file_descriptor})
if output_mode == "html":
filename = (
f"{output_directory}/{output_filename}{html_file_suffix}"
)
file_descriptor = initialize_file_descriptor(
filename, output_mode, audit_info
)
file_descriptors.update({output_mode: file_descriptor})
if output_mode == "ens_rd2022_aws":
elif output_mode == "ens_rd2022_aws":
filename = f"{output_directory}/{output_filename}_ens_rd2022_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
@@ -119,19 +125,31 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
)
file_descriptors.update({output_mode: file_descriptor})
if output_mode == "cis_1.5_aws":
elif output_mode == "cis_1.5_aws":
filename = f"{output_directory}/{output_filename}_cis_1.5_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename, output_mode, audit_info, Check_Output_CSV_CIS
)
file_descriptors.update({output_mode: file_descriptor})
if output_mode == "cis_1.4_aws":
elif output_mode == "cis_1.4_aws":
filename = f"{output_directory}/{output_filename}_cis_1.4_aws{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename, output_mode, audit_info, Check_Output_CSV_CIS
)
file_descriptors.update({output_mode: file_descriptor})
else:
# Generic Compliance framework
filename = f"{output_directory}/{output_filename}_{output_mode}{csv_file_suffix}"
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Check_Output_CSV_Generic_Compliance,
)
file_descriptors.update({output_mode: file_descriptor})
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -1,3 +1,4 @@
import importlib
import sys
from os import path
@@ -8,20 +9,22 @@ from prowler.config.config import (
prowler_version,
timestamp,
)
from prowler.lib.check.models import Check_Report_AWS, Check_Report_GCP
from prowler.lib.logger import logger
from prowler.lib.outputs.models import (
get_check_compliance,
parse_html_string,
unroll_dict,
unroll_tags,
)
from prowler.lib.utils.utils import open_file
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info
def add_html_header(file_descriptor, audit_info):
try:
if not audit_info.profile:
audit_info.profile = "ENV"
if isinstance(audit_info.audited_regions, list):
audited_regions = " ".join(audit_info.audited_regions)
elif not audit_info.audited_regions:
audited_regions = "All Regions"
else:
audited_regions = audit_info.audited_regions
file_descriptor.write(
"""
<!DOCTYPE html>
@@ -108,51 +111,9 @@ def add_html_header(file_descriptor, audit_info):
</li>
</ul>
</div>
</div>
<div class="col-md-2">
<div class="card">
<div class="card-header">
AWS Assessment Summary
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>AWS Account:</b> """
+ audit_info.audited_account
</div> """
+ get_assessment_summary(audit_info)
+ """
</li>
<li class="list-group-item">
<b>AWS-CLI Profile:</b> """
+ audit_info.profile
+ """
</li>
<li class="list-group-item">
<b>Audited Regions:</b> """
+ audited_regions
+ """
</li>
</ul>
</div>
</div>
<div class="col-md-4">
<div class="card">
<div class="card-header">
AWS Credentials
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>User Id:</b> """
+ audit_info.audited_user_id
+ """
</li>
<li class="list-group-item">
<b>Caller Identity ARN:</b>
"""
+ audit_info.audited_identity_arn
+ """
</li>
</ul>
</div>
</div>
<div class="col-md-2">
<div class="card">
<div class="card-header">
@@ -183,53 +144,60 @@ def add_html_header(file_descriptor, audit_info):
<tr>
<th scope="col">Status</th>
<th scope="col">Severity</th>
<th style="width:5%" scope="col">Service Name</th>
<th scope="col">Service Name</th>
<th scope="col">Region</th>
<th style="width:20%" scope="col">Check ID</th>
<th style="width:20%" scope="col">Check Title</th>
<th scope="col">Resource ID</th>
<th style="width:15%" scope="col">Check Description</th>
<th scope="col">Check ID</th>
<th scope="col">Resource Tags</th>
<th scope="col">Status Extended</th>
<th scope="col">Risk</th>
<th scope="col">Recomendation</th>
<th style="width:5%" scope="col">Recomendation URL</th>
<th scope="col">Compliance</th>
</tr>
</thead>
<tbody>
"""
)
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def fill_html(file_descriptor, finding):
row_class = "p-3 mb-2 bg-success-custom"
if finding.status == "INFO":
row_class = "table-info"
elif finding.status == "FAIL":
row_class = "table-danger"
elif finding.status == "WARNING":
row_class = "table-warning"
file_descriptor.write(
f"""
<tr class="{row_class}">
<td>{finding.status}</td>
<td>{finding.check_metadata.Severity}</td>
<td>{finding.check_metadata.ServiceName}</td>
<td>{finding.region}</td>
<td>{finding.check_metadata.CheckTitle}</td>
<td>{finding.resource_id.replace("<", "&lt;").replace(">", "&gt;").replace("_", "<wbr>_")}</td>
<td>{finding.check_metadata.Description}</td>
<td>{finding.check_metadata.CheckID.replace("_", "<wbr>_")}</td>
<td>{finding.status_extended.replace("<", "&lt;").replace(">", "&gt;").replace("_", "<wbr>_")}</td>
<td><p class="show-read-more">{finding.check_metadata.Risk}</p></td>
<td><p class="show-read-more">{finding.check_metadata.Remediation.Recommendation.Text}</p></td>
<td><a class="read-more" href="{finding.check_metadata.Remediation.Recommendation.Url}"><i class="fas fa-external-link-alt"></i></a></td>
</tr>
"""
)
def fill_html(file_descriptor, finding, output_options):
try:
row_class = "p-3 mb-2 bg-success-custom"
if finding.status == "INFO":
row_class = "table-info"
elif finding.status == "FAIL":
row_class = "table-danger"
elif finding.status == "WARNING":
row_class = "table-warning"
file_descriptor.write(
f"""
<tr class="{row_class}">
<td>{finding.status}</td>
<td>{finding.check_metadata.Severity}</td>
<td>{finding.check_metadata.ServiceName}</td>
<td>{finding.location if isinstance(finding, Check_Report_GCP) else finding.region if isinstance(finding, Check_Report_AWS) else ""}</td>
<td>{finding.check_metadata.CheckID.replace("_", "<wbr>_")}</td>
<td>{finding.check_metadata.CheckTitle}</td>
<td>{finding.resource_id.replace("<", "&lt;").replace(">", "&gt;").replace("_", "<wbr>_")}</td>
<td>{parse_html_string(unroll_tags(finding.resource_tags))}</td>
<td>{finding.status_extended.replace("<", "&lt;").replace(">", "&gt;").replace("_", "<wbr>_")}</td>
<td><p class="show-read-more">{finding.check_metadata.Risk}</p></td>
<td><p class="show-read-more">{finding.check_metadata.Remediation.Recommendation.Text}</p> <a class="read-more" href="{finding.check_metadata.Remediation.Recommendation.Url}"><i class="fas fa-external-link-alt"></i></a></td>
<td><p class="show-read-more">{parse_html_string(unroll_dict(get_check_compliance(finding, finding.check_metadata.Provider, output_options)))}</p></td>
</tr>
"""
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def fill_html_overview_statistics(stats, output_filename, output_directory):
@@ -365,3 +333,207 @@ def add_html_footer(output_filename, output_directory):
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def get_aws_html_assessment_summary(audit_info):
try:
if isinstance(audit_info, AWS_Audit_Info):
if not audit_info.profile:
audit_info.profile = "ENV"
if isinstance(audit_info.audited_regions, list):
audited_regions = " ".join(audit_info.audited_regions)
elif not audit_info.audited_regions:
audited_regions = "All Regions"
else:
audited_regions = audit_info.audited_regions
return (
"""
<div class="col-md-2">
<div class="card">
<div class="card-header">
AWS Assessment Summary
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>AWS Account:</b> """
+ audit_info.audited_account
+ """
</li>
<li class="list-group-item">
<b>AWS-CLI Profile:</b> """
+ audit_info.profile
+ """
</li>
<li class="list-group-item">
<b>Audited Regions:</b> """
+ audited_regions
+ """
</li>
</ul>
</div>
</div>
<div class="col-md-4">
<div class="card">
<div class="card-header">
AWS Credentials
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>User Id:</b> """
+ audit_info.audited_user_id
+ """
</li>
<li class="list-group-item">
<b>Caller Identity ARN:</b> """
+ audit_info.audited_identity_arn
+ """
</li>
</ul>
</div>
</div>
"""
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def get_azure_html_assessment_summary(audit_info):
try:
if isinstance(audit_info, Azure_Audit_Info):
printed_subscriptions = []
for key, value in audit_info.identity.subscriptions.items():
intermediate = key + " : " + value
printed_subscriptions.append(intermediate)
return (
"""
<div class="col-md-2">
<div class="card">
<div class="card-header">
Azure Assessment Summary
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>Azure Tenant IDs:</b> """
+ " ".join(audit_info.identity.tenant_ids)
+ """
</li>
<li class="list-group-item">
<b>Azure Tenant Domain:</b> """
+ audit_info.identity.domain
+ """
</li>
<li class="list-group-item">
<b>Azure Subscriptions:</b> """
+ " ".join(printed_subscriptions)
+ """
</li>
</ul>
</div>
</div>
<div class="col-md-4">
<div class="card">
<div class="card-header">
Azure Credentials
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>Azure Identity Type:</b> """
+ audit_info.identity.identity_type
+ """
</li>
<li class="list-group-item">
<b>Azure Identity ID:</b> """
+ audit_info.identity.identity_id
+ """
</li>
</ul>
</div>
</div>
"""
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def get_gcp_html_assessment_summary(audit_info):
try:
if isinstance(audit_info, GCP_Audit_Info):
try:
getattr(audit_info.credentials, "_service_account_email")
profile = (
audit_info.credentials._service_account_email
if audit_info.credentials._service_account_email is not None
else "default"
)
except AttributeError:
profile = "default"
return (
"""
<div class="col-md-2">
<div class="card">
<div class="card-header">
GCP Assessment Summary
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>GCP Project ID:</b> """
+ audit_info.project_id
+ """
</li>
</ul>
</div>
</div>
<div class="col-md-4">
<div class="card">
<div class="card-header">
GCP Credentials
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
<b>GCP Account:</b> """
+ profile
+ """
</li>
</ul>
</div>
</div>
"""
)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def get_assessment_summary(audit_info):
"""
get_assessment_summary gets the HTML assessment summary for the provider
"""
try:
# This is based in the Provider_Audit_Info class
# It is not pretty but useful
# AWS_Audit_Info --> aws
# GCP_Audit_Info --> gcp
# Azure_Audit_Info --> azure
provider = audit_info.__class__.__name__.split("_")[0].lower()
# Dynamically get the Provider quick inventory handler
provider_html_assessment_summary_function = (
f"get_{provider}_html_assessment_summary"
)
return getattr(
importlib.import_module(__name__), provider_html_assessment_summary_function
)(audit_info)
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
sys.exit(1)
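get_assessment_summary above resolves the per-provider builder purely by naming convention; the derivation itself is just:

provider = "AWS_Audit_Info".split("_")[0].lower()         # -> "aws"
handler_name = f"get_{provider}_html_assessment_summary"  # -> "get_aws_html_assessment_summary"
# handler = getattr(importlib.import_module(__name__), handler_name)  # resolved lazily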

View File

@@ -8,11 +8,17 @@ from prowler.config.config import (
timestamp_utc,
)
from prowler.lib.logger import logger
from prowler.lib.outputs.models import Compliance, ProductFields, Resource, Severity
from prowler.lib.outputs.models import (
Compliance,
ProductFields,
Resource,
Severity,
get_check_compliance,
)
from prowler.lib.utils.utils import hash_sha512, open_file
def fill_json_asff(finding_output, audit_info, finding):
def fill_json_asff(finding_output, audit_info, finding, output_options):
# Check if there are no resources in the finding
if finding.resource_arn == "":
if finding.resource_id == "":
@@ -31,7 +37,7 @@ def fill_json_asff(finding_output, audit_info, finding):
) = finding_output.CreatedAt = timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ")
finding_output.Severity = Severity(Label=finding.check_metadata.Severity.upper())
finding_output.Title = finding.check_metadata.CheckTitle
finding_output.Description = finding.check_metadata.Description
finding_output.Description = finding.status_extended
finding_output.Resources = [
Resource(
Id=finding.resource_arn,
@@ -40,14 +46,22 @@ def fill_json_asff(finding_output, audit_info, finding):
Region=finding.region,
)
]
# Check if any Requirement has > 64 characters
check_types = []
for type in finding.check_metadata.CheckType:
check_types.extend(type.split("/"))
# Iterate over each compliance framework
compliance_summary = []
associated_standards = []
check_compliance = get_check_compliance(finding, "aws", output_options)
for key, value in check_compliance.items():
associated_standards.append({"StandardsId": key})
item = f"{key} {' '.join(value)}"
if len(item) > 64:
item = item[0:63]
compliance_summary.append(item)
# Add ED to PASS or FAIL (PASSED/FAILED)
finding_output.Compliance = Compliance(
Status=finding.status + "ED",
RelatedRequirements=check_types,
AssociatedStandards=associated_standards,
RelatedRequirements=compliance_summary,
)
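A sketch of the clipping above: each RelatedRequirements entry sent to Security Hub is kept under the 64-character cap implied by the comment (note the slice actually keeps 63 characters):

item = "CIS-1.5 " + " ".join(f"1.1.{i}" for i in range(1, 15))  # hypothetical ids
if len(item) > 64:
    item = item[0:63]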
finding_output.Remediation = {
"Recommendation": finding.check_metadata.Remediation.Recommendation

View File

@@ -11,13 +11,37 @@ from prowler.lib.logger import logger
from prowler.providers.aws.lib.audit_info.models import AWS_Organizations_Info
def generate_provider_output_csv(provider: str, finding, audit_info, mode: str, fd):
def get_check_compliance(finding, provider, output_options):
try:
check_compliance = {}
# We have to retrieve all the check's compliance requirements
if finding.check_metadata.CheckID in output_options.bulk_checks_metadata:
for compliance in output_options.bulk_checks_metadata[
finding.check_metadata.CheckID
].Compliance:
compliance_fw = compliance.Framework
if compliance.Version:
compliance_fw = f"{compliance_fw}-{compliance.Version}"
if compliance.Provider == provider.upper():
if compliance_fw not in check_compliance:
check_compliance[compliance_fw] = []
for requirement in compliance.Requirements:
check_compliance[compliance_fw].append(requirement.Id)
return check_compliance
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
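The dict returned by get_check_compliance maps each framework-version string to the list of requirement ids the check covers. A hedged sketch of its shape (framework names and requirement ids are illustrative, not real metadata):

# Hypothetical output of get_check_compliance for one check
check_compliance = {
    "CIS-1.5": ["2.1.3"],
    "ENS-RD2022": ["mp.s.3.r1"],
}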
def generate_provider_output_csv(
provider: str, finding, audit_info, mode: str, fd, output_options
):
"""
generate_provider_output_csv generates the CSV output for the given finding based on the selected provider and returns the CSV writer and the finding's output model.
"""
try:
finding_output_model = f"{provider.capitalize()}_Check_Output_{mode.upper()}"
output_model = getattr(importlib.import_module(__name__), finding_output_model)
# Dynamically load the finding's output model class for the provider
finding_output_model = f"{provider.capitalize()}_Check_Output_{mode.upper()}"
output_model = getattr(importlib.import_module(__name__), finding_output_model)
@@ -32,6 +56,22 @@ def generate_provider_output_csv(provider: str, finding, audit_info, mode: str,
data[
"finding_unique_id"
] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.subscription}-{finding.resource_id}"
data["compliance"] = unroll_dict(
get_check_compliance(finding, provider, output_options)
)
finding_output = output_model(**data)
if provider == "gcp":
data["resource_id"] = finding.resource_id
data["resource_name"] = finding.resource_name
data["project_id"] = finding.project_id
data["location"] = finding.location
data[
"finding_unique_id"
] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.project_id}-{finding.resource_id}"
data["compliance"] = unroll_dict(
get_check_compliance(finding, provider, output_options)
)
finding_output = output_model(**data)
if provider == "aws":
@@ -43,6 +83,9 @@ def generate_provider_output_csv(provider: str, finding, audit_info, mode: str,
data[
"finding_unique_id"
] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{audit_info.audited_account}-{finding.region}-{finding.resource_id}"
data["compliance"] = unroll_dict(
get_check_compliance(finding, provider, output_options)
)
finding_output = output_model(**data)
if audit_info.organizations_metadata:
@@ -91,7 +134,7 @@ def fill_common_data_csv(finding: dict) -> dict:
"severity": finding.check_metadata.Severity,
"resource_type": finding.check_metadata.ResourceType,
"resource_details": finding.resource_details,
"resource_tags": finding.resource_tags,
"resource_tags": unroll_tags(finding.resource_tags),
"description": finding.check_metadata.Description,
"risk": finding.check_metadata.Risk,
"related_url": finding.check_metadata.RelatedUrl,
@@ -113,26 +156,99 @@ def fill_common_data_csv(finding: dict) -> dict:
"remediation_recommendation_code_other": (
finding.check_metadata.Remediation.Code.Other
),
"categories": __unroll_list__(finding.check_metadata.Categories),
"depends_on": __unroll_list__(finding.check_metadata.DependsOn),
"related_to": __unroll_list__(finding.check_metadata.RelatedTo),
"categories": unroll_list(finding.check_metadata.Categories),
"depends_on": unroll_list(finding.check_metadata.DependsOn),
"related_to": unroll_list(finding.check_metadata.RelatedTo),
"notes": finding.check_metadata.Notes,
}
return data
def __unroll_list__(listed_items: list):
def unroll_list(listed_items: list):
unrolled_items = ""
separator = "|"
for item in listed_items:
if not unrolled_items:
unrolled_items = f"{item}"
else:
unrolled_items = f"{unrolled_items}{separator}{item}"
if listed_items:
for item in listed_items:
if not unrolled_items:
unrolled_items = f"{item}"
else:
unrolled_items = f"{unrolled_items} {separator} {item}"
return unrolled_items
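A quick usage sketch of unroll_list, assuming the definition above (values are illustrative):

print(unroll_list(["internet-facing", "logging-disabled"]))
# -> "internet-facing | logging-disabled"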
def unroll_tags(tags: list):
unrolled_items = ""
separator = "|"
if tags and tags != [{}] and tags != [None]:
for item in tags:
# Check if the item is a dict of tags
if type(item) == dict:
for key, value in item.items():
if not unrolled_items:
# Check the pattern of tags (Key:Value or Key:key/Value:value)
if "Key" != key and "Value" != key:
unrolled_items = f"{key}={value}"
else:
if "Key" == key:
unrolled_items = f"{value}="
else:
unrolled_items = f"{value}"
else:
if "Key" != key and "Value" != key:
unrolled_items = (
f"{unrolled_items} {separator} {key}={value}"
)
else:
if "Key" == key:
unrolled_items = (
f"{unrolled_items} {separator} {value}="
)
else:
unrolled_items = f"{unrolled_items}{value}"
elif not unrolled_items:
unrolled_items = f"{item}"
else:
unrolled_items = f"{unrolled_items} {separator} {item}"
return unrolled_items
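A usage sketch of unroll_tags with AWS-style Key/Value tag pairs (illustrative values):

print(unroll_tags([{"Key": "env", "Value": "prod"}, {"Key": "team", "Value": "sec"}]))
# -> "env=prod | team=sec"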
def unroll_dict(dict: dict):
unrolled_items = ""
separator = "|"
for key, value in dict.items():
if type(value) == list:
value = ", ".join(value)
if not unrolled_items:
unrolled_items = f"{key}: {value}"
else:
unrolled_items = f"{unrolled_items} {separator} {key}: {value}"
return unrolled_items
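A usage sketch of unroll_dict with a hypothetical compliance dict:

print(unroll_dict({"CIS-1.5": ["2.1.3", "2.1.4"], "ENS-RD2022": ["mp.s.3.r1"]}))
# -> "CIS-1.5: 2.1.3, 2.1.4 | ENS-RD2022: mp.s.3.r1"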
def parse_html_string(str: str):
string = ""
for elem in str.split(" | "):
if elem:
string += f"\n&#x2022;{elem}\n"
return string
def parse_json_tags(tags: list):
dict_tags = {}
if tags and tags != [{}] and tags != [None]:
for tag in tags:
if "Key" in tag and "Value" in tag:
dict_tags[tag["Key"]] = tag["Value"]
else:
dict_tags.update(tag)
return dict_tags
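A usage sketch of parse_json_tags (illustrative tags):

print(parse_json_tags([{"Key": "env", "Value": "prod"}, {"owner": "security"}]))
# -> {'env': 'prod', 'owner': 'security'}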
def generate_csv_fields(format: Any) -> list[str]:
"""Generates the CSV headers for the given class"""
csv_fields = []
@@ -162,7 +278,7 @@ class Check_Output_CSV(BaseModel):
severity: str
resource_type: str
resource_details: str
resource_tags: list
resource_tags: str
description: str
risk: str
related_url: str
@@ -172,6 +288,7 @@ class Check_Output_CSV(BaseModel):
remediation_recommendation_code_terraform: str
remediation_recommendation_code_cli: str
remediation_recommendation_code_other: str
compliance: str
categories: str
depends_on: str
related_to: str
@@ -206,7 +323,20 @@ class Azure_Check_Output_CSV(Check_Output_CSV):
resource_name: str = ""
def generate_provider_output_json(provider: str, finding, audit_info, mode: str, fd):
class Gcp_Check_Output_CSV(Check_Output_CSV):
"""
Gcp_Check_Output_CSV generates a finding's output in CSV format for the GCP provider.
"""
project_id: str = ""
location: str = ""
resource_id: str = ""
resource_name: str = ""
def generate_provider_output_json(
provider: str, finding, audit_info, mode: str, output_options
):
"""
generate_provider_output_json automatically configures the output based on the selected provider and returns the Check_Output_JSON object.
"""
@@ -228,6 +358,19 @@ def generate_provider_output_json(provider: str, finding, audit_info, mode: str,
finding_output.ResourceId = finding.resource_id
finding_output.ResourceName = finding.resource_name
finding_output.FindingUniqueId = f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.subscription}-{finding.resource_id}"
finding_output.Compliance = get_check_compliance(
finding, provider, output_options
)
if provider == "gcp":
finding_output.ProjectId = audit_info.project_id
finding_output.Location = finding.location
finding_output.ResourceId = finding.resource_id
finding_output.ResourceName = finding.resource_name
finding_output.FindingUniqueId = f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.project_id}-{finding.resource_id}"
finding_output.Compliance = get_check_compliance(
finding, provider, output_options
)
if provider == "aws":
finding_output.Profile = audit_info.profile
@@ -235,7 +378,11 @@ def generate_provider_output_json(provider: str, finding, audit_info, mode: str,
finding_output.Region = finding.region
finding_output.ResourceId = finding.resource_id
finding_output.ResourceArn = finding.resource_arn
finding_output.ResourceTags = parse_json_tags(finding.resource_tags)
finding_output.FindingUniqueId = f"prowler-{provider}-{finding.check_metadata.CheckID}-{audit_info.audited_account}-{finding.region}-{finding.resource_id}"
finding_output.Compliance = get_check_compliance(
finding, provider, output_options
)
if audit_info.organizations_metadata:
finding_output.OrganizationsInfo = (
@@ -271,11 +418,11 @@ class Check_Output_JSON(BaseModel):
Severity: str
ResourceType: str
ResourceDetails: str = ""
Tags: dict
Description: str
Risk: str
RelatedUrl: str
Remediation: Remediation
Compliance: Optional[dict]
Categories: List[str]
DependsOn: List[str]
RelatedTo: List[str]
@@ -293,6 +440,7 @@ class Aws_Check_Output_JSON(Check_Output_JSON):
Region: str = ""
ResourceId: str = ""
ResourceArn: str = ""
ResourceTags: list = []
def __init__(self, **metadata):
super().__init__(**metadata)
@@ -300,7 +448,7 @@ class Aws_Check_Output_JSON(Check_Output_JSON):
class Azure_Check_Output_JSON(Check_Output_JSON):
"""
Aws_Check_Output_JSON generates a finding's output in JSON format for the AWS provider.
Azure_Check_Output_JSON generates a finding's output in JSON format for the Azure provider.
"""
Tenant_Domain: str = ""
@@ -312,12 +460,27 @@ class Azure_Check_Output_JSON(Check_Output_JSON):
super().__init__(**metadata)
class Gcp_Check_Output_JSON(Check_Output_JSON):
"""
Gcp_Check_Output_JSON generates a finding's output in JSON format for the GCP provider.
"""
ProjectId: str = ""
ResourceId: str = ""
ResourceName: str = ""
Location: str = ""
def __init__(self, **metadata):
super().__init__(**metadata)
class Check_Output_CSV_ENS_RD2022(BaseModel):
"""
Check_Output_CSV_ENS_RD2022 generates a finding's output in CSV ENS RD2022 format.
"""
Provider: str
Description: str
AccountId: str
Region: str
AssessmentDate: str
@@ -338,10 +501,11 @@ class Check_Output_CSV_ENS_RD2022(BaseModel):
class Check_Output_CSV_CIS(BaseModel):
"""
Check_Output_CSV_ENS_RD2022 generates a finding's output in CSV CIS format.
Check_Output_CSV_CIS generates a finding's output in CSV CIS format.
"""
Provider: str
Description: str
AccountId: str
Region: str
AssessmentDate: str
@@ -363,6 +527,29 @@ class Check_Output_CSV_CIS(BaseModel):
CheckId: str
class Check_Output_CSV_Generic_Compliance(BaseModel):
"""
Check_Output_CSV_Generic_Compliance generates a finding's output in CSV Generic Compliance format.
"""
Provider: str
Description: str
AccountId: str
Region: str
AssessmentDate: str
Requirements_Id: str
Requirements_Description: str
Requirements_Attributes_Section: Optional[str]
Requirements_Attributes_SubSection: Optional[str]
Requirements_Attributes_SubGroup: Optional[str]
Requirements_Attributes_Service: str
Requirements_Attributes_Soc_Type: Optional[str]
Status: str
StatusExtended: str
ResourceId: str
CheckId: str
# JSON ASFF Output
class ProductFields(BaseModel):
ProviderName: str = "Prowler"
@@ -384,6 +571,7 @@ class Resource(BaseModel):
class Compliance(BaseModel):
Status: str
RelatedRequirements: List[str]
AssociatedStandards: List[dict]
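A hedged example of the ASFF Compliance payload that fill_json_asff builds above (framework name and requirement id are illustrative):

compliance = Compliance(
    Status="PASSED",
    RelatedRequirements=["CIS-1.5 2.1.3"],
    AssociatedStandards=[{"StandardsId": "CIS-1.5"}],
)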
class Check_Output_JSON_ASFF(BaseModel):


@@ -4,6 +4,7 @@ import sys
from colorama import Fore, Style
from prowler.config.config import (
available_compliance_frameworks,
csv_file_suffix,
html_file_suffix,
json_asff_file_suffix,
@@ -11,7 +12,7 @@ from prowler.config.config import (
orange_color,
)
from prowler.lib.logger import logger
from prowler.lib.outputs.compliance import fill_compliance
from prowler.lib.outputs.compliance import add_manual_controls, fill_compliance
from prowler.lib.outputs.file_descriptors import fill_file_descriptors
from prowler.lib.outputs.html import fill_html
from prowler.lib.outputs.json import fill_json_asff
@@ -19,6 +20,7 @@ from prowler.lib.outputs.models import (
Check_Output_JSON_ASFF,
generate_provider_output_csv,
generate_provider_output_json,
unroll_tags,
)
from prowler.providers.aws.lib.allowlist.allowlist import is_allowlisted
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
@@ -31,6 +33,8 @@ def stdout_report(finding, color, verbose, is_quiet):
details = finding.region
if finding.check_metadata.Provider == "azure":
details = finding.check_metadata.ServiceName
if finding.check_metadata.Provider == "gcp":
details = finding.location
if verbose and not (is_quiet and finding.status != "FAIL"):
print(
@@ -69,6 +73,7 @@ def report(check_findings, output_options, audit_info):
finding.check_metadata.CheckID,
finding.region,
finding.resource_id,
unroll_tags(finding.resource_tags),
):
finding.status = "WARNING"
# Print findings by stdout
@@ -82,9 +87,9 @@ def report(check_findings, output_options, audit_info):
if not (finding.status != "FAIL" and output_options.is_quiet):
# AWS specific outputs
if finding.check_metadata.Provider == "aws":
if (
"ens_rd2022_aws" in output_options.output_modes
or "cis" in str(output_options.output_modes)
if any(
compliance in output_options.output_modes
for compliance in available_compliance_frameworks
):
fill_compliance(
output_options,
@@ -93,13 +98,17 @@ def report(check_findings, output_options, audit_info):
file_descriptors,
)
if "html" in file_descriptors:
fill_html(file_descriptors["html"], finding)
file_descriptors["html"].write("")
add_manual_controls(
output_options,
audit_info,
file_descriptors,
)
if "json-asff" in file_descriptors:
finding_output = Check_Output_JSON_ASFF()
fill_json_asff(finding_output, audit_info, finding)
fill_json_asff(
finding_output, audit_info, finding, output_options
)
json.dump(
finding_output.dict(),
@@ -122,6 +131,10 @@ def report(check_findings, output_options, audit_info):
)
# Common outputs
if "html" in file_descriptors:
fill_html(file_descriptors["html"], finding, output_options)
file_descriptors["html"].write("")
if "csv" in file_descriptors:
csv_writer, finding_output = generate_provider_output_csv(
finding.check_metadata.Provider,
@@ -129,6 +142,7 @@ def report(check_findings, output_options, audit_info):
audit_info,
"csv",
file_descriptors["csv"],
output_options,
)
csv_writer.writerow(finding_output.__dict__)
@@ -138,7 +152,7 @@ def report(check_findings, output_options, audit_info):
finding,
audit_info,
"json",
file_descriptors["json"],
output_options,
)
json.dump(
finding_output.dict(),
@@ -196,6 +210,8 @@ def send_to_s3_bucket(
filename = f"{output_filename}{json_asff_file_suffix}"
elif output_mode == "html":
filename = f"{output_filename}{html_file_suffix}"
else: # Compliance output mode
filename = f"{output_filename}_{output_mode}{csv_file_suffix}"
logger.info(f"Sending outputs to S3 bucket {output_bucket}")
bucket_remote_dir = output_directory
while "prowler/" in bucket_remote_dir: # Check if it is not a custom directory


@@ -0,0 +1,135 @@
import sys
from slack_sdk import WebClient
from prowler.config.config import aws_logo, azure_logo, gcp_logo, square_logo_img
from prowler.lib.logger import logger
def send_slack_message(token, channel, stats, provider, audit_info):
try:
client = WebClient(token=token)
identity, logo = create_message_identity(provider, audit_info)
response = client.chat_postMessage(
username="Prowler",
icon_url=square_logo_img,
channel="#" + channel,
blocks=create_message_blocks(identity, logo, stats),
)
return response
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def create_message_identity(provider, audit_info):
try:
identity = ""
logo = aws_logo
if provider == "aws":
identity = f"AWS Account *{audit_info.audited_account}*"
elif provider == "gcp":
identity = f"GCP Project *{audit_info.project_id}*"
logo = gcp_logo
elif provider == "azure":
printed_subscriptions = []
for key, value in audit_info.identity.subscriptions.items():
intermediate = "- *" + key + ": " + value + "*\n"
printed_subscriptions.append(intermediate)
identity = f"Azure Subscriptions:\n{''.join(printed_subscriptions)}"
logo = azure_logo
return identity, logo
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def create_message_blocks(identity, logo, stats):
try:
blocks = [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"Hey there 👋 \n I'm *Prowler*, _the handy cloud security tool_ :cloud::key:\n\n I have just finished the security assessment on your {identity} with a total of *{stats['findings_count']}* findings.",
},
"accessory": {
"type": "image",
"image_url": logo,
"alt_text": "Provider Logo",
},
},
{"type": "divider"},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"\n:white_check_mark: *{stats['total_pass']} Passed findings* ({round(stats['total_pass']/stats['findings_count']*100,2)}%)\n",
},
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"\n:x: *{stats['total_fail']} Failed findings* ({round(stats['total_fail']/stats['findings_count']*100,2)}%)\n ",
},
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"\n:bar_chart: *{stats['resources_count']} Scanned Resources*\n",
},
},
{"type": "divider"},
{
"type": "context",
"elements": [
{
"type": "mrkdwn",
"text": f"Used parameters: `prowler {' '.join(sys.argv[1:])} `",
}
],
},
{"type": "divider"},
{
"type": "section",
"text": {"type": "mrkdwn", "text": "Join our Slack Community!"},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Prowler :slack:"},
"url": "https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog",
},
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "Feel free to contact us in our repo",
},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Prowler :github:"},
"url": "https://github.com/prowler-cloud/prowler",
},
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "See all the things you can do with ProwlerPro",
},
"accessory": {
"type": "button",
"text": {"type": "plain_text", "text": "Prowler Pro"},
"url": "https://prowler.pro",
},
},
]
return blocks
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
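For context, a hypothetical stats payload carrying the keys the blocks above read (all values illustrative):

stats = {
    "findings_count": 150,
    "total_pass": 120,
    "total_fail": 30,
    "resources_count": 87,
}
# send_slack_message(token, "security-alerts", stats, "aws", audit_info)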


@@ -20,12 +20,18 @@ def display_summary_table(
entity_type = "Account"
audited_entities = audit_info.audited_account
elif provider == "azure":
if audit_info.identity.domain:
if (
audit_info.identity.domain
!= "Unknown tenant domain (missing AAD permissions)"
):
entity_type = "Tenant Domain"
audited_entities = audit_info.identity.domain
else:
entity_type = "Tenant ID/s"
audited_entities = " ".join(audit_info.identity.tenant_ids)
elif provider == "gcp":
entity_type = "Project ID"
audited_entities = audit_info.project_id
if findings:
current = {
@@ -53,7 +59,6 @@ def display_summary_table(
current["Service"] != finding.check_metadata.ServiceName
and current["Service"]
):
add_service_to_table(findings_table, current)
current["Total"] = current["Critical"] = current["High"] = current[


@@ -1,16 +1,26 @@
import json
import os
import sys
import tempfile
from hashlib import sha512
from io import TextIOWrapper
from os.path import exists
from typing import Any
from detect_secrets import SecretsCollection
from detect_secrets.settings import default_settings
from prowler.lib.logger import logger
def open_file(input_file: str, mode: str = "r") -> TextIOWrapper:
try:
f = open(input_file, mode)
except OSError:
logger.critical(
"Ooops! You reached your user session maximum open files. To solve this issue, increase the shell session limit by running this command `ulimit -n 4096`. For more info visit https://docs.prowler.cloud/en/latest/troubleshooting/"
)
sys.exit(1)
except Exception as e:
logger.critical(
f"{input_file}: {e.__class__.__name__}[{e.__traceback__.tb_lineno}]"
@@ -49,3 +59,20 @@ def file_exists(filename: str):
# create sha512 hash for string
def hash_sha512(string: str) -> str:
return sha512(string.encode("utf-8")).hexdigest()[0:9]
def detect_secrets_scan(data):
temp_data_file = tempfile.NamedTemporaryFile(delete=False)
temp_data_file.write(bytes(data, encoding="raw_unicode_escape"))
temp_data_file.close()
secrets = SecretsCollection()
with default_settings():
secrets.scan_file(temp_data_file.name)
os.remove(temp_data_file.name)
detect_secrets_output = secrets.json()
if detect_secrets_output:
return detect_secrets_output[temp_data_file.name]
else:
return None
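A usage sketch of detect_secrets_scan, assuming detect-secrets is installed (the sample string uses AWS's documented example secret key):

result = detect_secrets_scan("aws_secret_access_key = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'")
# Either a list of detected secrets for the scanned temp file, or None if nothing was flagged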


@@ -7,6 +7,7 @@ from botocore.credentials import RefreshableCredentials
from botocore.session import get_session
from prowler.config.config import aws_services_json_file
from prowler.lib.check.check import list_modules, recover_checks_from_service
from prowler.lib.logger import logger
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
@@ -35,7 +36,7 @@ class AWS_Provider:
secret_key=audit_info.credentials.aws_secret_access_key,
token=audit_info.credentials.aws_session_token,
expiry_time=audit_info.credentials.expiration,
refresh_using=self.refresh,
refresh_using=self.refresh_credentials,
method="sts-assume-role",
)
# Here we need the botocore session since it needs to use refreshable credentials
@@ -59,7 +60,7 @@ class AWS_Provider:
# Refresh credentials method using assume role
# This method is called by appending "()" to its name, so it cannot accept arguments
# https://github.com/boto/botocore/blob/098cc255f81a25b852e1ecdeb7adebd94c7b1b73/botocore/credentials.py#L570
def refresh(self):
def refresh_credentials(self):
logger.info("Refreshing assumed credentials...")
response = assume_role(self.aws_session, self.role_info)
@@ -84,7 +85,7 @@ def assume_role(session: session.Session, assumed_role_info: AWS_Assume_Role) ->
if assumed_role_info.external_id:
assumed_credentials = sts_client.assume_role(
RoleArn=assumed_role_info.role_arn,
RoleSessionName="ProwlerProAsessmentSession",
RoleSessionName="ProwlerAsessmentSession",
DurationSeconds=assumed_role_info.session_duration,
ExternalId=assumed_role_info.external_id,
)
@@ -110,8 +111,8 @@ def generate_regional_clients(
regional_clients = {}
# Get json locally
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
f = open_file(f"{actual_directory}/{aws_services_json_file}")
data = parse_json_file(f)
with open_file(f"{actual_directory}/{aws_services_json_file}") as f:
data = parse_json_file(f)
# Check if it is a subservice
json_regions = data["services"][service]["regions"][
audit_info.audited_partition
@@ -130,7 +131,7 @@ def generate_regional_clients(
regions = regions[:1]
for region in regions:
regional_client = audit_info.audit_session.client(
service, region_name=region
service, region_name=region, config=audit_info.session_config
)
regional_client.region = region
regional_clients[region] = regional_client
@@ -144,8 +145,8 @@ def generate_regional_clients(
def get_aws_available_regions():
try:
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
f = open_file(f"{actual_directory}/{aws_services_json_file}")
data = parse_json_file(f)
with open_file(f"{actual_directory}/{aws_services_json_file}") as f:
data = parse_json_file(f)
regions = set()
for service in data["services"].values():
@@ -156,3 +157,73 @@ def get_aws_available_regions():
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
return []
def get_checks_from_input_arn(audit_resources: list, provider: str) -> set:
"""get_checks_from_input_arn gets the list of checks from the input arns"""
checks_from_arn = set()
# If audit resources were provided, execute only checks for their services
if audit_resources:
services_without_subservices = ["guardduty", "kms", "s3", "elb"]
service_list = set()
sub_service_list = set()
for resource in audit_resources:
service = resource.split(":")[2]
sub_service = resource.split(":")[5].split("/")[0].replace("-", "_")
# WAF services do not have checks
if service != "wafv2" and service != "waf":
# Parse services when they are different in the ARNs
if service == "lambda":
service = "awslambda"
if service == "elasticloadbalancing":
service = "elb"
elif service == "logs":
service = "cloudwatch"
# Check if Prowler has checks in service
try:
list_modules(provider, service)
except ModuleNotFoundError:
# Service is not supported
pass
else:
service_list.add(service)
# Get subservices to execute only applicable checks
if service not in services_without_subservices:
# Parse some specific subservices
if service == "ec2":
if sub_service == "security_group":
sub_service = "securitygroup"
if sub_service == "network_acl":
sub_service = "networkacl"
if sub_service == "image":
sub_service = "ami"
if service == "rds":
if sub_service == "cluster_snapshot":
sub_service = "snapshot"
sub_service_list.add(sub_service)
else:
sub_service_list.add(service)
checks = recover_checks_from_service(service_list, provider)
# Filter only checks with audited subservices
for check in checks:
if any(sub_service in check for sub_service in sub_service_list):
if not (sub_service == "policy" and "password_policy" in check):
checks_from_arn.add(check)
# Return final checks list
return sorted(checks_from_arn)
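A sketch of the ARN field parsing that drives the service/subservice mapping above (illustrative ARN):

resource = "arn:aws:ec2:eu-west-1:123456789012:security-group/sg-0123456789abcdef0"
service = resource.split(":")[2]  # -> "ec2"
sub_service = resource.split(":")[5].split("/")[0].replace("-", "_")  # -> "security_group", mapped to "securitygroup" above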
def get_regions_from_audit_resources(audit_resources: list) -> list:
"""get_regions_from_audit_resources gets the regions from the audit resources arns"""
audited_regions = []
for resource in audit_resources:
region = resource.split(":")[3]
if region and region not in audited_regions: # Check if the ARN has a region
audited_regions.append(region)
if audited_regions:
return audited_regions
return None
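A usage sketch of get_regions_from_audit_resources; the region is the fourth ":"-separated ARN field (illustrative ARN):

print(get_regions_from_audit_resources(
    ["arn:aws:ec2:eu-west-1:123456789012:security-group/sg-0123456789abcdef0"]
))
# -> ['eu-west-1']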

File diff suppressed because it is too large


@@ -3,12 +3,20 @@ import sys
import yaml
from boto3.dynamodb.conditions import Attr
from schema import Schema
from schema import Optional, Schema
from prowler.lib.logger import logger
allowlist_schema = Schema(
{"Accounts": {str: {"Checks": {str: {"Regions": list, "Resources": list}}}}}
{
"Accounts": {
str: {
"Checks": {
str: {"Regions": list, "Resources": list, Optional("Tags"): list}
}
}
}
}
)
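A hypothetical allowlist dict that validates against the schema above (account id, check name, and tags are illustrative):

allowlist = {
    "Accounts": {
        "123456789012": {
            "Checks": {
                "s3_bucket_public_access": {
                    "Regions": ["*"],
                    "Resources": ["my-public-bucket"],
                    "Tags": ["environment=dev"],
                }
            }
        }
    }
}
allowlist_schema.validate(allowlist)  # raises SchemaError if the shape is wrong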
@@ -61,14 +69,25 @@ def parse_allowlist_file(audit_info, allowlist_file):
dynamodb_items.update(response["Items"])
for item in dynamodb_items:
# Create allowlist for every item
allowlist["Accounts"][item["Accounts"]] = {
"Checks": {
item["Checks"]: {
"Regions": item["Regions"],
"Resources": item["Resources"],
if "Tags" in item:
allowlist["Accounts"][item["Accounts"]] = {
"Checks": {
item["Checks"]: {
"Regions": item["Regions"],
"Resources": item["Resources"],
"Tags": item["Tags"],
}
}
}
else:
allowlist["Accounts"][item["Accounts"]] = {
"Checks": {
item["Checks"]: {
"Regions": item["Regions"],
"Resources": item["Resources"],
}
}
}
}
else:
with open(allowlist_file) as f:
allowlist = yaml.safe_load(f)["Allowlist"]
@@ -87,18 +106,18 @@ def parse_allowlist_file(audit_info, allowlist_file):
sys.exit(1)
def is_allowlisted(allowlist, audited_account, check, region, resource):
def is_allowlisted(allowlist, audited_account, check, region, resource, tags):
try:
if audited_account in allowlist["Accounts"]:
if is_allowlisted_in_check(
allowlist, audited_account, check, region, resource
allowlist, audited_account, check, region, resource, tags
):
return True
# If there is a *, it applies to all accounts
if "*" in allowlist["Accounts"]:
audited_account = "*"
if is_allowlisted_in_check(
allowlist, audited_account, check, region, resource
allowlist, audited_account, check, region, resource, tags
):
return True
return False
@@ -109,21 +128,35 @@ def is_allowlisted(allowlist, audited_account, check, region, resource):
sys.exit(1)
def is_allowlisted_in_check(allowlist, audited_account, check, region, resource):
def is_allowlisted_in_check(allowlist, audited_account, check, region, resource, tags):
try:
# If there is a *, it affects to all checks
if "*" in allowlist["Accounts"][audited_account]["Checks"]:
check = "*"
if is_allowlisted_in_region(
allowlist, audited_account, check, region, resource
):
return True
# Check if there is the specific check
if check in allowlist["Accounts"][audited_account]["Checks"]:
if is_allowlisted_in_region(
allowlist, audited_account, check, region, resource
):
return True
for allowlisted_check in allowlist["Accounts"][audited_account][
"Checks"
].keys():
# If there is a *, it applies to all checks
if "*" == allowlisted_check:
check = "*"
if is_allowlisted_in_region(
allowlist, audited_account, check, region, resource, tags
):
return True
# Check if there is the specific check
elif check == allowlisted_check:
if is_allowlisted_in_region(
allowlist, audited_account, check, region, resource, tags
):
return True
# Check if check is a regex
elif re.search(allowlisted_check, check):
if is_allowlisted_in_region(
allowlist,
audited_account,
allowlisted_check,
region,
resource,
tags,
):
return True
return False
except Exception as error:
logger.critical(
@@ -132,30 +165,67 @@ def is_allowlisted_in_check(allowlist, audited_account, check, region, resource)
sys.exit(1)
def is_allowlisted_in_region(allowlist, audited_account, check, region, resource):
def is_allowlisted_in_region(allowlist, audited_account, check, region, resource, tags):
try:
# If there is a *, it applies to all regions
if "*" in allowlist["Accounts"][audited_account]["Checks"][check]["Regions"]:
for elem in allowlist["Accounts"][audited_account]["Checks"][check][
"Resources"
]:
# Check if it is an *
if elem == "*":
elem = ".*"
if re.search(elem, resource):
if is_allowlisted_in_tags(
allowlist["Accounts"][audited_account]["Checks"][check],
elem,
resource,
tags,
):
return True
# Check if there is the specific region
if region in allowlist["Accounts"][audited_account]["Checks"][check]["Regions"]:
for elem in allowlist["Accounts"][audited_account]["Checks"][check][
"Resources"
]:
# Check if it is an *
if elem == "*":
elem = ".*"
if re.search(elem, resource):
if is_allowlisted_in_tags(
allowlist["Accounts"][audited_account]["Checks"][check],
elem,
resource,
tags,
):
return True
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
)
sys.exit(1)
def is_allowlisted_in_tags(check_allowlist, elem, resource, tags):
try:
# Check if it is an *
if elem == "*":
elem = ".*"
# Check if there are allowlisted tags
if "Tags" in check_allowlist:
# Check if there are resource tags
if not tags or not re.search(elem, resource):
return False
all_allowed_tags_in_resource_tags = True
for allowed_tag in check_allowlist["Tags"]:
found_allowed_tag = False
for resource_tag in tags:
if re.search(allowed_tag, resource_tag):
found_allowed_tag = True
break
if not found_allowed_tag:
all_allowed_tags_in_resource_tags = False
return all_allowed_tags_in_resource_tags
else:
if re.search(elem, resource):
return True
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
)
sys.exit(1)


@@ -1,50 +1,48 @@
import re
from arnparse import arnparse
from prowler.providers.aws.lib.arn.error import (
RoleArnParsingEmptyResource,
RoleArnParsingFailedMissingFields,
RoleArnParsingIAMRegionNotEmpty,
RoleArnParsingInvalidAccountID,
RoleArnParsingInvalidResourceType,
RoleArnParsingPartitionEmpty,
RoleArnParsingServiceNotIAM,
RoleArnParsingServiceNotIAMnorSTS,
)
from prowler.providers.aws.lib.arn.models import ARN
def arn_parsing(arn):
# check for number of fields, must be six
if len(arn.split(":")) != 6:
raise RoleArnParsingFailedMissingFields
def parse_iam_credentials_arn(arn: str) -> ARN:
arn_parsed = ARN(arn)
# First check if region is empty (in IAM ARNs the region is always empty)
if arn_parsed.region:
raise RoleArnParsingIAMRegionNotEmpty
else:
arn_parsed = arnparse(arn)
# First check if region is empty (in IAM arns region is always empty)
if arn_parsed.region is not None:
raise RoleArnParsingIAMRegionNotEmpty
# check if needed fields are filled:
# - partition
# - service
# - account_id
# - resource_type
# - resource
if arn_parsed.partition is None or arn_parsed.partition == "":
raise RoleArnParsingPartitionEmpty
elif arn_parsed.service != "iam" and arn_parsed.service != "sts":
raise RoleArnParsingServiceNotIAMnorSTS
elif (
arn_parsed.account_id is None
or len(arn_parsed.account_id) != 12
or not arn_parsed.account_id.isnumeric()
):
raise RoleArnParsingInvalidAccountID
elif (
arn_parsed.resource_type != "role"
and arn_parsed.resource_type != "user"
and arn_parsed.resource_type != "assumed-role"
):
raise RoleArnParsingInvalidResourceType
elif arn_parsed.resource == "":
raise RoleArnParsingEmptyResource
else:
# check if needed fields are filled:
# - partition
# - service
# - account_id
# - resource_type
# - resource
if arn_parsed.partition is None:
raise RoleArnParsingPartitionEmpty
elif arn_parsed.service != "iam":
raise RoleArnParsingServiceNotIAM
elif (
arn_parsed.account_id is None
or len(arn_parsed.account_id) != 12
or not arn_parsed.account_id.isnumeric()
):
raise RoleArnParsingInvalidAccountID
elif arn_parsed.resource_type != "role":
raise RoleArnParsingInvalidResourceType
elif arn_parsed.resource == "":
raise RoleArnParsingEmptyResource
else:
return arn_parsed
return arn_parsed
def is_valid_arn(arn: str) -> bool:


@@ -1,43 +1,49 @@
class RoleArnParsingFailedMissingFields(Exception):
# The arn contains a numberof fields different than six separated by :"
# The ARN contains a number of fields different from six separated by ":"
def __init__(self):
self.message = "The assumed role arn contains a number of fields different than six separated by :, please input a valid arn"
self.message = "The assumed role ARN contains an invalid number of fields separated by : or it does not start by arn, please input a valid ARN"
super().__init__(self.message)
class RoleArnParsingIAMRegionNotEmpty(Exception):
# The arn contains a non-empty value for region, since it is an IAM arn is not valid
# The ARN contains a non-empty value for region, which is not valid since it is an IAM ARN
def __init__(self):
self.message = "The assumed role arn contains a non-empty value for region, since it is an IAM arn is not valid, please input a valid arn"
self.message = "The assumed role ARN contains a non-empty value for region, since it is an IAM ARN is not valid, please input a valid ARN"
super().__init__(self.message)
class RoleArnParsingPartitionEmpty(Exception):
# The arn contains an empty value for partition
# The ARN contains an empty value for partition
def __init__(self):
self.message = "The assumed role arn does not contain a value for partition, please input a valid arn"
self.message = "The assumed role ARN does not contain a value for partition, please input a valid ARN"
super().__init__(self.message)
class RoleArnParsingServiceNotIAM(Exception):
class RoleArnParsingServiceNotIAMnorSTS(Exception):
def __init__(self):
self.message = "The assumed role arn contains a value for service distinct than iam, please input a valid arn"
self.message = "The assumed role ARN contains a value for service distinct than IAM or STS, please input a valid ARN"
super().__init__(self.message)
class RoleArnParsingServiceNotSTS(Exception):
def __init__(self):
self.message = "The assumed role ARN contains a value for service distinct than STS, please input a valid ARN"
super().__init__(self.message)
class RoleArnParsingInvalidAccountID(Exception):
def __init__(self):
self.message = "The assumed role arn contains a value for account id empty or invalid, a valid account id must be composed of 12 numbers, please input a valid arn"
self.message = "The assumed role ARN contains a value for account id empty or invalid, a valid account id must be composed of 12 numbers, please input a valid ARN"
super().__init__(self.message)
class RoleArnParsingInvalidResourceType(Exception):
def __init__(self):
self.message = "The assumed role arn contains a value for resource type different than role, please input a valid arn"
self.message = "The assumed role ARN contains a value for resource type different than role, please input a valid ARN"
super().__init__(self.message)
class RoleArnParsingEmptyResource(Exception):
def __init__(self):
self.message = "The assumed role arn does not contain a value for resource, please input a valid arn"
self.message = "The assumed role ARN does not contain a value for resource, please input a valid ARN"
super().__init__(self.message)


@@ -0,0 +1,57 @@
from typing import Optional
from pydantic import BaseModel
from prowler.providers.aws.lib.arn.error import RoleArnParsingFailedMissingFields
class ARN(BaseModel):
partition: str
service: str
region: Optional[str] # IAM ARNs do not have a region
account_id: str
resource: str
resource_type: str
def __init__(self, arn):
# Validate the ARN
## Check that the ARN starts with "arn:"
if not arn.startswith("arn:"):
raise RoleArnParsingFailedMissingFields
## Retrieve fields
arn_elements = arn.split(":", 5)
data = {
"partition": arn_elements[1],
"service": arn_elements[2],
"region": arn_elements[3] if arn_elements[3] != "" else None,
"account_id": arn_elements[4],
"resource": arn_elements[5],
"resource_type": get_arn_resource_type(arn, arn_elements[2]),
}
if "/" in data["resource"]:
data["resource"] = data["resource"].split("/", 1)[1]
elif ":" in data["resource"]:
data["resource"] = data["resource"].split(":", 1)[1]
# Calls Pydantic's BaseModel __init__
super().__init__(**data)
def get_arn_resource_type(arn, service):
if service == "s3":
resource_type = "bucket"
elif service == "sns":
resource_type = "topic"
elif service == "sqs":
resource_type = "queue"
elif service == "apigateway":
split_parts = arn.split(":")[5].split("/")
if "integration" in split_parts and "responses" in split_parts:
resource_type = "restapis-resources-methods-integration-response"
elif "documentation" in split_parts and "parts" in split_parts:
resource_type = "restapis-documentation-parts"
else:
resource_type = arn.split(":")[5].split("/")[1]
else:
resource_type = arn.split(":")[5].split("/")[0]
return resource_type
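A usage sketch of the ARN model with a hypothetical role ARN:

arn = ARN("arn:aws:iam::123456789012:role/prowler-role")
# arn.partition == "aws", arn.service == "iam", arn.region is None,
# arn.account_id == "123456789012", arn.resource_type == "role", arn.resource == "prowler-role"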


@@ -1,4 +1,5 @@
from boto3 import session
from botocore.config import Config
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
@@ -9,6 +10,9 @@ current_audit_info = AWS_Audit_Info(
profile_name=None,
botocore_session=None,
),
# Default standard retry config
# https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html
session_config=Config(retries={"max_attempts": 3, "mode": "standard"}),
audited_account=None,
audited_user_id=None,
audited_partition=None,

View File

@@ -3,6 +3,7 @@ from datetime import datetime
from typing import Any, Optional
from boto3 import session
from botocore.config import Config
@dataclass
@@ -33,6 +34,8 @@ class AWS_Organizations_Info:
class AWS_Audit_Info:
original_session: session.Session
audit_session: session.Session
# https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html
session_config: Config
audited_account: int
audited_identity_arn: str
audited_user_id: str

View File

@@ -0,0 +1,59 @@
import sys
from boto3 import session
from colorama import Fore, Style
from prowler.lib.logger import logger
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
AWS_STS_GLOBAL_ENDPOINT_REGION = "us-east-1"
def validate_aws_credentials(session: session, input_regions: list) -> dict:
try:
# For a valid STS GetCallerIdentity we have to use the right AWS Region
if input_regions is None or len(input_regions) == 0:
if session.region_name is not None:
aws_region = session.region_name
else:
# If no region was passed with -f/--region,
# we use the global STS endpoint region, us-east-1
aws_region = AWS_STS_GLOBAL_ENDPOINT_REGION
else:
# Get the first region passed with -f/--region
aws_region = input_regions[0]
validate_credentials_client = session.client("sts", aws_region)
caller_identity = validate_credentials_client.get_caller_identity()
# Include the region where the caller_identity has validated the credentials
caller_identity["region"] = aws_region
except Exception as error:
logger.critical(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
sys.exit(1)
else:
return caller_identity
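A usage sketch, assuming valid AWS credentials are available in the environment:

import boto3
caller_identity = validate_aws_credentials(boto3.session.Session(), None)
# Includes "Account", "UserId" and "Arn" from STS plus the "region" key injected above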
def print_aws_credentials(audit_info: AWS_Audit_Info):
# Beautify audited regions, set "all" if there is no filter region
regions = (
", ".join(audit_info.audited_regions)
if audit_info.audited_regions is not None
else "all"
)
# Beautify audited profile, set "default" if there is no profile set
profile = audit_info.profile if audit_info.profile is not None else "default"
report = f"""
This report is being generated using the credentials below:
AWS-CLI Profile: {Fore.YELLOW}[{profile}]{Style.RESET_ALL} AWS Filter Region: {Fore.YELLOW}[{regions}]{Style.RESET_ALL}
AWS Account: {Fore.YELLOW}[{audit_info.audited_account}]{Style.RESET_ALL} UserId: {Fore.YELLOW}[{audit_info.audited_user_id}]{Style.RESET_ALL}
Caller Identity ARN: {Fore.YELLOW}[{audit_info.audited_identity_arn}]{Style.RESET_ALL}
"""
# If -A is set, print Assumed Role ARN
if audit_info.assumed_role_info.role_arn is not None:
report += f"""Assumed Role ARN: {Fore.YELLOW}[{audit_info.assumed_role_info.role_arn}]{Style.RESET_ALL}
"""
print(report)

View File

@@ -0,0 +1,40 @@
import sys
from boto3 import client
from prowler.lib.logger import logger
from prowler.providers.aws.lib.audit_info.models import AWS_Organizations_Info
def get_organizations_metadata(
metadata_account: str, assumed_credentials: dict
) -> AWS_Organizations_Info:
try:
organizations_client = client(
"organizations",
aws_access_key_id=assumed_credentials["Credentials"]["AccessKeyId"],
aws_secret_access_key=assumed_credentials["Credentials"]["SecretAccessKey"],
aws_session_token=assumed_credentials["Credentials"]["SessionToken"],
)
organizations_metadata = organizations_client.describe_account(
AccountId=metadata_account
)
list_tags_for_resource = organizations_client.list_tags_for_resource(
ResourceId=metadata_account
)
except Exception as error:
logger.critical(f"{error.__class__.__name__} -- {error}")
sys.exit(1)
else:
# Convert the Tags list to a string
account_details_tags = ""
for tag in list_tags_for_resource["Tags"]:
account_details_tags += tag["Key"] + ":" + tag["Value"] + ","
organizations_info = AWS_Organizations_Info(
account_details_email=organizations_metadata["Account"]["Email"],
account_details_name=organizations_metadata["Account"]["Name"],
account_details_arn=organizations_metadata["Account"]["Arn"],
account_details_org=organizations_metadata["Account"]["Arn"].split("/")[1],
account_details_tags=account_details_tags,
)
return organizations_info
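A sketch of the tag flattening above (illustrative tags); note the trailing comma the loop leaves behind:

tags = [{"Key": "env", "Value": "prod"}, {"Key": "team", "Value": "sec"}]
account_details_tags = ""
for tag in tags:
    account_details_tags += tag["Key"] + ":" + tag["Value"] + ","
# -> "env:prod,team:sec,"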

Some files were not shown because too many files have changed in this diff