Compare commits

850 Commits
3.0.2...3.9.0

Author SHA1 Message Date
github-actions
38bc336771 chore(release): 3.9.0 2023-08-25 10:43:41 +00:00
Nacho Rivera
276f6f9fb1 fix(ec2_securitygroup_default_restrict_traffic): fix check only allow empty rules (#2777) 2023-08-25 12:42:26 +02:00
Sergio Garcia
2386c71c4f chore(regions_update): Changes in regions for AWS services. (#2776)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-25 11:24:43 +02:00
Pepe Fagoaga
21c52db66b test(vpc_endpoint_services_allowed_principals_trust_boundaries) (#2768) 2023-08-25 10:56:47 +02:00
Pepe Fagoaga
13cfa02f80 fix(test): Update moto to 4.1.15 and update tests (#2769) 2023-08-25 10:56:39 +02:00
Pepe Fagoaga
eedfbe3e7a fix(iam_policy_allows_privilege_escalation): Not use search for checking API actions (#2772) 2023-08-25 10:56:28 +02:00
Pepe Fagoaga
fe03eb4436 docs: explain output formats (#2774) 2023-08-25 10:56:15 +02:00
Pepe Fagoaga
d8e45d5c3f docs: Include new config ecr_repository_vulnerability_minimum_severity (#2775) 2023-08-25 10:56:04 +02:00
Sergio Garcia
12e9fb5eeb chore(regions_update): Changes in regions for AWS services. (#2773)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-24 12:07:05 +02:00
gerardocampo
957ffaabae feat(compliance): Update AWS compliance frameworks after PR 2750 (#2771)
Co-authored-by: Gerard Ocampo <gerard.ocampo@zelis.com>
2023-08-24 08:01:00 +02:00
Pepe Fagoaga
cb76e5a23c chore(s3): Move lib to the AWS provider and include tests (#2664) 2023-08-23 16:12:48 +02:00
Sergio Garcia
b17cc563ff chore(regions_update): Changes in regions for AWS services. (#2767)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-23 11:29:12 +02:00
Pepe Fagoaga
06a0b12efb fix(iam_policy_allows_privilege_escalation): Handle admin permission so * (#2763) 2023-08-23 10:40:06 +02:00
Pepe Fagoaga
d5bd5ebb7d chore(parser): Move provider logic to their folder (#2746) 2023-08-23 10:33:36 +02:00
Nacho Rivera
0a9a1c26db fix(get_regions_from_audit_resources): fix logic and add tests (#2766) 2023-08-23 10:20:12 +02:00
Nacho Rivera
83bfd8a2d4 fix(get_checks_from_input_arn): fix logic and add tests (#2764) 2023-08-23 09:35:42 +02:00
gerardocampo
e5d2c0c700 feat(iam): Check inline policies in IAM Users, Groups & Roles for admin priv's (#2750)
Co-authored-by: Gerard Ocampo <gerard.ocampo@zelis.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-08-23 08:29:13 +02:00
Pepe Fagoaga
590a5669d6 fix(nacls): Tests (#2760) 2023-08-22 22:26:19 +02:00
Sergio Garcia
e042740f67 chore(regions_update): Changes in regions for AWS services. (#2759)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-22 11:43:58 +02:00
dependabot[bot]
dab2ecaa6b build(deps): bump shodan from 1.29.1 to 1.30.0 (#2754)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-22 09:16:08 +02:00
dependabot[bot]
f9f4133b48 build(deps): bump azure-mgmt-storage from 21.0.0 to 21.1.0 (#2756)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-22 08:49:06 +02:00
dependabot[bot]
33dd21897d build(deps-dev): bump pytest-randomly from 3.13.0 to 3.15.0 (#2755)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-22 08:30:07 +02:00
Geoff Singer
cb2ef23a29 feat(s3): Add S3 KMS encryption check (#2757)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-08-22 08:28:17 +02:00
dependabot[bot]
e70e01196f build(deps): bump google-api-python-client from 2.96.0 to 2.97.0 (#2753)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-22 08:08:13 +02:00
dependabot[bot]
f70b9e6eb4 build(deps): bump mkdocs-material from 9.1.21 to 9.2.1 (#2752)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-22 07:39:45 +02:00
Chris Farris
d186c69473 feat(checks): dump all checks as a json file (#2683)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-08-21 17:35:31 +02:00
Nacho Rivera
4d817c48a8 fix(get_checks_from_input_arn): fix function and add tests (#2749) 2023-08-21 13:23:43 +02:00
Pepe Fagoaga
c13cab792b docs(testing): Mocking the service and the service client at the service client level (#2747) 2023-08-21 09:05:57 +02:00
Pepe Fagoaga
80aa463aa2 fix(checks_to_execute): --checks and --resource_arn working together (#2743) 2023-08-21 09:04:15 +02:00
Sergio Garcia
bd28b17ad9 chore(regions_update): Changes in regions for AWS services. (#2748)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-21 08:15:25 +02:00
Sergio Garcia
223119e303 chore(regions_update): Changes in regions for AWS services. (#2744)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-18 12:38:17 +02:00
Pepe Fagoaga
7c45cb45ae feat(ecr_repositories_scan_vulnerabilities_in_latest_image): Minimum severity is configurable (#2736) 2023-08-18 09:17:02 +02:00
Pepe Fagoaga
ac11c6729b chore(tests): Replace sure with standard assert (#2738) 2023-08-17 11:36:45 +02:00
Pepe Fagoaga
1677654dea docs(audit_config): How to use it (#2739) 2023-08-17 11:36:32 +02:00
Pepe Fagoaga
bc5a7a961b tests(check_security_group) (#2740) 2023-08-17 11:36:17 +02:00
Sergio Garcia
c10462223d chore(regions_update): Changes in regions for AWS services. (#2741)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-17 11:31:31 +02:00
vysakh-devopspace
54a9f412e8 feat(ec2): New check ec2_instance_detailed_monitoring_enabled (#2735)
Co-authored-by: Vysakh <venugopal.vysakh@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-08-16 14:31:06 +02:00
Sergio Garcia
5a107c58bb chore(regions_update): Changes in regions for AWS services. (#2737)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-16 11:42:47 +02:00
Pepe Fagoaga
8f091e7548 fix(gcp): Status extended ends with a dot (#2734) 2023-08-16 10:14:41 +02:00
Pepe Fagoaga
8cdc7b18c7 fix(test-vpc): use the right import paths (#2732) 2023-08-16 09:17:18 +02:00
christiandavilakoobin
9f2e87e9fb fix(is_account_only_allowed_in_condition): Context name on conditions are case-insensitive (#2726) 2023-08-16 08:27:24 +02:00
Sergio Garcia
e119458048 chore(regions_update): Changes in regions for AWS services. (#2733)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-15 16:25:17 +02:00
dependabot[bot]
c2983faf1d build(deps): bump azure-identity from 1.13.0 to 1.14.0 (#2731)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 10:34:56 +02:00
dependabot[bot]
a09855207e build(deps-dev): bump coverage from 7.2.7 to 7.3.0 (#2730)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 09:50:18 +02:00
Pepe Fagoaga
1e1859ba6f docs(style): Add more details (#2724) 2023-08-15 09:26:48 +02:00
dependabot[bot]
a3937e48a8 build(deps): bump google-api-python-client from 2.95.0 to 2.96.0 (#2729)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 09:22:59 +02:00
dependabot[bot]
d2aa53a2ec build(deps): bump mkdocs-material from 9.1.20 to 9.1.21 (#2728)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-15 08:57:24 +02:00
dependabot[bot]
b0bdeea60f build(deps-dev): bump vulture from 2.7 to 2.8 (#2727) 2023-08-15 08:33:27 +02:00
Pepe Fagoaga
465e64b9ac fix(azure): Status extended ends with a dot (#2725) 2023-08-14 21:48:16 +02:00
Pepe Fagoaga
fc53b28997 test(s3): Mock S3Control when used (#2722) 2023-08-14 21:48:05 +02:00
Pepe Fagoaga
72e701a4b5 fix(security): GitPython issue (#2720) 2023-08-14 21:09:12 +02:00
Pepe Fagoaga
2298d5356d test(coverage): Add Codecov (#2719) 2023-08-14 21:08:45 +02:00
Pepe Fagoaga
54137be92b test(python): 3.9, 3.10, 3.11 (#2718) 2023-08-14 21:08:29 +02:00
Sergio Garcia
7ffb12268d chore(release): update Prowler Version to 3.8.2 (#2721)
Co-authored-by: github-actions <noreply@github.com>
2023-08-14 09:18:23 +02:00
Sergio Garcia
790fff460a chore(regions_update): Changes in regions for AWS services. (#2717)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-14 08:13:10 +02:00
Chris Farris
9055dbafe3 fix(s3_bucket_policy_public_write_access): look at account and bucket-level public access block settings (#2715) 2023-08-12 01:46:24 +02:00
Pepe Fagoaga
4454d9115e chore(aws): 2nd round - Improve tests and include dot in status extended (#2714) 2023-08-12 01:41:35 +02:00
Sergio Garcia
0d74dec446 chore(regions_update): Changes in regions for AWS services. (#2712)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-11 11:18:18 +02:00
Pepe Fagoaga
0313dba7b4 chore(aws): Improve tests and status from accessanalyzer to cloudwatch (#2711) 2023-08-11 11:04:04 +02:00
Pepe Fagoaga
3fafac75ef docs(dev-guide): Fix a list and include some details to use the report (#2710) 2023-08-11 11:01:58 +02:00
Sergio Garcia
6b24b46f3d fix(security-hub): handle default output filename error (#2709) 2023-08-11 09:12:25 +02:00
Pepe Fagoaga
474e39a4c9 docs(developer-guide): Update checks, services and include testing (#2705)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-08-10 17:28:35 +02:00
Sergio Garcia
e652298b6a chore(release): update Prowler Version to 3.8.1 (#2706)
Co-authored-by: github-actions <noreply@github.com>
2023-08-10 14:08:48 +02:00
Pepe Fagoaga
9340ae43f3 fix(ds): Restore enums without optional (#2704) 2023-08-10 13:43:31 +02:00
Sergio Garcia
552024c53e fix(Enum): handle Enum classes correctly (#2702) 2023-08-10 13:21:24 +02:00
Pepe Fagoaga
3aba71ad2f docs(aws-orgs): Update syntax (#2703) 2023-08-10 12:40:17 +02:00
christiandavilakoobin
ade511df28 fix(sns): allow default SNS policy with SourceOwner (#2698)
Co-authored-by: Azure Pipeplines CI <monitor@koobin.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-08-10 12:13:57 +02:00
Sergio Garcia
fc650214d4 fix(security hub): include custom output filename in resolve_security_hub_previous_findings (#2687) 2023-08-10 12:11:10 +02:00
Sergio Garcia
8266fd0c6f chore(print): prettify prints of listings and logs (#2699) 2023-08-10 12:08:07 +02:00
Pepe Fagoaga
f4308032c3 fix(cloudfront): fix ViewerProtocolPolicy and GeoRestrictionType (#2701) 2023-08-10 12:02:49 +02:00
Sergio Garcia
1e1f445ade chore(regions_update): Changes in regions for AWS services. (#2700)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-10 11:29:05 +02:00
Pepe Fagoaga
d41b0332ac feat(athena): New AWS Athena service + 2 workgroup checks (#2696) 2023-08-10 10:23:17 +02:00
Pepe Fagoaga
7258466572 fix(iam): password policy expiration (#2694) 2023-08-10 10:10:20 +02:00
Pepe Fagoaga
76db92ea14 chore(service): service class type hints (#2695) 2023-08-10 10:01:54 +02:00
Sergio Garcia
ad3cd66e08 docs(organizations): fix script and improve titles (#2693) 2023-08-10 09:56:47 +02:00
Sergio Garcia
22f8855ad7 chore(regions_update): Changes in regions for AWS services. (#2692)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-09 11:23:28 +02:00
Sergio Garcia
36e095c830 fix(iam_role_cross_service_confused_deputy_prevention): add ResourceAccount and PrincipalAccount conditions (#2689) 2023-08-09 10:41:48 +02:00
Sergio Garcia
887cac1264 fix(typo): spelling typo in organizations_scp_check_deny_regions (#2691) 2023-08-09 10:24:29 +02:00
Pepe Fagoaga
13059e0568 fix(ec2-securitygroups): Handle IPv6 public (#2690) 2023-08-09 10:08:30 +02:00
Pepe Fagoaga
9e8023d716 fix(config): Pass a configuration file using --config-file config.yaml (#2679) 2023-08-09 09:52:45 +02:00
Sergio Garcia
c54ba5fd8c chore(regions_update): Changes in regions for AWS services. (#2688)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-09 09:34:52 +02:00
dependabot[bot]
db80e063d4 build(deps-dev): bump pylint from 2.17.4 to 2.17.5 (#2685)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-08 10:48:42 +02:00
dependabot[bot]
b6aa12706a build(deps): bump mkdocs from 1.4.3 to 1.5.2 (#2684)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-08 10:22:20 +02:00
Chris Farris
c1caf6717d fix(organizations): request Organization Info after assume_role occurs (#2682)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-08-07 15:17:05 +02:00
Pepe Fagoaga
513fd9f532 fix(iam-dynamodb): Handle errors (#2680) 2023-08-07 10:04:19 +02:00
Pepe Fagoaga
bf77f817cb chore(azure): Improve AzureService class with __set_clients__ (#2676) 2023-08-04 13:04:05 +02:00
Sergio Garcia
e0bfef2ece chore(regions_update): Changes in regions for AWS services. (#2677)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-04 12:10:19 +02:00
Sergio Garcia
4a87f908a8 chore(release): update Prowler Version to 3.8.0 (#2674)
Co-authored-by: github-actions <noreply@github.com>
2023-08-03 18:34:23 +02:00
Sergio Garcia
16d95e5155 chore(readme): update providers summary table (#2673) 2023-08-03 16:45:09 +02:00
Pepe Fagoaga
1797b54259 test(azure): Storage Service (#2672) 2023-08-03 15:07:17 +02:00
Pepe Fagoaga
f289c8fb2e test(azure): SQL Server Service (#2671) 2023-08-03 14:43:18 +02:00
Pepe Fagoaga
e4ad881a69 test(azure): IAM service (#2670) 2023-08-03 14:15:34 +02:00
Pepe Fagoaga
138bca38e7 test(azure): Defender service (#2669) 2023-08-03 13:52:55 +02:00
edurra
44f7af3580 feat(azure): add Azure SQL Server service and 3 checks (#2665)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-08-03 11:29:17 +02:00
Sergio Garcia
2d832bca15 feat(gcp): Improve gcp performance (#2662) 2023-08-03 10:52:52 +02:00
Pepe Fagoaga
efa75a62e3 fix(iam_policy_allows_privilege_escalation): Handle permissions in groups (#2655) 2023-08-03 10:40:51 +02:00
Pepe Fagoaga
5763bca317 refactor(vpc_endpoint_connections_trust_boundaries) (#2667) 2023-08-03 09:56:09 +02:00
Pepe Fagoaga
c335334402 fix(test_only_aws_service_linked_roles): Flaky test (#2666) 2023-08-03 09:18:06 +02:00
Pepe Fagoaga
5bf3f70717 fix(vpc_endpoint_connections_trust_boundaries): Handle AWS Account ID as Principal (#2611) 2023-08-03 09:16:58 +02:00
Pepe Fagoaga
92c8a440ea feat(gcp): Add internet-exposed and encryption categories (#2663) 2023-08-02 15:53:12 +02:00
Pepe Fagoaga
b92d8a014c fix(cryptography): Update to 41.0.3 (#2661) 2023-08-02 11:47:51 +02:00
Sergio Garcia
aced44f051 fix(sns): handle topic policy conditions (#2660) 2023-08-02 11:45:27 +02:00
Sergio Garcia
49c9d2b077 chore(regions_update): Changes in regions for AWS services. (#2658) 2023-08-02 11:32:11 +02:00
Pepe Fagoaga
61beacf085 fix(docs): Azure auth and Slack integration (#2659) 2023-08-02 11:18:45 +02:00
Pepe Fagoaga
02f432238e fix(outputs): Not use reserved keyword list as variable (#2657) 2023-08-02 09:00:04 +02:00
Sergio Garcia
864d178e01 chore(regions_update): Changes in regions for AWS services. (#2654)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-08-01 11:52:02 +02:00
Sergio Garcia
78f0b823a9 fix(s3_bucket_level_public_access_block): check s3 public access block at account level (#2653) 2023-08-01 11:24:58 +02:00
dependabot[bot]
26cdc7a0ee build(deps-dev): bump flake8 from 6.0.0 to 6.1.0 (#2651)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-08-01 10:59:58 +02:00
dependabot[bot]
5e773f1eee build(deps): bump azure-mgmt-authorization from 3.0.0 to 4.0.0 (#2652)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-01 10:18:56 +02:00
dependabot[bot]
4a7ac7df22 build(deps-dev): bump moto from 4.1.13 to 4.1.14 (#2650)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-01 10:03:03 +02:00
dependabot[bot]
5250670d5d build(deps): bump google-api-python-client from 2.94.0 to 2.95.0 (#2649)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-01 09:49:51 +02:00
Gabriel Pragin
de4a825db8 fix(metadata): Typos (#2646) 2023-08-01 09:07:23 +02:00
dependabot[bot]
c256419144 build(deps): bump mkdocs-material from 9.1.19 to 9.1.20 (#2648) 2023-08-01 08:58:32 +02:00
Pepe Fagoaga
7bdca0420e fix(cloudtrail): Set status to INFO when trail is outside the audited account (#2643) 2023-07-31 17:50:21 +02:00
Pepe Fagoaga
3aa1fbced9 feat(azure_service): New parent class (#2642) 2023-07-31 16:03:49 +02:00
Pepe Fagoaga
dbbb70027a feat(gcp_service): Parent class (#2641) 2023-07-31 15:01:25 +02:00
Pepe Fagoaga
b4e78d28f8 fix(test): mock VPC client (#2640) 2023-07-31 11:19:15 +02:00
Pepe Fagoaga
e3d4e38a59 feat(aws): New AWSService class as parent (#2638) 2023-07-31 11:18:54 +02:00
Pepe Fagoaga
386f558eae fix(ec2_instance_secrets_user_data): Include line numbers in status (#2639) 2023-07-31 10:33:34 +02:00
Sergio Garcia
e08424d3a3 chore(regions_update): Changes in regions for AWS services. (#2637)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-31 09:54:44 +02:00
Chris Farris
03ad403e7a feat(s3): Add checks for publicly listable Buckets or writable buckets by ACL (#2628)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-07-31 08:35:18 +02:00
Sergio Garcia
4a674aae99 chore(regions_update): Changes in regions for AWS services. (#2634)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-28 11:34:30 +02:00
Pepe Fagoaga
8ee3744027 chore(security-hub): Explain Unique ID (#2631) 2023-07-27 13:39:12 +02:00
Gabriel Pragin
965327e801 chore(typos): Update check's status (#2629)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-07-27 11:44:09 +02:00
Sergio Garcia
f82ea43324 chore(regions_update): Changes in regions for AWS services. (#2630)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-27 11:31:45 +02:00
Pepe Fagoaga
a5c63845b4 test: security groups (#2627) 2023-07-26 16:29:27 +02:00
Sergio Garcia
034faa72cf chore(release): update Prowler Version to 3.7.2 (#2625)
Co-authored-by: github-actions <noreply@github.com>
2023-07-26 13:37:31 +02:00
Sergio Garcia
9bcd617964 chore(ec2): add SG name to resource_details (#2495) 2023-07-26 13:12:36 +02:00
Sergio Garcia
0db975dc7b fix(pypi-release): solve GH action for release (#2624) 2023-07-26 13:03:34 +02:00
Pepe Fagoaga
a51fa7703b fix(security): certifi issue (#2623) 2023-07-26 12:45:07 +02:00
Sergio Garcia
69fad0009d fix(ec2_ami_public): correct check metadata and logic (#2618) 2023-07-26 10:34:04 +02:00
Sergio Garcia
e721251936 fix(compute): solve key errors in compute service (#2610) 2023-07-26 08:49:09 +02:00
Pepe Fagoaga
2fe767e3e5 fix(ecs_task_def_secrets): Improve description to explain findings (#2621) 2023-07-25 18:26:22 +02:00
Sergio Garcia
6328ef4444 fix(guardduty): handle disabled detectors in guardduty_is_enabled (#2616) 2023-07-25 12:26:37 +02:00
dependabot[bot]
50b8e084e7 build(deps): bump google-api-python-client from 2.93.0 to 2.94.0 (#2614)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-25 09:37:10 +02:00
dependabot[bot]
3d88544feb build(deps): bump mkdocs-material from 9.1.18 to 9.1.19 (#2615)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-25 09:10:01 +02:00
dependabot[bot]
62e602c32e build(deps): bump pydantic from 1.10.11 to 1.10.12 (#2613)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-25 08:55:43 +02:00
Pepe Fagoaga
47a82560ea fix(s3): __get_object_lock_configuration__ warning logs (#2608) 2023-07-24 10:49:50 +02:00
Pepe Fagoaga
f7bbcc98b3 docs(boto3-configuration): format list (#2609) 2023-07-24 10:47:55 +02:00
Sergio Garcia
98a587aa15 chore(regions_update): Changes in regions for AWS services. (#2606)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-23 18:30:30 +02:00
Sergio Garcia
d2e34c42fd chore(regions_update): Changes in regions for AWS services. (#2599) 2023-07-18 17:38:43 +02:00
dependabot[bot]
605b07901e build(deps): bump google-api-python-client from 2.92.0 to 2.93.0 (#2597)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-18 10:52:26 +02:00
dependabot[bot]
18f02fac68 build(deps-dev): bump moto from 4.1.12 to 4.1.13 (#2598)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-07-18 10:37:34 +02:00
Pepe Fagoaga
28ea37f367 test(aws_provider): Role and User MFA (#2486) 2023-07-18 09:36:37 +02:00
Gabriel Pragin
65a737bb58 chore(metadata): Typos (#2595)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-07-18 09:27:58 +02:00
dependabot[bot]
7423cd2f93 build(deps): bump azure-storage-blob from 12.16.0 to 12.17.0 (#2596)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-18 09:25:51 +02:00
Gabriel Pragin
babd026351 chore(metadata): Typos (#2594) 2023-07-17 22:28:24 +02:00
Sergio Garcia
dd6e5a9029 fix(security): solve dependabot security alert (#2592) 2023-07-17 12:03:35 +02:00
Pepe Fagoaga
02519a4429 fix(assume_role): Set the AWS STS endpoint region (#2587) 2023-07-17 10:09:48 +02:00
Pepe Fagoaga
6575121b7a fix(ssm_incidents): Handle empty name (#2591) 2023-07-17 09:20:44 +02:00
Pepe Fagoaga
5b66368f0d fix(opensearch): log exception as WARNING (#2581) 2023-07-17 09:18:42 +02:00
Sergio Garcia
971c6720e4 chore(regions_update): Changes in regions for AWS services. (#2590) 2023-07-16 21:56:21 +02:00
Sergio Garcia
3afccc279f chore(regions_update): Changes in regions for AWS services. (#2588)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-14 11:34:21 +02:00
Nacho Rivera
8f015d0672 fix(allowlist): single account checks handling (#2585)
Co-authored-by: thomscode <thomscode@gmail.com>
2023-07-14 09:55:27 +02:00
Pepe Fagoaga
f33b96861c release: v3.7.1 (#2578) 2023-07-13 16:48:18 +02:00
Sergio Garcia
9832ce2ff9 chore(regions_update): Changes in regions for AWS services. (#2580)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-13 12:34:16 +02:00
Kay Agahd
490cbbaa48 docs: typos in README.md (#2579) 2023-07-13 07:34:27 +02:00
Nacho Rivera
d1c91093e2 feat(cond parser): add policy cond parser & apply in sqs public check (#2575) 2023-07-12 15:39:01 +02:00
Nacho Rivera
66fe101ccd fix(allowlist): handle wildcard in account field (#2577) 2023-07-12 14:22:42 +02:00
Pepe Fagoaga
7ab8c6b154 fix(iam): Handle NoSuchEntityException when calling list_attached_role_policies (#2571) 2023-07-12 12:48:57 +02:00
Sergio Garcia
73017b14c3 chore(regions_update): Changes in regions for AWS services. (#2574)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-12 11:17:00 +02:00
Sergio Garcia
f55495cd6a chore(regions_update): Changes in regions for AWS services. (#2572)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-11 11:45:43 +02:00
dependabot[bot]
e97146b5a3 build(deps): bump google-api-python-client from 2.91.0 to 2.92.0 (#2570)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-11 11:45:21 +02:00
dependabot[bot]
58f056c76d build(deps-dev): bump openapi-spec-validator from 0.5.7 to 0.6.0 (#2569)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-11 11:16:23 +02:00
dependabot[bot]
338bbc7a1f build(deps): bump pydantic from 1.10.9 to 1.10.11 (#2568)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-11 09:59:01 +02:00
dependabot[bot]
4ba54738a9 build(deps): bump boto3 from 1.26.161 to 1.26.165 (#2566)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-07-11 09:37:29 +02:00
Toni de la Fuente
235fd2adc4 docs: Update Compliance in README (#2563) 2023-07-11 09:12:11 +02:00
Toni de la Fuente
b15d518c94 feat(compliance): CIS Benchmark 2.0 for AWS (#2562) 2023-07-11 09:12:03 +02:00
dependabot[bot]
021e1c122c build(deps-dev): bump pytest-randomly from 3.12.0 to 3.13.0 (#2567)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-11 09:07:05 +02:00
Sergio Garcia
014b0dd6f6 chore(regions_update): Changes in regions for AWS services. (#2561)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-10 08:28:09 +02:00
Sergio Garcia
f9f68f9b86 chore(regions_update): Changes in regions for AWS services. (#2560)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-07 11:34:53 +02:00
Pepe Fagoaga
11a8ba131a test(outputs): Remove debug (#2559) 2023-07-07 10:14:47 +02:00
Sergio Garcia
858de64f8e chore(release): version 3.7.0 (#2558) 2023-07-06 21:17:21 +02:00
Sergio Garcia
676e60afb7 feat(gcp): add CIS checks (#2544) 2023-07-06 17:01:56 +02:00
Nacho Rivera
b1968f3f8b fix(allowlist): reformat allowlist logic (#2555)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-07-06 15:33:32 +02:00
Sergio Garcia
d2d077afaa chore(regions_update): Changes in regions for AWS services. (#2557)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-06 11:29:50 +02:00
Nacho Rivera
7097ca401d feat(lambda allowlist): mapping lambda/awslambda in allowlist (#2554) 2023-07-05 11:49:42 +02:00
Antoine Cichowicz
73e9a1eb9e docs: Update Amazon Linux 2 installation (#2553) 2023-07-05 07:54:18 +02:00
Nacho Rivera
0439d455fb fix(reporting docs): fix S3 reporting desc (#2551) 2023-07-04 12:43:39 +02:00
Sergio Garcia
d57f665a78 docs(allowlist): update DynamoDB allowlist example (#2552)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-07-04 11:55:33 +02:00
dependabot[bot]
859c731a13 build(deps): bump google-api-python-client from 2.90.0 to 2.91.0 (#2548)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-04 11:08:13 +02:00
Sergio Garcia
2e7613ddec docs(OCSF): add docs for OCSF output (#2550) 2023-07-04 10:37:42 +02:00
dependabot[bot]
57e9436783 build(deps): bump botocore from 1.29.161 to 1.29.165 (#2547)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-04 10:23:03 +02:00
dependabot[bot]
2f153fda2e build(deps): bump mkdocs-material from 9.1.17 to 9.1.18 (#2546)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-04 09:02:25 +02:00
dependabot[bot]
cbcb5905a3 build(deps): bump boto3 from 1.26.156 to 1.26.161 (#2545)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-04 08:46:49 +02:00
Sergio Garcia
6a2fb37615 fix(bigquery_dataset_public_access): handle status correctly (#2542) 2023-07-03 13:01:51 +02:00
Nacho Rivera
6403feaff9 fix(cloudwatch secrets): fix nonetype error handling (#2543) 2023-07-03 12:52:46 +02:00
Sergio Garcia
47736910ca fix(list-checks): handle listing checks when -s (#2540) 2023-07-03 11:48:40 +02:00
Sergio Garcia
ead592a0bf chore(regions_update): Changes in regions for AWS services. (#2539)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-07-03 11:22:43 +02:00
Nacho Rivera
d5bdba9244 feat(lambda service): mapping lambda service to awslambda (#2538) 2023-07-03 11:19:02 +02:00
Sergio Garcia
4f033cec8d feat(MITRE): add MITRE ATT&CK framework for AWS (#2537) 2023-06-30 12:24:05 +02:00
sssalim-aws
a58f4b2498 feat(compliance): AWS Well-Architected Framework Reliability Pillar v0.1 (#2536)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-06-29 11:13:38 +02:00
Sergio Garcia
01522ed8c7 feat(ENS): complete ENS Compliance Framework mapping (#2534) 2023-06-27 15:22:25 +02:00
Sergio Garcia
fa99ee9d5b feat(allowlist): add exceptions to allowlist (#2527) 2023-06-27 12:57:18 +02:00
Sergio Garcia
6efe634850 fix(iam): add StringLike condition in iam_role_cross_service_confused_deputy_prevention (#2533) 2023-06-27 10:06:46 +02:00
dependabot[bot]
60a1497eaf build(deps-dev): bump moto from 4.1.11 to 4.1.12 (#2530)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-27 09:07:44 +02:00
dependabot[bot]
1d0cbc08df build(deps): bump google-api-python-client from 2.89.0 to 2.90.0 (#2531)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-27 08:36:41 +02:00
dependabot[bot]
4d4280033b build(deps-dev): bump pytest from 7.3.2 to 7.4.0 (#2532)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-27 07:55:26 +02:00
dependabot[bot]
fd58775cae build(deps): bump mkdocs-material from 9.1.16 to 9.1.17 (#2529)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-27 07:39:58 +02:00
dependabot[bot]
ccb0e93da2 build(deps): bump botocore from 1.29.156 to 1.29.161 (#2528)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-27 07:19:22 +02:00
Sergio Garcia
c2a05da908 chore(ec2): reduce noise in Security Groups checks (#2525) 2023-06-23 15:06:09 +02:00
Sergio Garcia
e1da9e60fc chore(region): add get_default_region function in AWS Services (#2524) 2023-06-23 14:10:49 +02:00
Sergio Garcia
d044e535e0 fix(compliance): add version to ISO27001 (#2523) 2023-06-21 17:04:08 +02:00
Sergio Garcia
293560dcd4 fix(contrib): migrate multi-account-securityhub/run-prowler-securityhub.sh to v3 (#2503)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-06-21 15:18:02 +02:00
Sergio Garcia
90ebb815d5 fix(security hub): solve Security Hub format requirements (#2520) 2023-06-21 13:04:14 +02:00
Sergio Garcia
3d3d418ee6 chore(regions_update): Changes in regions for AWS services. (#2522)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-06-21 11:32:35 +02:00
Pedro Martín
f875cd05be feat(compliance): add ISO27001 compliance framework (#2517)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-06-20 16:57:28 +02:00
Sergio Garcia
435911489f fix(gcp): update Prowler SDK info of GCP (#2515) 2023-06-20 14:32:24 +02:00
Sergio Garcia
5fcfcd53aa fix(compliance): remove unnecessary Optional attributes (#2514) 2023-06-20 14:22:13 +02:00
dependabot[bot]
bc09215aad build(deps): bump boto3 from 1.26.147 to 1.26.156 (#2511)
Signed-off-by: dependabot[bot] <support@github.com>
2023-06-20 10:36:53 +02:00
dependabot[bot]
5f7e109e3d build(deps-dev): bump openapi-spec-validator from 0.5.6 to 0.5.7 (#2507)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-20 09:44:30 +02:00
Nacho Rivera
b75a5050d7 fix(apigw): Update metadata for API GW checks (#2512) 2023-06-20 09:22:00 +02:00
dependabot[bot]
be497f7083 build(deps): bump google-api-python-client from 2.88.0 to 2.89.0 (#2510) 2023-06-20 08:40:41 +02:00
dependabot[bot]
0ccae3e15b build(deps): bump mkdocs-material from 9.1.15 to 9.1.16 (#2508)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-20 08:08:17 +02:00
dependabot[bot]
d736c32aec build(deps): bump botocore from 1.29.152 to 1.29.156 (#2506)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-20 07:41:30 +02:00
Sergio Garcia
8ea5ba5d3f chore(OCSF): improve OCSF logic (#2502) 2023-06-19 12:37:04 +02:00
Nacho Rivera
60c341befd fix(vpc): handle ephemeral VPC endpoint services (#2501) 2023-06-19 12:23:52 +02:00
Sergio Garcia
be4f58ed8f chore(regions_update): Changes in regions for AWS services. (#2500) 2023-06-19 07:59:42 +02:00
Sergio Garcia
d82d1abab6 chore(3.6.1): release version (#2498) 2023-06-16 12:34:17 +02:00
Sergio Garcia
0d81bd457c fix(asff): handle empty Recommendation Url (#2496)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-06-16 12:17:09 +02:00
Sergio Garcia
af2b19436f fix(route53): correct Hosted Zone ARN (#2494) 2023-06-15 16:32:54 +02:00
Sergio Garcia
51beb3c7e4 chore(regions_update): Changes in regions for AWS services. (#2497)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-06-15 15:56:23 +02:00
Chris Kelly
5061456735 fix(security hub): Adds logic to map to valid ASFF statuses (#2491) 2023-06-15 15:52:19 +02:00
Nacho Rivera
b01eb3af95 fix(rds checks): test if key exists prior checking it (#2489) 2023-06-14 12:15:33 +02:00
Sergio Garcia
328bebc168 chore(regions_update): Changes in regions for AWS services. (#2487)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-06-14 11:52:11 +02:00
Sergio Garcia
fc63fffa15 chore(release): 3.6.0 (#2485) 2023-06-13 17:38:51 +02:00
Sebastian Nyberg
707584b2ef feat(aws): Add MFA flag if try to assume role in AWS (#2478)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-06-13 17:18:10 +02:00
Nacho Rivera
561459d93b fix(dataevents checks): add trails home region (#2484) 2023-06-13 11:48:55 +02:00
Sergio Garcia
25e48ae546 chore(arn): include ARN of AWS accounts (#2477)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-06-13 10:18:23 +02:00
dependabot[bot]
513bb3e8d0 build(deps): bump botocore from 1.29.147 to 1.29.152 (#2482)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-13 10:07:57 +02:00
dependabot[bot]
04710ca908 build(deps): bump google-api-python-client from 2.86.0 to 2.88.0 (#2483)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-13 09:50:10 +02:00
dependabot[bot]
fcf0fcf20c build(deps): bump pydantic from 1.10.8 to 1.10.9 (#2481)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-13 09:06:59 +02:00
dependabot[bot]
2ff40d8e37 build(deps): bump boto3 from 1.26.142 to 1.26.147 (#2480)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-13 08:11:54 +02:00
dependabot[bot]
1bab5b06a4 build(deps-dev): bump pytest from 7.3.1 to 7.3.2 (#2479)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-13 07:50:41 +02:00
Sergio Garcia
01cd4bcb47 chore(arn): add missing ARNs to AWS Services (#2476) 2023-06-12 13:33:12 +02:00
Sebastian Nyberg
49b2a559ae feat(vpc): add check vpc_subnet_no_public_ip_by_default (#2472)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-06-12 09:44:10 +02:00
Sergio Garcia
9212d24685 chore(regions_update): Changes in regions for AWS services. (#2474)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-06-12 08:48:44 +02:00
Nacho Rivera
eb43b11202 fix(arn validator): include : in regex (#2471) 2023-06-09 13:24:29 +02:00
Sergio Garcia
5c4cae8c9d feat(wellarchitected): add WellArchitected service and check (#2461) 2023-06-09 13:19:01 +02:00
Sergio Garcia
cfd7099743 chore(regions_update): Changes in regions for AWS services. (#2469)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-06-09 13:09:30 +02:00
Sergio Garcia
19ae237d29 chore(regions_update): Changes in regions for AWS services. (#2462)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-06-09 13:09:01 +02:00
Sergio Garcia
9cda78e561 chore(docs): improve allowlist suggestion (#2466) 2023-06-09 13:07:28 +02:00
Sergio Garcia
cc31872a7f fix(kms): check only KMS CMK tags (#2468) 2023-06-09 13:06:06 +02:00
Sebastian Nyberg
3c2c896708 chore(vpc): add mapPublicIpOnLaunch attribute to VPC subnets (#2470) 2023-06-09 12:45:28 +02:00
Jit
b73da9c54c feat(gcp): add 12 new checks for CIS Framework (#2426)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-06-08 11:25:51 +02:00
Sergio Garcia
414a45bfb0 chore(quick inventory): add warning message (#2460) 2023-06-07 15:16:52 +02:00
Sergio Garcia
2a6f808bca chore(boto3): update boto3 config (#2459) 2023-06-07 14:32:40 +02:00
Sergio Garcia
cdf2a13bbd feat(oscf): add OCSF format as JSON output for AWS, Azure and GCP. Hello Amazon Security Lake! (#2429) 2023-06-07 14:28:43 +02:00
Sergio Garcia
3e3e8a14ee fix(inventory): handle exception for every call (#2457) 2023-06-07 09:33:10 +02:00
Nacho Rivera
37e180827a fix(azure): fix empty subscriptions case (#2455) 2023-06-06 17:31:43 +02:00
Pepe Fagoaga
b047b54545 fix(backup): Handle last_execution_date when None (#2454) 2023-06-06 16:57:17 +02:00
Pepe Fagoaga
b7bb4bbd57 fix(aws): Add missing resources ARN (#2453) 2023-06-06 16:56:59 +02:00
Pepe Fagoaga
86cf2cd233 fix(efs): Include resource ARN and handle from input (#2452) 2023-06-06 14:29:58 +02:00
Sergio Garcia
ab12c201b4 chore(docs): improve custom checks docs (#2428) 2023-06-06 11:58:20 +02:00
Sergio Garcia
a8f03d859c feat(gcp): add --project-ids flag and scan all projects by default (#2393)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-06-06 11:56:39 +02:00
Sergio Garcia
3c7580f024 fix(ec2): handle false positive in ec2_securitygroup_allow_ingress_from_internet_to_any_port (#2449) 2023-06-06 11:55:27 +02:00
Sergio Garcia
277833e388 fix(services): verify Route53 records and handle TrustedAdvisor error (#2448) 2023-06-06 11:50:44 +02:00
Sergio Garcia
eb16d7e6f9 chore(regions_update): Changes in regions for AWS services. (#2450)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-06-06 11:20:03 +02:00
Pepe Fagoaga
1418068d2b fix(services): Handle AWS service errors (#2440) 2023-06-06 09:23:03 +02:00
dependabot[bot]
774346f5f8 build(deps): bump botocore from 1.29.142 to 1.29.147 (#2447)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-06 08:38:49 +02:00
dependabot[bot]
1aab88e6ca build(deps): bump alive-progress from 3.1.1 to 3.1.4 (#2446)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-06 08:25:06 +02:00
dependabot[bot]
613f49b8bb build(deps-dev): bump docker from 6.1.2 to 6.1.3 (#2445)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-06 08:03:03 +02:00
dependabot[bot]
5c95dc6e20 build(deps): bump boto3 from 1.26.138 to 1.26.142 (#2444)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-06 07:45:14 +02:00
dependabot[bot]
cbc2713bee build(deps-dev): bump moto from 4.1.10 to 4.1.11 (#2443) 2023-06-06 07:29:25 +02:00
christiandavilakoobin
2955975793 fix(cloudfront): fix DefaultCacheConfigBehaviour enum type(#2430)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-06-05 15:48:34 +02:00
Sergio Garcia
f8299d7f40 chore(regions_update): Changes in regions for AWS services. (#2441)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-06-05 14:44:30 +02:00
Toni de la Fuente
e855d44523 docs: Create CONTRIBUTING.md (#2416)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-06-05 08:52:57 +02:00
dependabot[bot]
64e7715480 build(deps): bump cryptography from 40.0.2 to 41.0.0 (#2436)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 08:52:11 +02:00
Nacho Rivera
2e9a74f609 fix(README): add references to tenant-id when browser auth (#2439) 2023-06-05 08:39:59 +02:00
Sergio Garcia
11a1230738 chore(regions_update): Changes in regions for AWS services. (#2437)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-06-05 08:09:21 +02:00
Sergio Garcia
298373742e chore(regions_update): Changes in regions for AWS services. (#2427)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-06-02 13:32:04 +02:00
Sergio Garcia
dc7aeecd85 chore(regions_update): Changes in regions for AWS services. (#2434)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-06-02 13:24:47 +02:00
Nacho Rivera
15a7de7b24 fix(browser auth): fix browser auth in Azure to include tenant id (#2415)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-06-02 13:22:43 +02:00
sssalim-aws
714d0d4092 Update aws_well_architected_framework_security_pillar_aws.json (#2432) 2023-06-02 11:58:31 +02:00
Jenny Kim
225d7f39d1 chore(logo): Add Prowler logo in SVG format & Propose to Prowler icon design (#2423)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-06-01 12:03:49 +02:00
Sergio Garcia
0005798c83 chore(regions_update): Changes in regions for AWS services. (#2424)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-31 18:22:44 +02:00
dependabot[bot]
1d9078f9be build(deps): bump mkdocs-material from 9.1.12 to 9.1.15 (#2420)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-30 12:51:50 +02:00
dependabot[bot]
510ac7005a build(deps-dev): bump pytest-xdist from 3.3.0 to 3.3.1 (#2421)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-30 11:00:11 +02:00
dependabot[bot]
c049b968a5 build(deps): bump pydantic from 1.10.7 to 1.10.8 (#2418)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-30 10:45:13 +02:00
dependabot[bot]
858698f7cd build(deps): bump botocore from 1.29.138 to 1.29.142 (#2419)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-30 09:42:19 +02:00
dependabot[bot]
d104f6f8fc build(deps-dev): bump coverage from 7.2.5 to 7.2.7 (#2422)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-30 07:52:01 +02:00
Sergio Garcia
3ecf0d3230 chore(regions_update): Changes in regions for AWS services. (#2414) 2023-05-29 07:20:44 +02:00
Sergio Garcia
6e4131fee4 fix(ecr): handle LifecyclePolicyNotFoundException (#2411)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-05-26 17:15:49 +02:00
Sergio Garcia
41fa6bc8ed chore(regions_update): Changes in regions for AWS services. (#2413)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-26 13:02:37 +02:00
Sergio Garcia
58a29bf058 fix(codebuild): handle FAIL in codebuild_project_user_controlled_buildspec (#2410)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-05-25 13:30:01 +02:00
Sergio Garcia
7dac17de18 chore(regions_update): Changes in regions for AWS services. (#2409)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-25 11:51:32 +02:00
Toni de la Fuente
799d7de182 fix: typo in README.md (#2407)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-05-24 16:55:49 +02:00
Pedro Martín
735af02f59 feat(new_security_framework): AWS Well Architected Framework security pillar (#2382)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-24 16:38:32 +02:00
Sergio Garcia
ad3f3799fa fix(typo): typo in README.md (#2406) 2023-05-24 14:22:58 +02:00
Sergio Garcia
5f97df015e chore(release): change release version to 3.5.3 (#2405) 2023-05-24 13:56:53 +02:00
Toni de la Fuente
ff18fd2c38 chore(docs): add summary table to README.md (#2402)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-05-24 13:56:17 +02:00
Jit
3ab0cd02df feat(checks-gcp): Include 4 new checks covering GCP CIS (#2376)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-05-24 12:10:43 +02:00
Sergio Garcia
c31072f42f chore(regions_update): Changes in regions for AWS services. (#2403)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-24 11:59:15 +02:00
Sergio Garcia
c01c59023a fix(ClientError): handle ClientErrors in DynamoDB and Directory Service (#2400) 2023-05-24 11:50:08 +02:00
Sergio Garcia
4329aac377 chore(quick-inventory): send quick inventory to output bucket (#2399)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-05-24 11:48:49 +02:00
Sergio Garcia
c10b31e9d0 fix(categories): remove empty categories from metadata (#2401) 2023-05-24 10:44:51 +02:00
kij
71a789c0b4 fix(OSError): handle different OSErrors (#2398)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-23 17:16:17 +02:00
Sergio Garcia
deb9847e2b fix(route53_dangling_ip_subdomain_takeover): notify only IPs with AWS IP Ranges (#2396) 2023-05-23 16:35:13 +02:00
Pepe Fagoaga
9e9e7e1e96 fix(aws): Handle unique map keys (#2390)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-23 15:54:22 +02:00
Sergio Garcia
d34e0341e2 chore(regions_update): Changes in regions for AWS services. (#2392)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-23 12:28:38 +02:00
Sergio Garcia
aec254b05a fix(inspector2): fix active findings count (#2395) 2023-05-23 12:26:09 +02:00
dependabot[bot]
f8b420047a build(deps): bump boto3 from 1.26.125 to 1.26.138 (#2389)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-23 11:15:42 +02:00
dependabot[bot]
7e6e4c0bc6 build(deps): bump shodan from 1.29.0 to 1.29.1 (#2385)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-23 10:56:50 +02:00
dependabot[bot]
71fb59943c build(deps): bump requests from 2.30.0 to 2.31.0 (#2388)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-23 10:25:28 +02:00
dependabot[bot]
34419d0ca1 build(deps): bump azure-identity from 1.12.0 to 1.13.0 (#2386)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-23 10:22:05 +02:00
dependabot[bot]
475a36f0d7 build(deps-dev): bump moto from 4.1.9 to 4.1.10 (#2384)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-23 09:52:18 +02:00
Kevin Pullin
1234c1e7e2 fix(allowlist) - tags parameter is a string, not a list (#2375) 2023-05-23 09:51:50 +02:00
dependabot[bot]
a4a400facf build(deps): bump botocore from 1.29.134 to 1.29.138 (#2383)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-23 07:52:47 +02:00
Sergio Garcia
ed2ca4d896 chore(regions_update): Changes in regions for AWS services. (#2378)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-19 11:36:08 +02:00
Pepe Fagoaga
ce42e4d1cd fix(pypi-release): Push version change to the branch (#2374) 2023-05-18 18:46:11 +02:00
Sergio Garcia
b048128e77 chore(release): release version 3.5.2 (#2373) 2023-05-18 17:04:18 +02:00
Sergio Garcia
635c257502 fix(ssm incidents): check if service available in aws partition (#2372) 2023-05-18 16:44:52 +02:00
Pepe Fagoaga
58a38c08d7 docs: format regions-and-partitions (#2371) 2023-05-18 16:35:54 +02:00
Pepe Fagoaga
8fbee7737b fix(resource_not_found): Handle error (#2370) 2023-05-18 16:26:08 +02:00
Pepe Fagoaga
e84f5f184e fix(sts): Use the right region to validate credentials (#2349)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-18 15:51:57 +02:00
Sergio Garcia
0bd26b19d7 chore(regions_update): Changes in regions for AWS services. (#2368)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-18 11:17:28 +02:00
Sergio Garcia
64f82d5d51 chore(regions_update): Changes in regions for AWS services. (#2366)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-17 11:52:16 +02:00
Sergio Garcia
f63ff994ce fix(action): solve pypi-release action creating the release branch (#2364) 2023-05-16 13:32:46 +02:00
Sergio Garcia
a10ee43271 release: 3.5.1 (#2363) 2023-05-16 11:42:08 +02:00
Sergio Garcia
54ed29e08d fix(route53): handle empty Records in Zones (#2351) 2023-05-16 10:51:43 +02:00
dependabot[bot]
cc097e7a3f build(deps-dev): bump docker from 6.1.1 to 6.1.2 (#2360)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 09:39:24 +02:00
dependabot[bot]
5de92ada43 build(deps): bump mkdocs-material from 9.1.8 to 9.1.12 (#2359)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 09:24:39 +02:00
dependabot[bot]
0c546211cf build(deps-dev): bump pytest-xdist from 3.2.1 to 3.3.0 (#2358)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 08:09:55 +02:00
dependabot[bot]
4dc5a3a67c build(deps): bump botocore from 1.29.125 to 1.29.134 (#2357)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 07:51:19 +02:00
dependabot[bot]
c51b226ceb build(deps): bump shodan from 1.28.0 to 1.29.0 (#2356)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 07:34:51 +02:00
dependabot[bot]
0a5ca6cf74 build(deps): bump pymdown-extensions from 9.11 to 10.0 (#2355)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-16 07:33:56 +02:00
Sergio Garcia
96957219e4 chore(regions_update): Changes in regions for AWS services. (#2353)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-16 07:32:41 +02:00
Sergio Garcia
32b7620db3 chore(regions_update): Changes in regions for AWS services. (#2350)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-12 11:37:53 +02:00
Sergio Garcia
347f65e089 chore(release): 3.5.0 (#2346) 2023-05-11 17:42:46 +02:00
Sergio Garcia
16628a427e fix(README): update Architecture image and PyPi links (#2345) 2023-05-11 17:29:17 +02:00
Sergio Garcia
ed16034a25 fix(README): order providers alphbetically (#2344) 2023-05-11 16:30:04 +02:00
Pepe Fagoaga
0c5f144e41 fix(poetry): Skip updates during pre-commit (#2342) 2023-05-11 12:17:21 +02:00
Sergio Garcia
acc7d6e7dc chore(regions_update): Changes in regions for AWS services. (#2341)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-11 11:41:39 +02:00
Sergio Garcia
84b4139052 chore(iam): add new permissions (#2339) 2023-05-11 11:35:32 +02:00
Sergio Garcia
9943643958 fix(s3): improve error handling (#2337) 2023-05-10 16:43:06 +02:00
Pepe Fagoaga
9ceaefb663 fix(access-analyzer): Handle ResourceNotFoundException (#2336) 2023-05-10 15:44:14 +02:00
Gabriel Soltz
ec03ea5bc1 feat(workspaces): New check workspaces_vpc_2private_1public_subnets_nat (#2286)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: n4ch04 <nachor1992@gmail.com>
2023-05-10 15:40:42 +02:00
Sergio Garcia
5855633c1f fix(resourceexplorer2): add resource id (#2335)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-05-10 14:48:34 +02:00
Pedro Martín
a53bc2bc2e feat(rds): new check rds_instance_deprecated_engine_version (#2298)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-05-10 14:48:12 +02:00
Sergio Garcia
88445820ed feat(slack): add Slack App integration (#2305)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-05-10 13:38:28 +02:00
Sergio Garcia
044ed3ae98 chore(regions_update): Changes in regions for AWS services. (#2334)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-10 13:30:24 +02:00
Pepe Fagoaga
6f48012234 fix(ecr): Refactor service (#2302)
Co-authored-by: Gabriel Soltz <thegaby@gmail.com>
Co-authored-by: Kay Agahd <kagahd@users.noreply.github.com>
Co-authored-by: Nacho Rivera <nachor1992@gmail.com>
Co-authored-by: Kevin Pullin <kevin.pullin@gmail.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-09 17:04:21 +02:00
Sergio Garcia
d344318dd4 feat(allowlist): allowlist a specific service (#2331) 2023-05-09 15:43:04 +02:00
Sergio Garcia
6273dd3d83 chore(regions_update): Changes in regions for AWS services. (#2330)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-09 12:21:07 +02:00
dependabot[bot]
0f3f3cbffd build(deps-dev): bump moto from 4.1.8 to 4.1.9 (#2328)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-09 11:38:41 +02:00
Pepe Fagoaga
3244123b21 fix(cloudfront_distributions_https_enabled): Add default case (#2329)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-09 11:09:18 +02:00
dependabot[bot]
cba2ee3622 build(deps): bump boto3 from 1.26.115 to 1.26.125 (#2327)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-09 08:48:15 +02:00
dependabot[bot]
25ed925df5 build(deps-dev): bump docker from 6.0.1 to 6.1.1 (#2326)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-09 08:22:03 +02:00
dependabot[bot]
8c5bd60bab build(deps-dev): bump pylint from 2.17.3 to 2.17.4 (#2325)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-09 07:59:21 +02:00
dependabot[bot]
c5510556a7 build(deps): bump mkdocs from 1.4.2 to 1.4.3 (#2324)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-09 07:38:43 +02:00
Sergio Garcia
bbcfca84ef fix(trustedadvisor): avoid not_available checks (#2323) 2023-05-08 17:55:31 +02:00
Sergio Garcia
1260e94c2a fix(cloudtrail): handle InsightNotEnabledException error (#2322) 2023-05-08 16:06:13 +02:00
Pepe Fagoaga
8a02574303 fix(sagemaker): Handle ValidationException (#2321) 2023-05-08 14:52:28 +02:00
Pepe Fagoaga
c930f08348 fix(emr): Handle InvalidRequestException (#2320) 2023-05-08 14:52:12 +02:00
Pepe Fagoaga
5204acb5d0 fix(iam): Handle ListRoleTags and policy errors (#2319) 2023-05-08 14:42:23 +02:00
Sergio Garcia
784aaa98c9 feat(iam): add iam_role_cross_account_readonlyaccess_policy check (#2312) 2023-05-08 13:27:51 +02:00
Sergio Garcia
745e2494bc chore(docs): improve GCP docs (#2318) 2023-05-08 13:26:23 +02:00
Sergio Garcia
c00792519d chore(docs): improve GCP docs (#2318) 2023-05-08 13:26:02 +02:00
Sergio Garcia
142fe5a12c chore(regions_update): Changes in regions for AWS services. (#2315)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-08 12:40:31 +02:00
Sergio Garcia
5b127f232e fix(typo): typo in backup_vaults_exist check title (#2317) 2023-05-08 12:29:08 +02:00
Kevin Pullin
c22bf01003 feat(allowlist): Support regexes in Tags to allow "or"-like conditional matching (#2300)
Co-authored-by: Kevin Pullin <kevinp@nexttrucking.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-05-05 14:56:27 +02:00
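The "or"-like matching referenced above comes down to plain regular-expression alternation over tag key/value pairs. The sketch below is illustrative only and does not reproduce the allowlist code from #2300; the `tags_match` helper and its inputs are made-up names.

```python
import re

# Hypothetical allowlist pattern: match resources tagged env=dev OR env=staging.
ALLOWLIST_TAG_PATTERN = r"env=(dev|staging)"


def tags_match(resource_tags: dict, pattern: str) -> bool:
    """Return True if any key=value pair of the resource matches the regex."""
    pairs = [f"{key}={value}" for key, value in resource_tags.items()]
    return any(re.search(pattern, pair) for pair in pairs)


print(tags_match({"env": "staging", "team": "sec"}, ALLOWLIST_TAG_PATTERN))  # True
print(tags_match({"env": "prod"}, ALLOWLIST_TAG_PATTERN))                    # False
```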
Nacho Rivera
05e4911d6f fix(vpc services): list to dicts in vpc and subnets (#2310) 2023-05-04 15:35:02 +02:00
Nacho Rivera
9b551ef0ba feat(pre-commit): added trufflehog to pre-commit (#2311) 2023-05-04 15:33:11 +02:00
Sergio Garcia
56a8bb2349 chore(regions_update): Changes in regions for AWS services. (#2309)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-04 12:30:10 +02:00
Pepe Fagoaga
8503c6a64d fix(client_error): Handle errors (#2308) 2023-05-04 11:06:24 +02:00
Pepe Fagoaga
820f18da4d release: 3.4.1 (#2303) 2023-05-03 19:24:17 +02:00
Kay Agahd
51a2432ebf fix(typo): remove redundant lines (#2307) 2023-05-03 19:23:48 +02:00
Gabriel Soltz
6639534e97 feat(ssmincidents): Use regional_client region instead of audit_profile region (#2306) 2023-05-03 19:22:30 +02:00
Gabriel Soltz
0621577c7d fix(backup): Return [] when None AdvancedBackupSettings (#2304) 2023-05-03 17:10:53 +02:00
Sergio Garcia
26a507e3db feat(route53): add route53_dangling_ip_subdomain_takeover check (#2288)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-05-03 11:47:36 +02:00
Sergio Garcia
244b540fe0 fix(s3): handle NoSuchBucket error (#2289) 2023-05-03 09:55:19 +02:00
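Handling a bucket that disappears mid-scan is a common pattern in boto3-based tools: catch `botocore.exceptions.ClientError` and inspect the error code. This is a generic sketch of that pattern, not the actual change in #2289; `get_bucket_versioning_safe` is a hypothetical helper and the call requires AWS credentials.

```python
import boto3
from botocore.exceptions import ClientError


def get_bucket_versioning_safe(bucket_name: str):
    """Return the bucket's versioning config, or None if the bucket is gone."""
    s3 = boto3.client("s3", region_name="us-east-1")
    try:
        return s3.get_bucket_versioning(Bucket=bucket_name)
    except ClientError as error:
        if error.response["Error"]["Code"] == "NoSuchBucket":
            # Bucket deleted between listing and inspection: skip it instead of
            # aborting the whole scan.
            return None
        raise
```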
Gabriel Soltz
030ca4c173 fix(backups): change severity and only check report_plans if plans exists (#2291)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-05-03 09:00:15 +02:00
dependabot[bot]
88a2810f29 build(deps): bump botocore from 1.29.115 to 1.29.125 (#2301)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-03 08:55:14 +02:00
dependabot[bot]
9164ee363a build(deps-dev): bump coverage from 7.2.3 to 7.2.5 (#2297)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-03 08:38:03 +02:00
dependabot[bot]
4cd47fdcc5 build(deps): bump google-api-python-client from 2.84.0 to 2.86.0 (#2296)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-03 08:11:36 +02:00
dependabot[bot]
708852a3cb build(deps): bump mkdocs-material from 9.1.6 to 9.1.8 (#2294)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-03 07:49:52 +02:00
Sergio Garcia
4a93bdf3ea chore(regions_update): Changes in regions for AWS services. (#2293)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-05-03 07:49:27 +02:00
Gabriel Soltz
22e7d2a811 feat(Organizations): New check organizations_tags_policies_enabled_and_attached (#2287)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-28 16:14:08 +02:00
Sergio Garcia
93eca1dff2 chore(regions_update): Changes in regions for AWS services. (#2290)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-28 13:19:46 +02:00
Gabriel Soltz
9afe7408cd feat(FMS): New Service FMS and Check fms_accounts_compliant (#2259)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Nacho Rivera <nacho@verica.io>
2023-04-28 11:47:55 +02:00
Sergio Garcia
5dc2347a25 docs(security hub): improve security hub docs (#2285)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-27 16:22:49 +02:00
Pepe Fagoaga
e3a0124b10 fix(opensearch): Handle invalid JSON policy (#2262) 2023-04-27 12:05:43 +02:00
Gabriel Soltz
16af89c281 feat(autoscaling): new check autoscaling_group_multiple_az (#2273) 2023-04-26 15:10:04 +02:00
Sergio Garcia
621e4258c8 feat(s3): add s3_bucket_object_lock check (#2274) 2023-04-26 15:04:45 +02:00
Sergio Garcia
ac6272e739 fix(rds): check configurations for DB instances at cluster level (#2277)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-26 13:51:07 +02:00
Sergio Garcia
6e84f517a9 fix(apigateway2): correct paginator name (#2283) 2023-04-26 13:43:15 +02:00
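Paginator-name bugs like the one above usually surface as `botocore.exceptions.OperationNotPageableError` from `get_paginator()`. Below is a minimal, hedged sketch of listing APIs with the ApiGatewayV2 client; it assumes `get_apis` is the pageable list operation, needs AWS credentials, and is not the code from #2283.

```python
import boto3

client = boto3.client("apigatewayv2", region_name="us-east-1")

operation = "get_apis"  # assumed to be the pageable list operation
apis = []
if client.can_paginate(operation):
    # get_paginator() raises OperationNotPageableError for a wrong operation name.
    for page in client.get_paginator(operation).paginate():
        apis.extend(page.get("Items", []))
else:
    apis = client.get_apis().get("Items", [])
print(len(apis))
```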
Pepe Fagoaga
fdbdb3ad86 fix(sns_topics_not_publicly_accessible): Change PASS behaviour (#2282) 2023-04-26 12:51:51 +02:00
Sergio Garcia
7adcf5ca46 chore(regions_update): Changes in regions for AWS services. (#2280)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-26 11:59:34 +02:00
Gabriel Soltz
fe6716cf76 feat(NetworkFirewall): New Service and Check (#2261)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-04-26 11:58:11 +02:00
dependabot[bot]
3c2096db68 build(deps): bump azure-mgmt-security from 4.0.0 to 5.0.0 (#2270)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 11:59:30 +02:00
Pepe Fagoaga
58cad1a6b3 fix(log_group_retention): handle log groups that never expire (#2272) 2023-04-25 10:45:43 +02:00
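CloudWatch Logs omits `retentionInDays` from `describe_log_groups` responses when a log group is set to "Never expire", so the key has to be read defensively. A minimal sketch of that handling (not the code from #2272; requires AWS credentials):

```python
import boto3

logs = boto3.client("logs", region_name="eu-west-1")

for page in logs.get_paginator("describe_log_groups").paginate():
    for group in page["logGroups"]:
        retention = group.get("retentionInDays")  # missing key == never expires
        if retention is None:
            print(f"{group['logGroupName']}: retention disabled (never expires)")
        else:
            print(f"{group['logGroupName']}: {retention} days")
```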
dependabot[bot]
662e67ff16 build(deps): bump boto3 from 1.26.105 to 1.26.115 (#2269)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 10:35:15 +02:00
dependabot[bot]
8d577b872f build(deps-dev): bump moto from 4.1.7 to 4.1.8 (#2268)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 10:12:25 +02:00
dependabot[bot]
b55290f3cb build(deps-dev): bump pylint from 2.17.2 to 2.17.3 (#2267)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 09:20:15 +02:00
dependabot[bot]
e8d3eb7393 build(deps-dev): bump pytest from 7.3.0 to 7.3.1 (#2266)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-25 08:03:45 +02:00
Sergio Garcia
47fa16e35f chore(test): add CloudWatch and Logs tests (#2264) 2023-04-24 17:05:05 +02:00
Gabriel Soltz
a87f769b85 feat(DRS): New DRS Service and Checks (#2257)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-24 14:22:22 +02:00
Sergio Garcia
8e63fa4594 fix(version): execute check current version function only when -v (#2263) 2023-04-24 12:45:59 +02:00
Gabriel Soltz
63501a0d59 feat(inspector2): New Service and Check (#2250)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-04-24 12:15:16 +02:00
Sergio Garcia
828fb37ca8 chore(regions_update): Changes in regions for AWS services. (#2258)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-24 08:32:40 +02:00
Sergio Garcia
40f513d3b6 chore(regions_update): Changes in regions for AWS services. (#2251)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-21 12:10:15 +02:00
Sergio Garcia
f0b8b66a75 chore(test): add rds_instance_transport_encrypted test (#2252) 2023-04-21 12:09:47 +02:00
Sergio Garcia
d51cdc068b fix(iam_role_cross_service_confused_deputy_prevention): avoid service linked roles (#2249) 2023-04-21 10:42:05 +02:00
Sergio Garcia
f8b382e480 fix(version): update version to 3.4.0 (#2247) 2023-04-20 17:05:18 +02:00
Ronen Atias
1995f43b67 fix(redshift): correct description in redshift_cluster_automatic_upgrades (#2246) 2023-04-20 15:19:49 +02:00
Sergio Garcia
69e0392a8b fix(rds): exclude Aurora in rds_instance_transport_encrypted check (#2245) 2023-04-20 14:28:12 +02:00
Sergio Garcia
1f6319442e chore(docs): improve GCP docs (#2242) 2023-04-20 14:15:28 +02:00
Sergio Garcia
559c4c0c2c chore(regions_update): Changes in regions for AWS services. (#2243)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-20 11:43:02 +02:00
Sergio Garcia
feeb5b58d9 fix(checks): improve --list-checks function (#2240) 2023-04-19 17:00:20 +02:00
Sergio Garcia
7a00f79a56 fix(iam_policy_no_administrative_privileges): check attached policies and AWS-Managed (#2200)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-19 14:34:53 +02:00
Sergio Garcia
10d744704a fix(errors): solve ECR and CodeArtifact errors (#2239) 2023-04-19 13:27:19 +02:00
Gabriel Soltz
eee35f9cc3 feat(ssmincidents): New Service and Checks (#2219)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-19 12:26:20 +02:00
Gabriel Soltz
b3656761eb feat(check): New VPC checks (#2218) 2023-04-19 12:01:12 +02:00
Sergio Garcia
7b5fe34316 feat(html): add html to Azure and GCP (#2181)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-18 16:13:57 +02:00
Sergio Garcia
4536780a19 feat(check): new check ecr_registry_scan_images_on_push_enabled (#2237) 2023-04-18 15:45:21 +02:00
Sergio Garcia
05d866e6b3 chore(regions_update): Changes in regions for AWS services. (#2236)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-18 13:43:15 +02:00
dependabot[bot]
0d138cf473 build(deps): bump botocore from 1.29.105 to 1.29.115 (#2233)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-18 13:42:50 +02:00
dependabot[bot]
dbe539ac80 build(deps): bump boto3 from 1.26.90 to 1.26.105 (#2232)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-18 12:35:33 +02:00
dependabot[bot]
665a39d179 build(deps): bump azure-storage-blob from 12.15.0 to 12.16.0 (#2230)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-18 11:02:39 +02:00
dependabot[bot]
5fd5d8c8c5 build(deps-dev): bump coverage from 7.2.2 to 7.2.3 (#2234)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-18 08:03:44 +02:00
dependabot[bot]
2832b4564c build(deps-dev): bump moto from 4.1.6 to 4.1.7 (#2231)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-18 07:40:50 +02:00
dependabot[bot]
d4369a64ee build(deps): bump azure-mgmt-security from 3.0.0 to 4.0.0 (#2141)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-17 13:22:09 +02:00
Sergio Garcia
81fa1630b7 chore(regions_update): Changes in regions for AWS services. (#2227)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-17 11:18:41 +02:00
Sergio Garcia
a1c4b35205 chore(regions_update): Changes in regions for AWS services. (#2217)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-17 11:16:22 +02:00
Sergio Garcia
5e567f3e37 fix(iam tests): mock audit_info object (#2226)
Co-authored-by: n4ch04 <nachor1992@gmail.com>
2023-04-17 11:14:48 +02:00
Pepe Fagoaga
c4757684c1 fix(test): Mock audit info in SecurityHub CodeBuild (#2225) 2023-04-17 11:14:36 +02:00

Sergio Garcia
a55a6bf94b fix(test): Mock audit info in EC2 (#2224) 2023-04-17 10:54:56 +02:00
Pepe Fagoaga
fa1792eb77 fix(test): Mock audit info in CloudWatch (#2223) 2023-04-17 10:54:01 +02:00
Nacho Rivera
93a8f6e759 fix(rds tests): mocked audit_info object (#2222) 2023-04-17 10:06:25 +02:00
Nacho Rivera
4a614855d4 fix(s3 tests): audit_info object mocked (#2221) 2023-04-17 10:04:28 +02:00
Pepe Fagoaga
8bdd47f912 fix(test): Mock audit info in KMS (#2215) 2023-04-14 14:34:55 +02:00
Nacho Rivera
f9e82abadc fix(vpc tests): mock current_audit_info (#2214) 2023-04-14 14:31:34 +02:00
Gabriel Soltz
428fda81e2 feat(check): New GuardDuty check guardduty_centrally_managed (#2195)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-14 14:30:51 +02:00
Pepe Fagoaga
29c9ad602d fix(test): Mock audit info in Macie (#2213) 2023-04-14 14:29:19 +02:00
Pepe Fagoaga
44458e2a97 fix(test): Mock audit info codeartifact-config-ds (#2210) 2023-04-14 14:25:45 +02:00
Pepe Fagoaga
861fb1f54b fix(test): Mock audit info in Glacier (#2212) 2023-04-14 14:20:03 +02:00
Pepe Fagoaga
02534f4d55 fix(test): Mock audit info DynamoDB (#2211) 2023-04-14 14:19:08 +02:00
Pepe Fagoaga
5532cb95a2 fix(test): Mock audit info in appstream and autoscaling (#2209) 2023-04-14 14:06:07 +02:00
Pepe Fagoaga
9176e43fc9 fix(test): Mock audit info API Gateway (#2208) 2023-04-14 13:49:38 +02:00
Pepe Fagoaga
cb190f54fc fix(elb-test): Use a mocked current audit info (#2207) 2023-04-14 12:43:08 +02:00
Sergio Garcia
4be2539bc2 fix(resourceexplorer2): solve test and region (#2206) 2023-04-14 12:33:52 +02:00
Sergio Garcia
291e2adffa chore(regions_update): Changes in regions for AWS services. (#2205)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-14 12:32:58 +02:00
Gabriel Soltz
fa2ec63f45 feat(check): New Check and Service: resourceexplorer2_indexes_found (#2196)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-04-14 10:18:36 +02:00
Nacho Rivera
946c943457 fix(global services): fixed global services region (#2203)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-14 09:57:33 +02:00
Pepe Fagoaga
0e50766d6e fix(test): call cloudtrail_s3_dataevents_write_enabled check (#2204) 2023-04-14 09:35:29 +02:00
Sergio Garcia
58a1610ae0 chore(regions_update): Changes in regions for AWS services. (#2201)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-13 15:53:56 +02:00
Nacho Rivera
06dc21168a feat(orgs checks region): added region to all orgs checks (#2202) 2023-04-13 14:41:18 +02:00
Gabriel Soltz
305b67fbed feat(check): New check cloudtrail_bucket_requires_mfa_delete (#2194)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-13 14:18:31 +02:00
Sergio Garcia
4da6d152c3 feat(custom checks): add -x/--checks-folder for custom checks (#2191) 2023-04-13 13:44:25 +02:00
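A hedged sketch of how a `-x/--checks-folder` style flag can be wired with `argparse` and the folder scanned for check modules; the layout (one sub-folder per check containing a `.py` module) and all names here are assumptions for illustration, not Prowler's actual implementation from #2191.

```python
import argparse
from pathlib import Path

parser = argparse.ArgumentParser(prog="prowler")
parser.add_argument(
    "-x", "--checks-folder",
    help="Folder containing custom checks (assumed: one sub-folder per check)",
)
args = parser.parse_args(["-x", "/tmp/my_custom_checks"])

# Illustrative discovery step: collect every non-dunder module one level down.
custom_checks = sorted(
    module.stem
    for module in Path(args.checks_folder).glob("*/*.py")
    if not module.name.startswith("__")
)
print(custom_checks)
```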
Sergio Garcia
25630f1ef5 chore(regions): sort AWS regions (#2198)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-12 13:24:14 +02:00
Sergio Garcia
9b01e3f1c9 chore(regions_update): Changes in regions for AWS services. (#2197)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-12 12:53:03 +02:00
Sergio Garcia
99450400eb chore(regions_update): Changes in regions for AWS services. (#2189)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-12 10:47:21 +02:00
Gabriel Soltz
2f8a8988d7 feat(checks): New IAM Checks no full access to critical services (#2183) 2023-04-12 07:47:21 +02:00
Sergio Garcia
9104d2e89e fix(kms): handle empty principal error (#2192) 2023-04-11 16:59:29 +02:00
Gabriel Soltz
e75022763c feat(checks): New iam_securityaudit_role_created (#2182) 2023-04-11 14:15:39 +02:00
Gabriel Soltz
f0f3fb337d feat(check): New CloudTrail check cloudtrail_insights_exist (#2184) 2023-04-11 13:49:54 +02:00
dependabot[bot]
f7f01a34c2 build(deps): bump google-api-python-client from 2.81.0 to 2.84.0 (#2188)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-11 12:13:41 +02:00
dependabot[bot]
f9f9ff0cb8 build(deps): bump alive-progress from 3.1.0 to 3.1.1 (#2187)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-11 08:13:17 +02:00
dependabot[bot]
522ba05ba8 build(deps): bump mkdocs-material from 9.1.5 to 9.1.6 (#2186)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-11 07:54:41 +02:00
Gabriel Soltz
f4f4093466 feat(backup): New backup service and checks (#2172)
Co-authored-by: Nacho Rivera <nacho@verica.io>
2023-04-11 07:43:40 +02:00
dependabot[bot]
2e16ab0c2c build(deps-dev): bump pytest from 7.2.2 to 7.3.0 (#2185)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-11 07:39:09 +02:00
Sergio Garcia
6f02606fb7 fix(iam): handle no display name error in service account (#2176) 2023-04-10 12:06:08 +02:00
Sergio Garcia
df40142b51 chore(regions_update): Changes in regions for AWS services. (#2180)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-10 12:05:48 +02:00
Sergio Garcia
cc290d488b chore(regions_update): Changes in regions for AWS services. (#2178)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-10 12:05:30 +02:00
Nacho Rivera
64328218fc feat(banner): azure credential banner (#2179) 2023-04-10 09:58:28 +02:00
Sergio Garcia
8d1356a085 fix(logging): add default resource id when no resources (#2177) 2023-04-10 08:02:40 +02:00
Sergio Garcia
4f39dd0f73 fix(version): handle request response property (#2175)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-05 15:17:30 +02:00
Pepe Fagoaga
54ffc8ae45 chore(release): 3.3.4 (#2174) 2023-04-05 14:18:07 +02:00
Sergio Garcia
78ab1944bd chore(regions_update): Changes in regions for AWS services. (#2173)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-05 12:32:25 +02:00
dependabot[bot]
434cf94657 build(deps-dev): bump moto from 4.1.5 to 4.1.6 (#2164)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-04-05 12:31:58 +02:00
Nacho Rivera
dcb893e230 fix(elbv2 desync check): Mixed elbv2 desync and smuggling (#2171) 2023-04-05 11:36:06 +02:00
Sergio Garcia
ce4fadc378 chore(regions_update): Changes in regions for AWS services. (#2170)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-05 08:47:19 +02:00
dependabot[bot]
5683d1b1bd build(deps): bump botocore from 1.29.100 to 1.29.105 (#2163)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-04 13:24:03 +02:00
dependabot[bot]
0eb88d0c10 build(deps): bump mkdocs-material from 9.1.4 to 9.1.5 (#2162)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-04 11:07:41 +02:00
Nacho Rivera
eb1367e54d fix(pipeline build): fixed wording when build and push (#2169) 2023-04-04 10:21:28 +02:00
dependabot[bot]
33a4786206 build(deps-dev): bump pylint from 2.17.0 to 2.17.2 (#2161)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-04 09:35:10 +02:00
Pepe Fagoaga
8c6606ad95 fix(dax): Call list_tags using the cluster ARN (#2167) 2023-04-04 09:30:36 +02:00
Pepe Fagoaga
cde9519a76 fix(iam): Handle LimitExceededException when calling generate_credential_report (#2168) 2023-04-04 09:29:27 +02:00
Pepe Fagoaga
7b2e0d79cb fix(cloudformation): Handle ValidationError (#2166) 2023-04-04 09:28:11 +02:00
Pepe Fagoaga
5b0da8e92a fix(rds): Handle DBSnapshotNotFound (#2165) 2023-04-04 09:27:36 +02:00
Michael Göhler
0126d2f77c fix(secretsmanager_automatic_rotation_enabled): Improve description for Secrets Manager secret rotation (#2156) 2023-04-03 11:01:29 +02:00
Sergio Garcia
0b436014c9 chore(regions_update): Changes in regions for AWS services. (#2159)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-04-03 11:01:15 +02:00
Igor Ceron
2cb7f223ed fix(docs): check extra_742 name adjusted in the V2 to V3 mapping (#2154) 2023-03-31 12:54:13 +02:00
Sergio Garcia
eca551ed98 chore(regions_update): Changes in regions for AWS services. (#2155)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-31 12:53:49 +02:00
Gabriel Soltz
608fd92861 feat(new_checks): New AWS Organizations related checks (#2133)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-30 17:36:23 +02:00
Sergio Garcia
e37d8fe45f chore(release): update Prowler Version to 3.3.2 (#2150)
Co-authored-by: github-actions <noreply@github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-30 11:33:33 +02:00
Sergio Garcia
4cce91ec97 chore(regions_update): Changes in regions for AWS services. (#2153)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-30 11:29:00 +02:00
Pepe Fagoaga
72fdde35dc fix(pypi): Set base branch when updating release version (#2152) 2023-03-30 10:59:58 +02:00
Pepe Fagoaga
d425187778 fix(pypi): Build from release branch (#2151) 2023-03-30 10:14:49 +02:00
Sergio Garcia
e419aa1f1a chore(regions_update): Changes in regions for AWS services. (#2149)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-29 11:45:35 +02:00
Pepe Fagoaga
5506547f7f fix(ssm): Handle ValidationException when retrieving documents (#2146) 2023-03-29 09:16:52 +02:00
Nacho Rivera
568ed72b3e fix(audit_info): azure subscriptions parsing error (#2147) 2023-03-29 09:15:53 +02:00
Nacho Rivera
e8cc0e6684 fix(delete check): delete check ec2_securitygroup_in_use_without_ingress_filtering (#2148) 2023-03-29 09:13:43 +02:00
Sergio Garcia
4331f69395 chore(regions_update): Changes in regions for AWS services. (#2145)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-28 13:08:02 +02:00
dependabot[bot]
7cc67ae7cb build(deps): bump botocore from 1.29.90 to 1.29.100 (#2142)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-28 13:07:23 +02:00
dependabot[bot]
244b3438fc build(deps): bump mkdocs-material from 9.1.3 to 9.1.4 (#2140)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-28 12:39:00 +02:00
Nacho Rivera
1a741f7ca0 fix(azure output): change default values of audit identity metadata (#2144) 2023-03-28 10:42:47 +02:00
dependabot[bot]
1447800e2b build(deps): bump pydantic from 1.10.6 to 1.10.7 (#2139)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-28 10:41:09 +02:00
Sergio Garcia
f968fe7512 fix(readme): add GCP provider to README introduction (#2143) 2023-03-28 10:40:56 +02:00
dependabot[bot]
0a2349fad7 build(deps): bump alive-progress from 3.0.1 to 3.1.0 (#2138)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-28 09:55:18 +02:00
Sergio Garcia
941b8cbc1e chore(docs): Developer Guide - how to create a new check (#2137) 2023-03-27 20:20:13 +02:00
Pepe Fagoaga
3b7b16acfd fix(resource_not_found): Handle error (#2136) 2023-03-27 17:27:50 +02:00
Nacho Rivera
fbc7bb68fc feat(defender service): retrieving key dicts with get (#2129) 2023-03-27 17:13:11 +02:00
Pepe Fagoaga
0d16880596 fix(s3): handle if ignore_public_acls is None (#2128) 2023-03-27 17:00:20 +02:00
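Reading the S3 public access block defensively is mostly a matter of defaulting every missing or `None` flag to `False` and tolerating the "no configuration" error. The helper below is a generic sketch under those assumptions, not the fix in #2128, and requires AWS credentials.

```python
import boto3
from botocore.exceptions import ClientError


def public_access_block(bucket_name: str) -> dict:
    """Return the four PAB flags, treating missing or None values as False."""
    s3 = boto3.client("s3", region_name="us-east-1")
    try:
        config = s3.get_public_access_block(Bucket=bucket_name)[
            "PublicAccessBlockConfiguration"
        ]
    except ClientError as error:
        if error.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            config = {}
        else:
            raise
    return {
        "block_public_acls": bool(config.get("BlockPublicAcls")),
        "ignore_public_acls": bool(config.get("IgnorePublicAcls")),
        "block_public_policy": bool(config.get("BlockPublicPolicy")),
        "restrict_public_buckets": bool(config.get("RestrictPublicBuckets")),
    }
```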
Sergio Garcia
3b5218128f fix(brew): move brew formula action to the bottom (#2135) 2023-03-27 11:24:28 +02:00
Pepe Fagoaga
cb731bf1db fix(aws_provider): Fix assessment session name (#2132) 2023-03-25 00:11:16 +01:00
Sergio Garcia
7c4d6eb02d fix(gcp): handle error when Project ID is None (#2130) 2023-03-24 18:30:33 +01:00
Sergio Garcia
c14e7fb17a feat(gcp): add Google Cloud provider with 43 checks (#2125) 2023-03-24 13:38:41 +01:00
Sergio Garcia
fe57811bc5 chore(regions_update): Changes in regions for AWS services. (#2126)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-24 10:18:33 +01:00
Sergio Garcia
e073b48f7d chore(regions_update): Changes in regions for AWS services. (#2123)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-23 15:58:47 +01:00
Ben Nugent
a9df609593 fix(quickinventory): AttributeError when creating inventory table (#2122) 2023-03-23 10:22:14 +01:00
Sergio Garcia
6c3db9646e fix(output bucket): solve IsADirectoryError using compliance flag (#2121) 2023-03-22 13:38:41 +01:00
Sergio Garcia
ff9c4c717e chore(regions_update): Changes in regions for AWS services. (#2120)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-22 12:18:44 +01:00
Sergio Garcia
182374b46f docs: improve reporting documentation (#2119) 2023-03-22 10:02:52 +01:00
Sergio Garcia
0871cda526 docs: improve quick inventory section (#2117) 2023-03-21 18:09:40 +01:00
Toni de la Fuente
1b47cba37a docs(developer-guide): added phase 1 of the developer guide (#1904)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-03-21 15:35:26 +01:00
Pepe Fagoaga
e5bef36905 docs: Remove list severities (#2116) 2023-03-21 14:18:07 +01:00
Sergio Garcia
706d723703 chore(version): check latest version (#2106) 2023-03-21 11:16:13 +01:00
Sergio Garcia
51eacbfac5 feat(allowlist): add tags filter to allowlist (#2105) 2023-03-21 11:14:59 +01:00
dependabot[bot]
5c2a411982 build(deps): bump boto3 from 1.26.86 to 1.26.90 (#2114)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-21 11:04:26 +01:00
Sergio Garcia
08d65cbc41 chore(regions_update): Changes in regions for AWS services. (#2115)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-21 11:03:54 +01:00
dependabot[bot]
9d2bf429c1 build(deps): bump mkdocs-material from 9.1.2 to 9.1.3 (#2113)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-21 10:18:36 +01:00
dependabot[bot]
d34f863bd4 build(deps-dev): bump moto from 4.1.4 to 4.1.5 (#2111)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-21 09:27:44 +01:00
Sergio Garcia
b4abf1c2c7 chore(regions_update): Changes in regions for AWS services. (#2104)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-21 08:32:26 +01:00
dependabot[bot]
68baaf589e build(deps-dev): bump coverage from 7.2.1 to 7.2.2 (#2112)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-21 08:18:47 +01:00
dependabot[bot]
be74e41d84 build(deps-dev): bump openapi-spec-validator from 0.5.5 to 0.5.6 (#2110)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-21 07:52:50 +01:00
Sergio Garcia
848122b0ec chore(release): update Prowler Version to 3.3.0 (#2102)
Co-authored-by: github-actions <noreply@github.com>
2023-03-16 22:30:02 +01:00
Nacho Rivera
0edcb7c0d9 fix(ulimit check): try except when checking ulimit (#2096)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-03-16 17:39:46 +01:00
Pepe Fagoaga
cc58e06b5e fix(providers): Move provider's logic outside main (#2043)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-03-16 17:32:53 +01:00
Sergio Garcia
0d6ca606ea fix(ec2_securitygroup_allow_wide_open_public_ipv4): correct check title (#2101) 2023-03-16 17:25:32 +01:00
Sergio Garcia
75ee93789f chore(regions_update): Changes in regions for AWS services. (#2095)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-16 17:14:40 +01:00
Sergio Garcia
05daddafbf feat(SecurityHub): add compliance details to Security Hub findings (#2100) 2023-03-16 17:11:55 +01:00
Nacho Rivera
7bbce6725d fix(ulimit check): test only when platform is not windows (#2094) 2023-03-16 08:38:37 +01:00
Nacho Rivera
789b211586 feat(lambda_cloudtrail check): improved logic and status extended (#2092) 2023-03-15 12:32:58 +01:00
Sergio Garcia
826a043748 chore(regions_update): Changes in regions for AWS services. (#2091)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-15 12:28:03 +01:00
Sergio Garcia
6761048298 fix(cloudwatch): solve inexistent filterPattern error (#2087) 2023-03-14 14:46:34 +01:00
Sergio Garcia
738fc9acad feat(compliance): add compliance field to HTML, CSV and JSON outputs including frameworks and reqs (#2060)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-14 14:20:46 +01:00
Sergio Garcia
43c0540de7 chore(regions_update): Changes in regions for AWS services. (#2085)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-14 13:11:02 +01:00
Sergio Garcia
2d1c3d8121 fix(emr): solve emr_cluster_publicly_accesible error (#2086) 2023-03-14 13:10:21 +01:00
dependabot[bot]
f48a5c650d build(deps-dev): bump pytest-xdist from 3.2.0 to 3.2.1 (#2084)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-14 10:21:17 +01:00
dependabot[bot]
66c18eddb8 build(deps): bump botocore from 1.29.86 to 1.29.90 (#2083)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-14 10:01:23 +01:00
dependabot[bot]
fdd2ee6365 build(deps-dev): bump bandit from 1.7.4 to 1.7.5 (#2082)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-14 09:03:46 +01:00
dependabot[bot]
c207f60ad8 build(deps): bump pydantic from 1.10.5 to 1.10.6 (#2081)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-14 08:02:28 +01:00
dependabot[bot]
0eaa95c8c0 build(deps): bump mkdocs-material from 9.1.1 to 9.1.2 (#2080)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-14 07:48:02 +01:00
Pepe Fagoaga
df2fca5935 fix(bug_report): typo in bug reporting template (#2078)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-03-13 18:42:34 +01:00
Toni de la Fuente
dcaf5d9c7d update(docs): update readme with new ECR alias (#2079) 2023-03-13 18:07:51 +01:00
Sergio Garcia
0112969a97 fix(compliance): add check to 2.1.5 CIS (#2077) 2023-03-13 09:25:51 +01:00
Sergio Garcia
3ec0f3d69c chore(regions_update): Changes in regions for AWS services. (#2075)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-13 07:51:13 +01:00
Pepe Fagoaga
5555d300a1 fix(bug_report): Update wording (#2074) 2023-03-10 12:21:51 +01:00
Nacho Rivera
8155ef4b60 feat(templates): New versions of issues and fr templates (#2072) 2023-03-10 10:32:17 +01:00
Sergio Garcia
a12402f6c8 chore(regions_update): Changes in regions for AWS services. (#2073)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-10 10:27:29 +01:00
Sergio Garcia
cf28b814cb fix(ec2): avoid terminated instances (#2063) 2023-03-10 08:11:35 +01:00
Pepe Fagoaga
b05f67db19 chore(actions): Missing cache in the PR (#2067) 2023-03-09 11:50:49 +01:00
Pepe Fagoaga
260f4659d5 chore(actions): Use GHA cache (#2066) 2023-03-09 10:29:16 +01:00
dependabot[bot]
9e700f298c build(deps-dev): bump pylint from 2.16.4 to 2.17.0 (#2062)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 15:41:22 +01:00
dependabot[bot]
56510734c4 build(deps): bump boto3 from 1.26.85 to 1.26.86 (#2061)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 15:14:18 +01:00
Pepe Fagoaga
3938a4d14e chore(dependabot): Change to weekly (#2057) 2023-03-08 14:41:34 +01:00
Sergio Garcia
fa3b9eeeaf chore(regions_update): Changes in regions for AWS services. (#2058)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-08 14:38:56 +01:00
dependabot[bot]
eb9d6fa25c build(deps): bump botocore from 1.29.85 to 1.29.86 (#2054)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 09:57:44 +01:00
Alex Nelson
b53307c1c2 docs: Corrected spelling mistake in multiacount (#2056) 2023-03-08 09:57:08 +01:00
dependabot[bot]
c3fc708a66 build(deps): bump boto3 from 1.26.82 to 1.26.85 (#2053)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-08 09:03:00 +01:00
Sergio Garcia
b34ffbe6d0 feat(inventory): add tags to quick inventory (#2051) 2023-03-07 14:20:50 +01:00
Sergio Garcia
f364315e48 chore(iam): update Prowler permissions (#2050) 2023-03-07 14:14:31 +01:00
Sergio Garcia
3ddb5a13a5 fix(ulimit): handle low ulimit OSError (#2042)
Co-authored-by: Toni de la Fuente <toni@blyx.com>
2023-03-07 13:19:24 +01:00
dependabot[bot]
a24cc399a4 build(deps-dev): bump moto from 4.1.3 to 4.1.4 (#2045)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-07 12:45:50 +01:00
Sergio Garcia
305f4b2688 chore(regions_update): Changes in regions for AWS services. (#2049)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-07 11:27:28 +01:00
dependabot[bot]
9823171d65 build(deps-dev): bump pylint from 2.16.3 to 2.16.4 (#2048)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 10:11:19 +01:00
dependabot[bot]
4761bd8fda build(deps): bump mkdocs-material from 9.1.0 to 9.1.1 (#2047)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 09:33:19 +01:00
dependabot[bot]
9c22698723 build(deps-dev): bump pytest from 7.2.1 to 7.2.2 (#2046)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 08:32:19 +01:00
dependabot[bot]
e3892bbcc6 build(deps): bump botocore from 1.29.84 to 1.29.85 (#2044)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-07 08:18:53 +01:00
Sergio Garcia
629b156f52 fix(quick inventory): add non-tagged s3 buckets to inventory (#2041) 2023-03-06 16:55:03 +01:00
Gary Mclean
c45dd47d34 fix(windows-path): --list-services bad split (#2028)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-03-06 14:00:07 +01:00
Sergio Garcia
ef8831f784 feat(quick_inventory): add regions to inventory table (#2026) 2023-03-06 13:41:30 +01:00
Sergio Garcia
c5a42cf5de feat(rds_instance_transport_encrypted): add new check (#1963)
Co-authored-by: Toni de la Fuente <toni@blyx.com>
2023-03-06 13:18:41 +01:00
dependabot[bot]
90ebbfc20f build(deps-dev): bump pylint from 2.16.2 to 2.16.3 (#2038)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 13:18:26 +01:00
Fennerr
17cd0dc91d feat(new_check): cloudwatch_log_group_no_secrets_in_logs (#1980)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Jeffrey Souza <JeffreySouza@users.noreply.github.com>
2023-03-06 12:16:46 +01:00
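Checks of this kind scan log event messages for secret-looking content; real implementations generally delegate to a dedicated secrets-detection library, so the regex list below is a deliberately naive, illustrative sketch and not the logic of the check introduced in #1980.

```python
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID shape
    re.compile(r"(?i)(password|secret)\s*[:=]"),  # generic credential assignment
]


def messages_with_secrets(log_events):
    """Return the event messages that match any secret-looking pattern."""
    return [
        event["message"]
        for event in log_events
        if any(p.search(event.get("message", "")) for p in SECRET_PATTERNS)
    ]


sample = [{"message": "user login ok"}, {"message": "password=hunter2"}]
print(messages_with_secrets(sample))  # ['password=hunter2']
```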
dependabot[bot]
fa1f42af59 build(deps): bump botocore from 1.29.82 to 1.29.84 (#2037)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 12:14:48 +01:00
Sergio Garcia
f45ea1ab53 fix(check): change cloudformation_outputs_find_secrets name (#2027) 2023-03-06 12:11:58 +01:00
Sergio Garcia
0dde3fe483 chore(poetry): add poetry checks to pre-commit (#2040) 2023-03-06 11:44:04 +01:00
dependabot[bot]
277dc7dd09 build(deps-dev): bump freezegun from 1.2.1 to 1.2.2 (#2033)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 11:06:23 +01:00
dependabot[bot]
3215d0b856 build(deps-dev): bump coverage from 7.1.0 to 7.2.1 (#2032)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 09:55:19 +01:00
dependabot[bot]
0167d5efcd build(deps): bump mkdocs-material from 9.0.15 to 9.1.0 (#2031)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 09:15:44 +01:00
Sergio Garcia
b48ac808a6 chore(regions_update): Changes in regions for AWS services. (#2035)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-03 10:14:20 +01:00
dependabot[bot]
616524775c build(deps-dev): bump docker from 6.0.0 to 6.0.1 (#2030)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-03 10:02:11 +01:00
dependabot[bot]
5832849b11 build(deps): bump boto3 from 1.26.81 to 1.26.82 (#2029)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-03 09:43:43 +01:00
Sergio Garcia
467c5d01e9 fix(cloudtrail): list tags only in owned trails (#2025) 2023-03-02 16:16:19 +01:00
Sergio Garcia
24711a2f39 feat(tags): add resource tags to S-W services (#2020) 2023-03-02 14:21:05 +01:00
Nacho Rivera
24e8286f35 feat(): 7 chars in dispatch commit message (#2024) 2023-03-02 14:20:31 +01:00
Sergio Garcia
e8a1378ad0 feat(tags): add resource tags to G-R services (#2009) 2023-03-02 13:56:22 +01:00
Sergio Garcia
76bb418ea9 feat(tags): add resource tags to E services (#2007)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-02 13:55:26 +01:00
Nacho Rivera
cd8770a3e3 fix(actions): fixed dispatch commit message (#2023) 2023-03-02 13:55:03 +01:00
Sergio Garcia
da834c0935 feat(tags): add resource tags to C-D services (#2003)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-03-02 13:14:53 +01:00
Nacho Rivera
024ffb1117 fix(head): Pass head commit to dispatch action (#2022) 2023-03-02 12:06:41 +01:00
Nacho Rivera
eed7ab9793 fix(iam): refactor IAM service (#2010) 2023-03-02 11:16:05 +01:00
Sergio Garcia
032feb343f feat(tags): add resource tags in A services (#1997) 2023-03-02 10:59:49 +01:00
Pepe Fagoaga
eabccba3fa fix(actions): push should be true (#2019) 2023-03-02 10:37:29 +01:00
Nacho Rivera
d86d656316 feat(dispatch): add tag info to dispatch (#2002) 2023-03-02 10:31:30 +01:00
Sergio Garcia
fa73c91b0b chore(regions_update): Changes in regions for AWS services. (#2018)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-02 10:23:59 +01:00
Pepe Fagoaga
2eee50832d fix(actions): Stop using github storage (#2016) 2023-03-02 10:23:04 +01:00
Toni de la Fuente
b40736918b docs(install): Add brew and github installation to quick start (#1991) 2023-03-02 10:21:57 +01:00
Sergio Garcia
ffb1a2e30f chore(regions_update): Changes in regions for AWS services. (#1995)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-02 10:21:41 +01:00
Sergio Garcia
d6c3c0c6c1 feat(s3_bucket_level_public_access_block): new check (#1953) 2023-03-02 10:18:27 +01:00
dependabot[bot]
ee251721ac build(deps): bump botocore from 1.29.81 to 1.29.82 (#2015)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 09:53:24 +01:00
dependabot[bot]
fdbb9195d5 build(deps-dev): bump moto from 4.1.2 to 4.1.3 (#2014)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 09:23:48 +01:00
dependabot[bot]
c68b08d9af build(deps-dev): bump black from 22.10.0 to 22.12.0 (#2013)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 08:59:18 +01:00
dependabot[bot]
3653bbfca0 build(deps-dev): bump flake8 from 5.0.4 to 6.0.0 (#2012)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 08:32:41 +01:00
dependabot[bot]
05c7cc7277 build(deps): bump boto3 from 1.26.80 to 1.26.81 (#2011)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-02 07:54:33 +01:00
Sergio Garcia
5670bf099b chore(regions_update): Changes in regions for AWS services. (#2006)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-03-01 10:16:58 +01:00
Nacho Rivera
0c324b0f09 fix(awslambdacloudtrail): include advanced event and all lambdas in check (#1994) 2023-03-01 10:04:06 +01:00
dependabot[bot]
968557e38e build(deps): bump botocore from 1.29.80 to 1.29.81 (#2005)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-01 08:59:54 +01:00
dependabot[bot]
882cdebacb build(deps): bump boto3 from 1.26.79 to 1.26.80 (#2004) 2023-03-01 08:40:41 +01:00
Sergio Garcia
07753e1774 feat(encryption): add new encryption category (#1999) 2023-02-28 13:42:11 +01:00
Pepe Fagoaga
5b984507fc fix(emr): KeyError EmrManagedSlaveSecurityGroup (#2000) 2023-02-28 13:41:58 +01:00
Sergio Garcia
27df481967 chore(metadata): remove tags from metadata (#1998) 2023-02-28 12:27:59 +01:00
dependabot[bot]
0943031f23 build(deps): bump mkdocs-material from 9.0.14 to 9.0.15 (#1993)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-28 11:02:59 +01:00
dependabot[bot]
2d95168de0 build(deps): bump botocore from 1.29.79 to 1.29.80 (#1992)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-28 10:46:25 +01:00
Sergio Garcia
97cae8f92c chore(brew): bump new version to brew (#1990) 2023-02-27 18:07:05 +01:00
github-actions
eb213bac92 chore(release): 3.2.4 2023-02-27 14:25:52 +01:00
Sergio Garcia
8187788b2c fix(pypi-release.yml): create PR before replicating (#1986) 2023-02-27 14:16:53 +01:00
Sergio Garcia
c80e08abce fix(compliance): solve AWS compliance dir path (#1987) 2023-02-27 14:16:17 +01:00
github-actions[bot]
42fd851e5c chore(release): update Prowler Version to 3.2.3 (#1985)
Co-authored-by: github-actions <noreply@github.com>
Co-authored-by: jfagoagas <jfagoagas@users.noreply.github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-27 13:59:28 +01:00
Pepe Fagoaga
70e4ebccab chore(codeowners): Update team to OSS (#1984) 2023-02-27 13:31:16 +01:00
Sergio Garcia
140f87c741 chore(readme): add brew stats (#1982) 2023-02-27 13:17:48 +01:00
Pepe Fagoaga
b0d756123e fix(action): Use PathContext to get version changes (#1983) 2023-02-27 13:17:09 +01:00
Pedro Martín González
6188c92916 chore(compliance): implements dynamic handling of available compliance frameworks (#1977)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-27 10:47:47 +01:00
dependabot[bot]
34c6f96728 build(deps): bump boto3 from 1.26.74 to 1.26.79 (#1981)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-27 09:45:45 +01:00
dependabot[bot]
50fd047c0b build(deps): bump botocore from 1.29.78 to 1.29.79 (#1978)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-27 09:14:29 +01:00
Sergio Garcia
5bcc05b536 chore(regions_update): Changes in regions for AWS services. (#1972)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-24 12:10:27 +01:00
Sergio Garcia
ce7d6c8dd5 fix(service errors): solve EMR, VPC and ELBv2 service errors (#1974) 2023-02-24 10:49:54 +01:00
dependabot[bot]
d87a1e28b4 build(deps): bump alive-progress from 2.4.1 to 3.0.1 (#1965)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-24 10:12:52 +01:00
Pepe Fagoaga
227306c572 fix(acm): Fix issues with list-certificates (#1970) 2023-02-24 10:12:38 +01:00
dependabot[bot]
45c2691f89 build(deps): bump mkdocs-material from 8.2.1 to 9.0.14 (#1964)
Signed-off-by: dependabot[bot] <support@github.com>
2023-02-24 10:03:52 +01:00
Pepe Fagoaga
d0c81245b8 fix(directoryservice): tzinfo without _ (#1971) 2023-02-24 10:03:34 +01:00
dependabot[bot]
e494afb1aa build(deps): bump botocore from 1.29.74 to 1.29.78 (#1968)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-24 09:43:14 +01:00
dependabot[bot]
ecc3c1cf3b build(deps): bump azure-storage-blob from 12.14.1 to 12.15.0 (#1966)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-24 08:42:44 +01:00
dependabot[bot]
228b16416a build(deps): bump colorama from 0.4.5 to 0.4.6 (#1967)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-24 07:56:47 +01:00
Nacho Rivera
17eb74842a fix(cloudfront): handle empty objects in checks (#1962) 2023-02-23 16:57:44 +01:00
Nacho Rivera
c01ff74c73 fix(kms): handle if describe_keys returns no value 2023-02-23 15:54:23 +01:00
Sergio Garcia
f88613b26d fix(toml): add toml dependency to pypi release action (#1960) 2023-02-23 15:24:46 +01:00
Sergio Garcia
3464f4241f chore(release): 3.2.2 (#1959)
Co-authored-by: github-actions <noreply@github.com>
2023-02-23 15:10:03 +01:00
Sergio Garcia
849b703828 chore(resource-based scan): execute only applicable checks (#1934) 2023-02-23 13:30:21 +01:00
Sergio Garcia
4b935a40b6 fix(metadata): remove us-east-1 in remediation (#1958) 2023-02-23 13:19:10 +01:00
Sergio Garcia
5873a23ccb fix(key errors): solve EMR and IAM errors (#1957) 2023-02-23 13:15:00 +01:00
Nacho Rivera
eae2786825 fix(cloudtrail): Handle when the CloudTrail bucket is in another account (#1956) 2023-02-23 13:04:32 +01:00
github-actions[bot]
6407386de5 chore(regions_update): Changes in regions for AWS services. (#1952)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-23 12:24:36 +01:00
Sergio Garcia
3fe950723f fix(actions): add README to docker action and filter steps for releases (#1955) 2023-02-23 12:22:41 +01:00
Sergio Garcia
52bf6acd46 chore(regions): add secret token to avoid stuck checks (#1954) 2023-02-23 12:11:54 +01:00
Sergio Garcia
9590e7d7e0 chore(poetry): make python-poetry as packaging and dependency manager (#1935)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-23 11:50:29 +01:00
github-actions[bot]
7a08140a2d chore(regions_update): Changes in regions for AWS services. (#1950)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-23 08:42:36 +01:00
dependabot[bot]
d1491cfbd1 build(deps): bump boto3 from 1.26.74 to 1.26.76 (#1948)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-22 08:01:13 +01:00
dependabot[bot]
695b80549d build(deps): bump botocore from 1.29.75 to 1.29.76 (#1946)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-22 07:50:39 +01:00
Sergio Garcia
11c60a637f release: 3.2.1 (#1945) 2023-02-21 17:22:02 +01:00
Sergio Garcia
844ad70bb9 fix(cloudwatch): allow " in regex patterns (#1943) 2023-02-21 16:46:23 +01:00
Sergio Garcia
5ac7cde577 chore(iam_disable_N_days_credentials): improve checks logic (#1923) 2023-02-21 15:20:33 +01:00
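"Disable credentials unused for N days" style checks reduce to timezone-aware date arithmetic against the credential's last-use timestamp. A minimal sketch of that calculation, with made-up names and not the logic from #1923:

```python
from datetime import datetime, timezone
from typing import Optional


def unused_for_more_than(last_used: Optional[datetime], max_days: int,
                         now: Optional[datetime] = None) -> bool:
    """True if the credential was never used or last used more than max_days ago."""
    if last_used is None:
        return True
    now = now or datetime.now(timezone.utc)
    return (now - last_used).days > max_days


last_password_use = datetime(2023, 1, 1, tzinfo=timezone.utc)
print(unused_for_more_than(last_password_use, 45,
                           now=datetime(2023, 2, 21, tzinfo=timezone.utc)))  # True
```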
Sergio Garcia
ce3ef0550f chore(Security Hub): add status extended to Security Hub (#1921) 2023-02-21 15:11:43 +01:00
Sergio Garcia
813f3e7d42 fix(errors): handle errors when S3 buckets or EC2 instances are deleted (#1942) 2023-02-21 12:31:23 +01:00
Sergio Garcia
d03f97af6b fix(regions): add unique branch name (#1941) 2023-02-21 11:53:36 +01:00
github-actions[bot]
019ab0286d chore(regions_update): Changes in regions for AWS services. (#1940)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-21 11:47:03 +01:00
Fennerr
c6647b4706 chore(secrets): Improve the status_extended with more information (#1937)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-21 11:37:20 +01:00
Sergio Garcia
f913536d88 fix(services): solve errors in EMR, RDS, S3 and VPC services (#1913) 2023-02-21 11:11:39 +01:00
dependabot[bot]
640d1bd176 build(deps-dev): bump moto from 4.1.2 to 4.1.3 (#1939)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-21 07:48:08 +01:00
dependabot[bot]
66baccf528 build(deps): bump botocore from 1.29.74 to 1.29.75 (#1938)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-21 07:32:44 +01:00
Sergio Garcia
6e6dacbace chore(security hub): add --skip-sh-update (#1911) 2023-02-20 09:58:00 +01:00
dependabot[bot]
cdbb10fb26 build(deps): bump boto3 from 1.26.72 to 1.26.74 (#1933)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-20 07:56:40 +01:00
dependabot[bot]
c34ba3918c build(deps): bump botocore from 1.29.73 to 1.29.74 (#1932)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-20 07:34:20 +01:00
Fennerr
fa228c876c fix(iam_rotate_access_key_90_days): check only active access keys (#1929)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-17 12:53:28 +01:00
dependabot[bot]
2f4d0af7d7 build(deps): bump botocore from 1.29.72 to 1.29.73 (#1926)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-17 12:14:23 +01:00
github-actions[bot]
2d3e5235a9 chore(regions_update): Changes in regions for AWS services. (#1927)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-17 11:13:13 +01:00
dependabot[bot]
8e91ccaa54 build(deps): bump boto3 from 1.26.71 to 1.26.72 (#1925)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-17 10:56:19 +01:00
Fennerr
6955658b36 fix(quick_inventory): handle ApiGateway resources (#1924)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-16 18:29:23 +01:00
Fennerr
dbb44401fd fix(ecs_task_definitions_no_environment_secrets): dump_env_vars is reinitialised (#1922) 2023-02-16 15:59:53 +01:00
dependabot[bot]
b42ed70c84 build(deps): bump botocore from 1.29.71 to 1.29.72 (#1919)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 14:21:46 +01:00
dependabot[bot]
a28276d823 build(deps): bump pydantic from 1.10.4 to 1.10.5 (#1918)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 13:51:37 +01:00
Pepe Fagoaga
fa4b27dd0e fix(compliance): Set Version as optional and fix list (#1899)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-16 12:47:39 +01:00
dependabot[bot]
0be44d5c49 build(deps): bump boto3 from 1.26.70 to 1.26.71 (#1920)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 12:38:10 +01:00
github-actions[bot]
2514596276 chore(regions_update): Changes in regions for AWS services. (#1910)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-16 11:56:10 +01:00
dependabot[bot]
7008d2a953 build(deps): bump botocore from 1.29.70 to 1.29.71 (#1909)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-15 07:39:16 +01:00
dependabot[bot]
2539fedfc4 build(deps): bump boto3 from 1.26.69 to 1.26.70 (#1908)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-15 07:12:18 +01:00
Ignacio Dominguez
b453df7591 fix(iam-credentials-expiration): IAM password policy expires passwords fix (#1903)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-14 13:54:58 +01:00
Pepe Fagoaga
9e5d5edcba fix(codebuild): Handle endTime in builds (#1900) 2023-02-14 11:27:53 +01:00
Nacho Rivera
2d5de6ff99 fix(cross account): cloudtrail s3 bucket logging (#1902) 2023-02-14 11:23:31 +01:00
github-actions[bot]
259e9f1c17 chore(regions_update): Changes in regions for AWS services. (#1901)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-14 10:28:04 +01:00
dependabot[bot]
daeb53009e build(deps): bump botocore from 1.29.69 to 1.29.70 (#1898)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-14 08:27:14 +01:00
dependabot[bot]
f12d271ca5 build(deps): bump boto3 from 1.26.51 to 1.26.69 (#1897)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-14 07:55:26 +01:00
dependabot[bot]
965185ca3b build(deps-dev): bump pylint from 2.16.1 to 2.16.2 (#1896) 2023-02-14 07:35:29 +01:00
Pepe Fagoaga
9c484f6a78 Release: 3.2.0 (#1894) 2023-02-13 15:42:57 +01:00
Fennerr
de18c3c722 docs: Minor changes to logging (#1893) 2023-02-13 15:31:23 +01:00
Fennerr
9be753b281 docs: Minor changes to the intro paragraph (#1892) 2023-02-13 15:20:48 +01:00
Pepe Fagoaga
d6ae122de1 docs: Boto3 configuration (#1885)
Co-authored-by: Toni de la Fuente <toni@blyx.com>
2023-02-13 15:20:33 +01:00
Pepe Fagoaga
c6b90044f2 chore(Dockerfile): Remove build files (#1886) 2023-02-13 15:19:05 +01:00
Nacho Rivera
14898b6422 fix(Azure_Audit_Info): Added audited_resources field (#1891) 2023-02-13 15:17:11 +01:00
Fennerr
26294b0759 docs: Update AWS Role Assumption (#1890) 2023-02-13 15:13:22 +01:00
Nacho Rivera
6da45b5c2b fix(list_checks): arn filtering checks after audit_info set (#1887) 2023-02-13 14:57:42 +01:00
Acknosyn
674332fddd update(logging): fix plural grammar for checks execution message (#1680)
Co-authored-by: Francesco Badraun <francesco.badraun@zxsecurity.co.nz>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-13 14:33:34 +01:00
Sergio Garcia
ab8942d05a fix(service errors): solve errors in IAM, S3, Lambda, DS, Cloudfront services (#1882)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-13 10:35:04 +01:00
github-actions[bot]
29790b8a5c chore(regions_update): Changes in regions for AWS services. (#1884)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-13 10:01:43 +01:00
dependabot[bot]
4a4c26ffeb build(deps): bump botocore from 1.29.51 to 1.29.69 (#1883)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-13 09:19:01 +01:00
Sergio Garcia
25c9bc07b2 chore(compliance): add manual checks to compliance CSV (#1872)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-10 12:38:13 +01:00
Nacho Rivera
d22d4c4c83 fix(cloudtrail_multi_region_enabled): reformat check (#1880) 2023-02-10 12:34:53 +01:00
Sergio Garcia
d88640fd20 fix(errors): solve several service errors (AccessAnalyzer, AppStream, KMS, S3, SQS, R53, IAM, CodeArtifact and EC2) (#1879) 2023-02-10 12:26:00 +01:00
github-actions[bot]
57a2fca3a4 chore(regions_update): Changes in regions for AWS services. (#1878)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-10 11:25:00 +01:00
Sergio Garcia
f796688c84 fix(metadata): typo in appstream_fleet_session_disconnect_timeout.metadata.json (#1875) 2023-02-09 16:22:19 +01:00
alexr3y
d6bbf8b7cc update(compliance): ENS RD2022 Spanish security framework updates (#1809)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-02-09 14:14:38 +01:00
Nacho Rivera
37ec460f64 fix(hardware mfa): changed hardware mfa description (#1873) 2023-02-09 14:06:54 +01:00
Sergio Garcia
004b9c95e4 fix(key_errors): handle Key Errors in Lambda and EMR (#1871)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-09 10:32:00 +01:00
github-actions[bot]
86e27b465a chore(regions_update): Changes in regions for AWS services. (#1870)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-09 10:17:18 +01:00
Nacho Rivera
5e9afddc3a fix(permissive role assumption): actions list handling (#1869) 2023-02-09 10:06:53 +01:00
Pepe Fagoaga
de281535b1 feat(boto3-config): Use standard retrier (#1868)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-02-09 09:58:47 +01:00
Pedro Martín González
9df7def14e feat(compliance): Add 17 new security compliance frameworks for AWS (#1824)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-09 07:39:57 +01:00
Sergio Garcia
5b9db9795d feat(new check): add accessanalyzer_enabled check (#1864)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-08 17:39:25 +01:00
Sergio Garcia
7d2ce7e6ab fix(action): do not trigger action when editing release (#1865) 2023-02-08 17:34:02 +01:00
Oleksandr Mykytenko
3e807af2b2 fix(checks): added validation for non-existing VPC endpoint policy (#1859)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-08 12:13:22 +01:00
Oleksandr Mykytenko
4c64dc7885 Fixed elbv2 service for GWLB resources (#1860)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-08 10:38:34 +01:00
github-actions[bot]
e7a7874b34 chore(regions_update): Changes in regions for AWS services. (#1863)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-08 10:36:03 +01:00
dependabot[bot]
c78a47788b build(deps): bump cryptography from 39.0.0 to 39.0.1 (#1862) 2023-02-08 08:02:47 +01:00
dependabot[bot]
922698c5d9 build(deps-dev): bump pytest-xdist from 3.1.0 to 3.2.0 (#1858) 2023-02-07 18:04:30 +01:00
Sergio Garcia
8e8a490936 chore(release): 3.1.4 (#1857)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-07 17:49:27 +01:00
Sergio Garcia
231bc0605f fix(output_bucket): Use full path for -o option with output to S3 bucket (#1854)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-07 17:28:25 +01:00
Carlos
0298ff9478 Change prowler additional policy json due errors in creation (#1852)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2023-02-07 13:09:12 +01:00
Sergio Garcia
33a25dcf0e fix(exit_code): change sys exit code to 1 in Critical Errors (#1853) 2023-02-07 11:43:14 +01:00
Sergio Garcia
54c16e3cdb chore(security hub): improve securityhub_enabled check logic (#1851)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-07 11:29:39 +01:00
github-actions[bot]
28a978acc2 chore(regions_update): Changes in regions for AWS services. (#1849)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-07 10:58:10 +01:00
dependabot[bot]
bea26a461f build(deps-dev): bump openapi-spec-validator from 0.5.4 to 0.5.5 (#1846)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-07 09:58:56 +01:00
Sergio Garcia
ed54c5b8b9 feat(exit_code 3): add -z option (#1848)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-07 09:51:46 +01:00
Sergio Garcia
13316b68aa fix(checks): solve different errors in EFS, S3 and VPC (#1841)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-07 09:42:10 +01:00
dependabot[bot]
043986f35b build(deps-dev): bump sure from 2.0.0 to 2.0.1 (#1847)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-07 09:28:26 +01:00
dependabot[bot]
2dc4421dd6 build(deps-dev): bump moto from 4.1.1 to 4.1.2 (#1845)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-07 08:22:55 +01:00
Sergio Garcia
6c16e2bca2 fix(kms): call GetKeyRotationStatus only for Customer Keys (#1842) 2023-02-06 17:07:03 +01:00
Sergio Garcia
c2b4a8e115 fix(errors): solve CloudWatch, KMS, EMR and OpenSearch service errors (#1843)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-06 16:59:46 +01:00
Toni de la Fuente
63b7bc8794 chore(issues): update bug_report.md (#1844) 2023-02-06 16:45:52 +01:00
github-actions[bot]
f41ae74ae2 chore(regions_update): Changes in regions for AWS services. (#1840) 2023-02-06 09:59:50 +01:00
Pepe Fagoaga
98689d223e fix(lambda-runtime): Init value must be empty string (#1837) 2023-02-06 09:38:35 +01:00
Sergio Garcia
f19cf21146 fix(readme): correct PyPi download link (#1836) 2023-02-03 16:43:43 +01:00
Sergio Garcia
24e19e6b18 fix(errors): solve different errors in KMS, EFS and Lambda (#1835)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-03 15:05:07 +01:00
Sergio Garcia
08376cb15e chore(release): 3.1.3 (#1832)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-03 14:32:15 +01:00
Pepe Fagoaga
5f6e4663c0 fix(action): Build from release branch (#1834) 2023-02-03 14:31:43 +01:00
Pepe Fagoaga
9b91c00fcc fix(awslambda_function_no_secrets_in_code): Retrieve Code if set (#1833) 2023-02-03 14:28:31 +01:00
Sergio Garcia
229ab88c2f fix(shub): update link to Security Hub documentation (#1830) 2023-02-03 14:10:27 +01:00
dependabot[bot]
8863d13578 build(deps-dev): bump pylint from 2.16.0 to 2.16.1 (#1823) 2023-02-03 14:03:20 +01:00
Nacho Rivera
e07fc9fbb9 fix(cloudtrail): included advanced data events selectors (#1814) 2023-02-03 14:02:16 +01:00
Sergio Garcia
0164574fdd fix(KeyError): handle service key errors (#1831) 2023-02-03 12:28:23 +01:00
github-actions[bot]
98eec332d8 chore(regions_update): Changes in regions for AWS services. (#1829) 2023-02-03 11:30:01 +01:00
Oleksandr Mykytenko
3d2986fc64 fix(metadata) fixed typo in title for awslambda_function_not_publicly… (#1826) 2023-02-03 10:34:24 +01:00
dependabot[bot]
29e7f8581e build(deps-dev): bump openapi-spec-validator from 0.5.2 to 0.5.4 (#1821) 2023-02-02 18:04:24 +01:00
dependabot[bot]
4ee3f6c87a build(deps-dev): bump pylint from 2.15.10 to 2.16.0 (#1815)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-02 11:39:32 +01:00
Sergio Garcia
b8c7440e1f fix(KeyError): Handle service key errors (#1819)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-02 11:34:19 +01:00
Sergio Garcia
d49ff8d9a4 chore(logs): improve check error logs (#1818) 2023-02-02 11:13:40 +01:00
github-actions[bot]
07198042bd chore(regions_update): Changes in regions for AWS services. (#1817)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-02 10:58:47 +01:00
Sergio Garcia
c7a9492e96 feat(scan-type): AWS Resource ARNs based scan (#1807)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-02-01 14:09:22 +01:00
Sergio Garcia
360c6f3c1c fix(cloudtrail): improve cloudtrail_cloudwatch_logging_enabled status extended (#1813)
Co-authored-by: sergargar <sergio@verica.io>
2023-02-01 14:08:11 +01:00
github-actions[bot]
89aab4acd5 chore(regions_update): Changes in regions for AWS services. (#1812)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-02-01 10:15:10 +01:00
Nacho Rivera
d9b3e842d9 fix(accessanalyzer): no analyzers using pydantic (#1806) 2023-01-31 13:01:54 +01:00
Sergio Garcia
3ac4dc8392 feat(scanner): Tag-based scan (#1751)
Co-authored-by: Toni de la Fuente <toni@blyx.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-01-31 12:19:29 +01:00
Nacho Rivera
0d1a5318ec feat(audit-metadata): retrieve audit metadata from execution (#1803) 2023-01-31 11:24:01 +01:00
Pepe Fagoaga
94b7a219fd chore(regions): Change feat to chore (#1805) 2023-01-31 10:32:32 +01:00
github-actions[bot]
ba3eb71abd feat(regions_update): Changes in regions for AWS services. (#1804)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-01-31 10:22:05 +01:00
Sergio Garcia
bbc9e11205 fix(ec2_securitygroup_not_used): ignore default security groups (#1800)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-30 16:51:07 +01:00
Sergio Garcia
75571e4266 fix(iam_avoid_root_usage): correct date logic (#1801) 2023-01-30 16:47:24 +01:00
Sergio Garcia
4e879271a0 fix(iam_policy_no_administrative_privileges): check only *:* permissions (#1802) 2023-01-30 16:47:09 +01:00
Nacho Rivera
552e0fefc3 fix(accessanalyzer_enabled_without_findings): fixed status findings (#1799) 2023-01-30 13:22:05 +01:00
Jose Luis Martinez
cb7439a831 feat(allowlist): AWS Lambda function support (#1793) 2023-01-30 11:30:29 +01:00
Sergio Garcia
35d6b8bbc6 chore(readme): add prowler PyPi stats (#1798) 2023-01-30 11:26:09 +01:00
Jose Luis Martinez
48b9220ffc fix(allowlist): validate allowlist for any database format (file, dynamo, s3, etc) (#1792) 2023-01-30 10:30:46 +01:00
ifduyue
5537981877 Use docs.aws.amazon.com like other aws checks, not docs.amazonaws.cn (#1790) 2023-01-30 10:29:18 +01:00
Sergio Garcia
711f24a5b2 fix(partition): add dynamic partition in CloudTrail S3 DataEvents checks (#1787)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-27 10:50:31 +01:00
Sergio Garcia
5d2b8bc8aa fix(kms): add symmetric condition to kms_cmk_rotation_enabled check (#1788)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-27 10:49:40 +01:00
github-actions[bot]
f6ea10db2d feat(regions_update): Changes in regions for AWS services. (#1786) 2023-01-27 10:17:22 +01:00
Sergio Garcia
fc38ba3acb docs(readme): correct compliance link (#1780) 2023-01-26 12:48:58 +01:00
Sergio Garcia
0830ad268f chore(release): new version 3.1.2 (#1779)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-26 12:44:43 +01:00
github-actions[bot]
e633664c2a feat(regions_update): Changes in regions for AWS services. (#1778)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-01-26 10:28:13 +01:00
Ozan-Ekinci
d4c7d9a60a docs(grammar): Improved grammar in the Documentation paragraph #HSFDPMUW (#1776) 2023-01-26 10:18:42 +01:00
dependabot[bot]
5ee0d964f3 build(deps-dev): bump coverage from 7.0.5 to 7.1.0 (#1777) 2023-01-26 10:18:00 +01:00
Sergio Garcia
ba5e0f145f fix(severity): update severities for Security Hub, GuardDuty and NACL related checks (#1775) 2023-01-25 15:03:43 +01:00
Nacho Rivera
34eb9cc063 fix(cloudtrail_multi_region_enabled.py): fixed region when no trails (#1774) 2023-01-25 14:33:24 +01:00
Sergio Garcia
a795fdc40d fix(IAM): remove duplicate list_policies function (#1763)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-25 13:58:58 +01:00
Sergio Garcia
24cba4c4ca chore(contrib): CloudFormation of CodeBuild for v3 (#1764)
Co-authored-by: sergargar <sergio@verica.io>
Co-authored-by: Toni de la Fuente <toni@blyx.com>
2023-01-25 13:57:47 +01:00
Sergio Garcia
3d13f4bb9b fix(apigatewayv2): correct apigatewayv2_access_logging_enabled check title (#1769)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-25 13:56:28 +01:00
Sergio Garcia
e713d0d321 chore(readme): update pip package name (#1768) 2023-01-25 13:55:35 +01:00
Sergio Garcia
4e34be87a1 fix(json): close Json correctly when no findings (#1773)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-25 13:54:48 +01:00
Sergio Garcia
07307d37a1 fix(iam): handle credential report errors (#1765)
Co-authored-by: sergargar <sergio@verica.io>
Co-authored-by: n4ch04 <nacho@verica.io>
2023-01-25 10:31:58 +01:00
github-actions[bot]
81463181bc feat(regions_update): Changes in regions for AWS services. (#1772)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-01-25 10:31:04 +01:00
Acknosyn
02e57927fc fix(): IAM status messages switched fail and pass text and some grammar (#1756)
Co-authored-by: Francesco Badraun <francesco.badraun@zxsecurity.co.nz>
Co-authored-by: sergargar <sergio@verica.io>
Co-authored-by: n4ch04 <nachor1992@gmail.com>
2023-01-25 10:29:04 +01:00
Sergio Garcia
36925f0dbd fix(): solve metadata replace (#1755)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-24 13:45:46 +01:00
github-actions[bot]
f9b985e03d feat(regions_update): Changes in regions for AWS services. (#1761)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-24 10:39:49 +01:00
dependabot[bot]
598ad62b92 build(deps-dev): bump moto from 4.1.0 to 4.1.1 (#1758)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-24 09:27:05 +01:00
github-actions[bot]
ea929ab713 feat(regions_update): Changes in regions for AWS services. (#1748)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-01-23 12:43:51 +01:00
Ozan-Ekinci
04e56ced58 docs: Improved grammar in the AZ CLI / Browser / Managed Identity authentication paragraph #HSFDPMUW (#1745) 2023-01-23 10:24:23 +01:00
Vaibhav Bagaria
2278565b86 Update resource type for SQS and SNS (#1747) 2023-01-23 10:22:26 +01:00
Leon
afd0c56b44 fix(docs): Changed the azure subscription file text #HSFDPMUW (#1749) 2023-01-23 09:31:34 +01:00
Sergio Garcia
5ebdf66d22 release: 3.1.1 (#1744)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-20 15:36:27 +01:00
Toni de la Fuente
177d8a72a7 docs: add mapping of v2 to v3 checks and update pip package name in docs (#1742) 2023-01-20 12:50:57 +01:00
Pepe Fagoaga
03ef80dd8e fix(actions): Exclude docs folder in action (#1743) 2023-01-20 12:50:28 +01:00
Pepe Fagoaga
6f9825362a chore(code-ql): test tool (#1703) 2023-01-20 12:31:53 +01:00
github-actions[bot]
2167154064 feat(regions_update): Changes in regions for AWS services. (#1741)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-01-20 10:24:37 +01:00
Sergio Garcia
f88b35bd80 fix(rds): remove DocumentDB from RDS (#1737)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-20 09:31:19 +01:00
Nacho Rivera
6b9520338e fix(pipeline): fixed typo in main pipeline (#1740) 2023-01-20 09:30:53 +01:00
Sergio Garcia
438c087856 fix(arguments): improve quiet option (#1723)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-20 09:14:38 +01:00
Nacho Rivera
2a43274b06 feat(dispatch): dispatch triggered actions (#1739) 2023-01-20 09:13:57 +01:00
github-actions[bot]
20a9336867 feat(regions_update): Changes in regions for AWS services. (#1736)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-01-19 12:45:35 +01:00
Sergio Garcia
c921782714 feat(allowlist): add yaml structure validator (#1735)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-18 17:49:13 +01:00
Sergio Garcia
776ac9e3d4 fix(lambda): solve lambda errors (#1732)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-18 17:47:45 +01:00
Sergio Garcia
d02bd9b717 fix(allowlist): remove re.escape (#1734)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-18 17:45:51 +01:00
Sergio Garcia
50070e8fe7 fix(IAM): add missing permissions for Prowler (#1731)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-18 11:45:37 +01:00
github-actions[bot]
e3e3b3e279 feat(regions_update): Changes in regions for AWS services. (#1730)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-01-18 11:01:46 +01:00
Pepe Fagoaga
38fba297e8 fix: remove old example (#1728) 2023-01-17 18:04:12 +01:00
Sergio Garcia
52d65ee4e8 feat(pypi): replicate PyPi package (#1727)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-17 17:53:08 +01:00
Sergio Garcia
9ad2f33dd8 fix: remove check_sample.metadata.json (#1725) 2023-01-17 14:36:00 +01:00
Sergio Garcia
02ae23b11d feat(release): add PyPi GitHub Action (#1724)
Co-authored-by: sergargar <sergio@verica.io>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-01-17 14:33:15 +01:00
Sergio Garcia
70c6d6e7ae release: 3.1.0 (#1722)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-17 13:15:07 +01:00
Sergio Garcia
8efebf992f fix(metadata): fix recommendation in iam_role_cross_service_confused_deputy_prevention check (#1721) 2023-01-17 13:11:46 +01:00
Sergio Garcia
b9be94bcc5 feat(README): add pypi downloads (#1720)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-17 13:05:44 +01:00
Sergio Garcia
e6310c32ac feat(check): add iam_role_cross_service_confused_deputy_prevention check (#1710)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-17 12:17:37 +01:00
Sergio Garcia
654b4702d0 fix(error): ecr_repositories_scan_vulnerabilities_in_latest_image report not found (#1719)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-17 12:17:15 +01:00
dependabot[bot]
262b5a7ee5 build(deps-dev): bump openapi-spec-validator from 0.5.1 to 0.5.2 (#1716)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-17 12:13:44 +01:00
Pepe Fagoaga
ef0d4fe34b fix(fill_html_overview_statistics): Handle if file exists (#1718) 2023-01-17 11:40:05 +01:00
github-actions[bot]
c08342f40c feat(regions_update): Changes in regions for AWS services. (#1717)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-01-17 10:18:40 +01:00
Pepe Fagoaga
e7796268b5 feat(only_logs): New logging flag to only show execution logs (#1708) 2023-01-17 10:13:09 +01:00
Nacho Rivera
0cbe80d2ab feat(report): conditional import (#1702)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-01-17 10:00:31 +01:00
Ozan-Ekinci
11d3ba70a0 docs: missing comma in the Service Principal authentication paragraph (#1713)
Co-authored-by: Ozan-Can Ekinci <ozan-can.ekinci1@informatik.hs-fulda>
2023-01-17 08:50:52 +01:00
dependabot[bot]
c30e4c4867 build(deps-dev): bump pytest from 7.2.0 to 7.2.1 (#1715)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-17 08:42:48 +01:00
Sergio Garcia
d1e5087c18 fix(): add permissions to Github action (#1712) 2023-01-16 16:04:57 +01:00
Gabriel Soltz
618dd442e3 Incorrect ResourceType for check ec2_elastic_ip_unassgined (#1711) 2023-01-16 14:16:35 +01:00
Sergio Garcia
7f26fdf2d0 feat(iam): add IAM Role Class (#1709)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-16 11:47:23 +01:00
Gabriel Soltz
64090474e1 fix(apigateway): Add ApiGateway ResourceArn and check fixes (#1707)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-16 10:23:14 +01:00
Leon
a69c28713a fix(docs): Include multiple commas in the troubleshooting file #HSFDPMUW (#1706) 2023-01-16 09:05:24 +01:00
Leon
1d4b3095af fix(docs): Include a new comma in the Basic Usage paragraph #HSFDPMUW (#1705) 2023-01-16 09:04:48 +01:00
Sergio Garcia
ff75125af8 fix(docs): correct permissions links (#1701) 2023-01-13 10:28:54 +01:00
Toni de la Fuente
aa0025abbe fix(quick_inventory): Prowler quick inventory for US GovCloud and China (#1698) 2023-01-12 17:40:10 +01:00
Sergio Garcia
c9436da235 fix: Solve IAM policy Errors (#1692)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-12 17:39:09 +01:00
Sergio Garcia
12f1eaace7 fix: VPC Key Error (#1695)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-12 17:35:57 +01:00
Sergio Garcia
09ef8aba0f fix(): set default region CloudWatch (#1693)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-12 17:17:40 +01:00
Toni de la Fuente
08c094b8a5 docs(SECURITY.md): Include Security Policy (#1697)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-01-12 17:16:46 +01:00
Sergio Garcia
e9fb4410cd fix(docs): Add security section and solve images location (#1696)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Toni de la Fuente <toni@blyx.com>
2023-01-12 17:16:34 +01:00
Nacho Rivera
cbdda22a33 fix: deleted test exclusion in name loading checks (#1694) 2023-01-12 15:43:54 +01:00
Sergio Garcia
fe906477da fix(aws_regions_by_service.json): FileNotFoundError[13] (#1689)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-12 13:24:03 +01:00
dependabot[bot]
b03df619df build(deps-dev): bump coverage from 7.0.4 to 7.0.5 (#1688)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-12 11:32:41 +01:00
Sergio Garcia
53d89d8d17 fix: solve multiple errors (#1690)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-12 11:29:33 +01:00
Sergio Garcia
1e5a1f3e1f fix: remove unnecessary print (#1686)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-12 08:58:15 +01:00
Nacho Rivera
6efe2979c6 fix(): Edit troubleshooting page (#1685) 2023-01-11 11:18:37 +01:00
Sergio Garcia
92cc2c8e69 fix(config): path error in Windows environment (#1684)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-10 17:06:14 +01:00
dependabot[bot]
50dd2e4179 build(deps-dev): bump vulture from 2.6 to 2.7 (#1677)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-10 08:26:44 +01:00
dependabot[bot]
7a8fd9c3d3 build(deps-dev): bump coverage from 7.0.3 to 7.0.4 (#1678)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-10 08:15:55 +01:00
dependabot[bot]
d5a3fc490b build(deps-dev): bump moto from 4.0.13 to 4.1.0 (#1675)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-10 07:56:16 +01:00
dependabot[bot]
13f948062b build(deps-dev): bump pylint from 2.15.9 to 2.15.10 (#1676)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-10 07:43:54 +01:00
Fennerr
b965fda226 feat(ecs_task_definitions_no_environment_secrets): Update resource_id (#1665)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-09 16:05:45 +01:00
Sergio Garcia
f9d67f0e9d fix(compliance): Security Hub working with compliance (#1673)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-09 14:18:12 +01:00
Sergio Garcia
4dfa20e40b fix(Security Hub): associate resource_arn as resourceId (#1672)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-09 14:16:57 +01:00
Gabriel Soltz
d5edbaa3a9 fix(s3): Add S3 ResourceArn (#1666)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-09 11:04:09 +01:00
Leon
0cd5ce8c29 fix(docs): Include a comma in the permissions paragraph (#1668) 2023-01-09 09:52:36 +01:00
Sergio Garcia
1c50a87ca2 fix(trustedadvisor_errors_and_warnings): add region (#1662)
Co-authored-by: sergargar <sergio@verica.io>
2023-01-05 17:57:21 +01:00
1617 changed files with 105780 additions and 17697 deletions

.github/CODEOWNERS (vendored): 2 changed lines

@@ -1 +1 @@
* @prowler-cloud/prowler-team
* @prowler-cloud/prowler-oss


@@ -1,52 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: "[Bug]: "
labels: bug, status/needs-triage
assignees: ''
---
<!--
Please use this template to create your bug report. By providing as much info as possible you help us understand the issue, reproduce it and resolve it for you quicker. Therefore, take a couple of extra minutes to make sure you have provided all info needed.
PROTIP: record your screen and attach it as a gif to showcase the issue.
- How to record and attach gif: https://bit.ly/2Mi8T6K
-->
**What happened?**
A clear and concise description of what the bug is or what is not working as expected
**How to reproduce it**
Steps to reproduce the behavior:
1. What command are you running?
2. Cloud provider you are launching
3. Environment you have like single account, multi-account, organizations, multi or single subscription, etc.
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots or Logs**
If applicable, add screenshots to help explain your problem.
Also, you can add logs (anonymize them first!). Here a command that may help to share a log
`prowler <your arguments> --log-level DEBUG --log-file $(date +%F)_debug.log` then attach here the log file.
**From where are you running Prowler?**
Please, complete the following information:
- Resource: (e.g. EC2 instance, Fargate task, Docker container manually, EKS, Cloud9, CodeBuild, workstation, etc.)
- OS: [e.g. Amazon Linux 2, Mac, Alpine, Windows, etc. ]
- Prowler Version [`./prowler --version`]:
- Python version [`python --version`]:
- Pip version [`pip --version`]:
- Installation method (Are you running it from pip package or cloning the github repo?):
- Others:
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/bug_report.yml (vendored, new file): 97 changed lines

@@ -0,0 +1,97 @@
name: 🐞 Bug Report
description: Create a report to help us improve
title: "[Bug]: "
labels: ["bug", "status/needs-triage"]
body:
- type: textarea
id: reproduce
attributes:
label: Steps to Reproduce
description: Steps to reproduce the behavior
placeholder: |-
1. What command are you running?
2. Cloud provider you are launching
3. Environment you have, like single account, multi-account, organizations, multi or single subscription, etc.
4. See error
validations:
required: true
- type: textarea
id: expected
attributes:
label: Expected behavior
description: A clear and concise description of what you expected to happen.
validations:
required: true
- type: textarea
id: actual
attributes:
label: Actual Result with Screenshots or Logs
description: If applicable, add screenshots to help explain your problem. Also, you can add logs (anonymize them first!). Here a command that may help to share a log `prowler <your arguments> --log-level DEBUG --log-file $(date +%F)_debug.log` then attach here the log file.
validations:
required: true
- type: dropdown
id: type
attributes:
label: How did you install Prowler?
options:
- Cloning the repository from github.com (git clone)
- From pip package (pip install prowler)
- From brew (brew install prowler)
- Docker (docker pull toniblyx/prowler)
validations:
required: true
- type: textarea
id: environment
attributes:
label: Environment Resource
description: From where are you running Prowler?
placeholder: |-
1. EC2 instance
2. Fargate task
3. Docker container locally
4. EKS
5. Cloud9
6. CodeBuild
7. Workstation
8. Other(please specify)
validations:
required: true
- type: textarea
id: os
attributes:
label: OS used
description: Which OS are you using?
placeholder: |-
1. Amazon Linux 2
2. MacOS
3. Alpine Linux
4. Windows
5. Other(please specify)
validations:
required: true
- type: input
id: prowler-version
attributes:
label: Prowler version
description: Which Prowler version are you using?
placeholder: |-
prowler --version
validations:
required: true
- type: input
id: pip-version
attributes:
label: Pip version
description: Which pip version are you using?
placeholder: |-
pip --version
validations:
required: true
- type: textarea
id: additional
attributes:
description: Additional context
label: Context
validations:
required: false


@@ -0,0 +1,36 @@
name: 💡 Feature Request
description: Suggest an idea for this project
labels: ["enhancement", "status/needs-triage"]
body:
- type: textarea
id: Problem
attributes:
label: New feature motivation
description: Is your feature request related to a problem? Please describe
placeholder: |-
1. A clear and concise description of what the problem is. Ex. I'm always frustrated when
validations:
required: true
- type: textarea
id: Solution
attributes:
label: Solution Proposed
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: Alternatives
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: true
- type: textarea
id: Context
attributes:
label: Additional context
description: Add any other context or screenshots about the feature request here.
validations:
required: false


@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement, status/needs-triage
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -8,7 +8,7 @@ updates:
- package-ecosystem: "pip" # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "daily"
interval: "weekly"
target-branch: master
labels:
- "dependencies"


@@ -7,115 +7,56 @@ on:
paths-ignore:
- ".github/**"
- "README.md"
- "docs/**"
release:
types: [published, edited]
types: [published]
env:
AWS_REGION_STG: eu-west-1
AWS_REGION_PLATFORM: eu-west-1
AWS_REGION_PRO: us-east-1
AWS_REGION: us-east-1
IMAGE_NAME: prowler
LATEST_TAG: latest
STABLE_TAG: stable
TEMPORARY_TAG: temporary
DOCKERFILE_PATH: ./Dockerfile
PYTHON_VERSION: 3.9
jobs:
# Lint Dockerfile using Hadolint
# dockerfile-linter:
# runs-on: ubuntu-latest
# steps:
# -
# name: Checkout
# uses: actions/checkout@v3
# -
# name: Install Hadolint
# run: |
# VERSION=$(curl --silent "https://api.github.com/repos/hadolint/hadolint/releases/latest" | \
# grep '"tag_name":' | \
# sed -E 's/.*"v([^"]+)".*/\1/' \
# ) && curl -L -o /tmp/hadolint https://github.com/hadolint/hadolint/releases/download/v${VERSION}/hadolint-Linux-x86_64 \
# && chmod +x /tmp/hadolint
# -
# name: Run Hadolint
# run: |
# /tmp/hadolint util/Dockerfile
# Build Prowler OSS container
container-build:
container-build-push:
# needs: dockerfile-linter
runs-on: ubuntu-latest
env:
POETRY_VIRTUALENVS_CREATE: "false"
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Build
uses: docker/build-push-action@v2
with:
# Without pushing to registries
push: false
tags: ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }}
file: ${{ env.DOCKERFILE_PATH }}
outputs: type=docker,dest=/tmp/${{ env.IMAGE_NAME }}.tar
- name: Share image between jobs
uses: actions/upload-artifact@v2
with:
name: ${{ env.IMAGE_NAME }}.tar
path: /tmp/${{ env.IMAGE_NAME }}.tar
# Lint Prowler OSS container using Dockle
# container-linter:
# needs: container-build
# runs-on: ubuntu-latest
# steps:
# -
# name: Get container image from shared
# uses: actions/download-artifact@v2
# with:
# name: ${{ env.IMAGE_NAME }}.tar
# path: /tmp
# -
# name: Load Docker image
# run: |
# docker load --input /tmp/${{ env.IMAGE_NAME }}.tar
# docker image ls -a
# -
# name: Install Dockle
# run: |
# VERSION=$(curl --silent "https://api.github.com/repos/goodwithtech/dockle/releases/latest" | \
# grep '"tag_name":' | \
# sed -E 's/.*"v([^"]+)".*/\1/' \
# ) && curl -L -o dockle.deb https://github.com/goodwithtech/dockle/releases/download/v${VERSION}/dockle_${VERSION}_Linux-64bit.deb \
# && sudo dpkg -i dockle.deb && rm dockle.deb
# -
# name: Run Dockle
# run: dockle ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }}
# Push Prowler OSS container to registries
container-push:
# needs: container-linter
needs: container-build
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read # This is required for actions/checkout
steps:
- name: Get container image from shared
uses: actions/download-artifact@v2
- name: Setup python (release)
if: github.event_name == 'release'
uses: actions/setup-python@v2
with:
name: ${{ env.IMAGE_NAME }}.tar
path: /tmp
- name: Load Docker image
python-version: ${{ env.PYTHON_VERSION }}
- name: Install dependencies (release)
if: github.event_name == 'release'
run: |
docker load --input /tmp/${{ env.IMAGE_NAME }}.tar
docker image ls -a
pipx install poetry
pipx inject poetry poetry-bumpversion
- name: Update Prowler version (release)
if: github.event_name == 'release'
run: |
poetry version ${{ github.event.release.tag_name }}
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to Public ECR
uses: docker/login-action@v2
with:
@@ -123,70 +64,54 @@ jobs:
username: ${{ secrets.PUBLIC_ECR_AWS_ACCESS_KEY_ID }}
password: ${{ secrets.PUBLIC_ECR_AWS_SECRET_ACCESS_KEY }}
env:
AWS_REGION: ${{ env.AWS_REGION_PRO }}
- name: Configure AWS Credentials -- STG
if: github.event_name == 'push'
uses: aws-actions/configure-aws-credentials@v1
with:
aws-region: ${{ env.AWS_REGION_STG }}
role-to-assume: ${{ secrets.STG_IAM_ROLE_ARN }}
role-session-name: build-lint-containers-stg
- name: Login to ECR -- STG
if: github.event_name == 'push'
uses: docker/login-action@v2
with:
registry: ${{ secrets.STG_ECR }}
- name: Configure AWS Credentials -- PLATFORM
if: github.event_name == 'release'
uses: aws-actions/configure-aws-credentials@v1
with:
aws-region: ${{ env.AWS_REGION_PLATFORM }}
role-to-assume: ${{ secrets.STG_IAM_ROLE_ARN }}
role-session-name: build-lint-containers-pro
- name: Login to ECR -- PLATFORM
if: github.event_name == 'release'
uses: docker/login-action@v2
with:
registry: ${{ secrets.PLATFORM_ECR }}
- # Push to master branch - push "latest" tag
name: Tag (latest)
if: github.event_name == 'push'
run: |
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ env.LATEST_TAG }}
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
- # Push to master branch - push "latest" tag
name: Push (latest)
if: github.event_name == 'push'
run: |
docker push ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ env.LATEST_TAG }}
docker push ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
docker push ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
- # Tag the new release (stable and release tag)
name: Tag (release)
if: github.event_name == 'release'
run: |
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ github.event.release.tag_name }}
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
AWS_REGION: ${{ env.AWS_REGION }}
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ env.STABLE_TAG }}
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- # Push the new release (stable and release tag)
name: Push (release)
- name: Build and push container image (latest)
if: github.event_name == 'push'
uses: docker/build-push-action@v2
with:
push: true
tags: |
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
file: ${{ env.DOCKERFILE_PATH }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build and push container image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@v2
with:
# Use local context to get changes
# https://github.com/docker/build-push-action#path-context
context: .
push: true
tags: |
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
file: ${{ env.DOCKERFILE_PATH }}
cache-from: type=gha
cache-to: type=gha,mode=max
dispatch-action:
needs: container-build-push
runs-on: ubuntu-latest
steps:
- name: Get latest commit info
if: github.event_name == 'push'
run: |
LATEST_COMMIT_HASH=$(echo ${{ github.event.after }} | cut -b -7)
echo "LATEST_COMMIT_HASH=${LATEST_COMMIT_HASH}" >> $GITHUB_ENV
- name: Dispatch event for latest
if: github.event_name == 'push'
run: |
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" -H "X-GitHub-Api-Version: 2022-11-28" --data '{"event_type":"dispatch","client_payload":{"version":"latest", "tag": "${{ env.LATEST_COMMIT_HASH }}"}}'
- name: Dispatch event for release
if: github.event_name == 'release'
run: |
docker push ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ github.event.release.tag_name }}
docker push ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
docker push ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
docker push ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ env.STABLE_TAG }}
docker push ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
docker push ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
- name: Delete artifacts
if: always()
uses: geekyeggo/delete-artifact@v1
with:
name: ${{ env.IMAGE_NAME }}.tar
curl https://api.github.com/repos/${{ secrets.DISPATCH_OWNER }}/${{ secrets.DISPATCH_REPO }}/dispatches -H "Accept: application/vnd.github+json" -H "Authorization: Bearer ${{ secrets.ACCESS_TOKEN }}" -H "X-GitHub-Api-Version: 2022-11-28" --data '{"event_type":"dispatch","client_payload":{"version":"release", "tag":"${{ github.event.release.tag_name }}"}}'

.github/workflows/codeql.yml (vendored, new file): 57 changed lines

@@ -0,0 +1,57 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ "master", prowler-2, prowler-3.0-dev ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ "master" ]
schedule:
- cron: '00 12 * * *'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@v3
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# Details on CodeQL's query packs refer to : https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
with:
category: "/language:${{matrix.language}}"


@@ -13,46 +13,56 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9"]
python-version: ["3.9", "3.10", "3.11"]
steps:
- uses: actions/checkout@v3
- name: Install poetry
run: |
python -m pip install --upgrade pip
pipx install poetry
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pipenv
pipenv install --dev
pipenv run pip list
poetry install
poetry run pip list
VERSION=$(curl --silent "https://api.github.com/repos/hadolint/hadolint/releases/latest" | \
grep '"tag_name":' | \
sed -E 's/.*"v([^"]+)".*/\1/' \
) && curl -L -o /tmp/hadolint "https://github.com/hadolint/hadolint/releases/download/v${VERSION}/hadolint-Linux-x86_64" \
&& chmod +x /tmp/hadolint
- name: Poetry check
run: |
poetry lock --check
- name: Lint with flake8
run: |
pipenv run flake8 . --ignore=E266,W503,E203,E501,W605,E128 --exclude contrib
poetry run flake8 . --ignore=E266,W503,E203,E501,W605,E128 --exclude contrib
- name: Checking format with black
run: |
pipenv run black --check .
poetry run black --check .
- name: Lint with pylint
run: |
pipenv run pylint --disable=W,C,R,E -j 0 -rn -sn prowler/
poetry run pylint --disable=W,C,R,E -j 0 -rn -sn prowler/
- name: Bandit
run: |
pipenv run bandit -q -lll -x '*_test.py,./contrib/' -r .
poetry run bandit -q -lll -x '*_test.py,./contrib/' -r .
- name: Safety
run: |
pipenv run safety check
poetry run safety check
- name: Vulture
run: |
pipenv run vulture --exclude "contrib" --min-confidence 100 .
poetry run vulture --exclude "contrib" --min-confidence 100 .
- name: Hadolint
run: |
/tmp/hadolint Dockerfile --ignore=DL3013
- name: Test with pytest
run: |
pipenv run pytest tests -n auto
poetry run pytest -n auto --cov=./prowler --cov-report=xml tests
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v3
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
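For anyone reproducing this CI locally, the same lint and test steps the workflow now runs through Poetry can be executed directly; a minimal sketch, assuming Poetry and the project's dev dependencies are installed:
```console
poetry install
poetry run flake8 . --ignore=E266,W503,E203,E501,W605,E128 --exclude contrib
poetry run black --check .
poetry run pylint --disable=W,C,R,E -j 0 -rn -sn prowler/
poetry run pytest -n auto --cov=./prowler --cov-report=xml tests
```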

.github/workflows/pypi-release.yml (vendored, new file): 79 changed lines

@@ -0,0 +1,79 @@
name: pypi-release
on:
release:
types: [published]
env:
RELEASE_TAG: ${{ github.event.release.tag_name }}
GITHUB_BRANCH: master
jobs:
release-prowler-job:
runs-on: ubuntu-latest
env:
POETRY_VIRTUALENVS_CREATE: "false"
name: Release Prowler to PyPI
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v3
with:
ref: ${{ env.GITHUB_BRANCH }}
- name: Install dependencies
run: |
pipx install poetry
pipx inject poetry poetry-bumpversion
- name: setup python
uses: actions/setup-python@v4
with:
python-version: 3.9
cache: 'poetry'
- name: Change version and Build package
run: |
poetry version ${{ env.RELEASE_TAG }}
git config user.name "github-actions"
git config user.email "<noreply@github.com>"
git add prowler/config/config.py pyproject.toml
git commit -m "chore(release): ${{ env.RELEASE_TAG }}" --no-verify
git tag -fa ${{ env.RELEASE_TAG }} -m "chore(release): ${{ env.RELEASE_TAG }}"
git push -f origin ${{ env.RELEASE_TAG }}
poetry build
- name: Publish prowler package to PyPI
run: |
poetry config pypi-token.pypi ${{ secrets.PYPI_API_TOKEN }}
poetry publish
# Create pull request with new version
- name: Create Pull Request
uses: peter-evans/create-pull-request@v4
with:
token: ${{ secrets.PROWLER_ACCESS_TOKEN }}
commit-message: "chore(release): update Prowler Version to ${{ env.RELEASE_TAG }}."
branch: release-${{ env.RELEASE_TAG }}
labels: "status/waiting-for-revision, severity/low"
title: "chore(release): update Prowler Version to ${{ env.RELEASE_TAG }}"
body: |
### Description
This PR updates Prowler Version to ${{ env.RELEASE_TAG }}.
### License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
- name: Replicate PyPi Package
run: |
rm -rf ./dist && rm -rf ./build && rm -rf prowler.egg-info
pip install toml
python util/replicate_pypi_package.py
poetry build
- name: Publish prowler-cloud package to PyPI
run: |
poetry config pypi-token.pypi ${{ secrets.PYPI_API_TOKEN }}
poetry publish
# Create pull request to github.com/Homebrew/homebrew-core to update prowler formula
- name: Bump Homebrew formula
uses: mislav/bump-homebrew-formula-action@v2
with:
formula-name: prowler
base-branch: release-${{ env.RELEASE_TAG }}
env:
COMMITTER_TOKEN: ${{ secrets.PROWLER_ACCESS_TOKEN }}
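Outside of CI, the packaging steps this release workflow automates can be approximated by hand; a rough sketch, with the version and token values shown as placeholders:
```console
pipx install poetry
pipx inject poetry poetry-bumpversion
poetry version <RELEASE_TAG>                      # placeholder release tag
poetry build
poetry config pypi-token.pypi <PYPI_API_TOKEN>    # placeholder token
poetry publish
```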


@@ -19,6 +19,7 @@ jobs:
permissions:
id-token: write
pull-requests: write
contents: write
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
@@ -51,11 +52,11 @@ jobs:
- name: Create Pull Request
uses: peter-evans/create-pull-request@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
token: ${{ secrets.PROWLER_ACCESS_TOKEN }}
commit-message: "feat(regions_update): Update regions for AWS services."
branch: "aws-services-regions-updated"
branch: "aws-services-regions-updated-${{ github.sha }}"
labels: "status/waiting-for-revision, severity/low"
title: "feat(regions_update): Changes in regions for AWS services."
title: "chore(regions_update): Changes in regions for AWS services."
body: |
### Description

.gitignore (vendored): 5 changed lines

@@ -46,3 +46,8 @@ junit-reports/
# .env
.env*
# Coverage
.coverage*
.coverage
coverage*


@@ -1,7 +1,7 @@
repos:
## GENERAL
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.3.0
rev: v4.4.0
hooks:
- id: check-merge-conflict
- id: check-yaml
@@ -13,14 +13,22 @@ repos:
- id: pretty-format-json
args: ["--autofix", --no-sort-keys, --no-ensure-ascii]
## TOML
- repo: https://github.com/macisamuele/language-formatters-pre-commit-hooks
rev: v2.10.0
hooks:
- id: pretty-format-toml
args: [--autofix]
files: pyproject.toml
## BASH
- repo: https://github.com/koalaman/shellcheck-precommit
rev: v0.8.0
rev: v0.9.0
hooks:
- id: shellcheck
## PYTHON
- repo: https://github.com/myint/autoflake
rev: v1.7.7
rev: v2.2.0
hooks:
- id: autoflake
args:
@@ -31,30 +39,32 @@ repos:
]
- repo: https://github.com/timothycrosley/isort
rev: 5.10.1
rev: 5.12.0
hooks:
- id: isort
args: ["--profile", "black"]
- repo: https://github.com/psf/black
rev: 22.10.0
rev: 22.12.0
hooks:
- id: black
- repo: https://github.com/pycqa/flake8
rev: 5.0.4
rev: 6.1.0
hooks:
- id: flake8
exclude: contrib
args: ["--ignore=E266,W503,E203,E501,W605"]
- repo: https://github.com/haizaar/check-pipfile-lock
rev: v0.0.5
- repo: https://github.com/python-poetry/poetry
rev: 1.6.0 # add version here
hooks:
- id: check-pipfile-lock
- id: poetry-check
- id: poetry-lock
args: ["--no-update"]
- repo: https://github.com/hadolint/hadolint
rev: v2.12.0
rev: v2.12.1-beta
hooks:
- id: hadolint
args: ["--ignore=DL3013"]
@@ -66,6 +76,15 @@ repos:
entry: bash -c 'pylint --disable=W,C,R,E -j 0 -rn -sn prowler/'
language: system
- id: trufflehog
name: TruffleHog
description: Detect secrets in your data.
# entry: bash -c 'trufflehog git file://. --only-verified --fail'
# For running trufflehog in docker, use the following entry instead:
entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
language: system
stages: ["commit", "push"]
- id: pytest-check
name: pytest-check
entry: bash -c 'pytest tests -n auto'
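These hooks are normally exercised through the standard pre-commit CLI; a minimal sketch, assuming pre-commit is installed in the local environment:
```console
pip install pre-commit
pre-commit install             # register the git hooks in this clone
pre-commit run --all-files     # run every configured hook once
```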

.readthedocs.yaml (new file): 23 changed lines

@@ -0,0 +1,23 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
build:
os: "ubuntu-22.04"
tools:
python: "3.9"
jobs:
post_create_environment:
# Install poetry
# https://python-poetry.org/docs/#installing-manually
- pip install poetry
# Tell poetry to not use a virtual environment
- poetry config virtualenvs.create false
post_install:
- poetry install -E docs
mkdocs:
configuration: mkdocs.yml

CONTRIBUTING.md (new file): 13 changed lines

@@ -0,0 +1,13 @@
# Do you want to learn on how to...
- Contribute with your code or fixes to Prowler
- Create a new check for a provider
- Create a new security compliance framework
- Add a custom output format
- Add a new integration
- Contribute with documentation
Want some swag as appreciation for your contribution?
# Prowler Developer Guide
https://docs.prowler.cloud/en/latest/tutorials/developer-guide/


@@ -16,6 +16,7 @@ USER prowler
WORKDIR /home/prowler
COPY prowler/ /home/prowler/prowler/
COPY pyproject.toml /home/prowler
COPY README.md /home/prowler
# Install dependencies
ENV HOME='/home/prowler'
@@ -24,4 +25,9 @@ ENV PATH="$HOME/.local/bin:$PATH"
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir .
# Remove Prowler directory and build files
USER 0
RUN rm -rf /home/prowler/prowler /home/prowler/pyproject.toml /home/prowler/README.md /home/prowler/build /home/prowler/prowler.egg-info
USER prowler
ENTRYPOINT ["prowler"]
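Given the `ENTRYPOINT ["prowler"]` above, the image can be built and exercised locally; a brief sketch, where the `prowler:local` tag is only illustrative:
```console
docker build -t prowler:local .
docker run --rm -it prowler:local aws --list-services
```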


@@ -2,12 +2,19 @@
##@ Testing
test: ## Test with pytest
pytest -n auto -vvv -s -x
rm -rf .coverage && \
pytest -n auto -vvv -s --cov=./prowler --cov-report=xml tests
coverage: ## Show Test Coverage
coverage run --skip-covered -m pytest -v && \
coverage report -m && \
rm -rf .coverage
rm -rf .coverage && \
coverage report -m
coverage-html: ## Show Test Coverage
rm -rf ./htmlcov && \
coverage html && \
open htmlcov/index.html
##@ Linting
format: ## Format Code
@@ -24,11 +31,11 @@ lint: ## Lint Code
##@ PyPI
pypi-clean: ## Delete the distribution files
rm -rf ./dist && rm -rf ./build && rm -rf prowler_cloud.egg-info
rm -rf ./dist && rm -rf ./build && rm -rf prowler.egg-info
pypi-build: ## Build package
$(MAKE) pypi-clean && \
python3 -m build
poetry build
pypi-upload: ## Upload package
python3 -m twine upload --repository pypi dist/*
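The Makefile targets touched here can be invoked as usual; a short usage sketch, assuming the Poetry-based dev environment from the workflows above:
```console
make test            # pytest with coverage written to an XML report
make coverage        # terminal coverage report
make coverage-html   # HTML coverage report
make pypi-build      # clean dist artifacts, then poetry build
```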

Pipfile (deleted): 41 changed lines

@@ -1,41 +0,0 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
colorama = "0.4.4"
boto3 = "1.26.3"
arnparse = "0.0.2"
botocore = "1.27.8"
pydantic = "1.9.1"
shodan = "1.28.0"
detect-secrets = "1.4.0"
alive-progress = "2.4.1"
tabulate = "0.9.0"
azure-identity = "1.12.0"
azure-storage-blob = "12.14.1"
msgraph-core = "0.2.2"
azure-mgmt-subscription = "3.1.1"
azure-mgmt-authorization = "3.0.0"
azure-mgmt-security = "3.0.0"
azure-mgmt-storage = "21.0.0"
[dev-packages]
black = "22.10.0"
pylint = "2.15.9"
flake8 = "5.0.4"
bandit = "1.7.4"
safety = "2.3.1"
vulture = "2.6"
moto = "4.0.13"
docker = "6.0.0"
openapi-spec-validator = "0.5.1"
pytest = "7.1.2"
pytest-xdist = "2.5.0"
coverage = "7.0.3"
sure = "2.0.0"
freezegun = "1.2.1"
[requires]
python_version = "3.9"

Pipfile.lock (generated): 1511 changed lines; file diff suppressed because it is too large.


@@ -11,12 +11,14 @@
</p>
<p align="center">
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog"><img alt="Slack Shield" src="https://img.shields.io/badge/slack-prowler-brightgreen.svg?logo=slack"></a>
<a href="https://pypi.org/project/prowler-cloud/"><img alt="Python Version" src="https://img.shields.io/pypi/v/prowler-cloud.svg"></a>
<a href="https://pypi.python.org/pypi/prowler-cloud/"><img alt="Python Version" src="https://img.shields.io/pypi/pyversions/prowler-cloud.svg"></a>
<a href="https://pypi.org/project/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/v/prowler.svg"></a>
<a href="https://pypi.python.org/pypi/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/pyversions/prowler.svg"></a>
<a href="https://pypistats.org/packages/prowler"><img alt="PyPI Prowler Downloads" src="https://img.shields.io/pypi/dw/prowler.svg?label=prowler%20downloads"></a>
<a href="https://pypistats.org/packages/prowler-cloud"><img alt="PyPI Prowler-Cloud Downloads" src="https://img.shields.io/pypi/dw/prowler-cloud.svg?label=prowler-cloud%20downloads"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/cloud/build/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/image-size/toniblyx/prowler"></a>
<a href="https://gallery.ecr.aws/o4g1s5r6/prowler"><img width="120" height=19" alt="AWS ECR Gallery" src="https://user-images.githubusercontent.com/3985464/151531396-b6535a68-c907-44eb-95a1-a09508178616.png"></a>
<a href="https://gallery.ecr.aws/prowler-cloud/prowler"><img width="120" height=19" alt="AWS ECR Gallery" src="https://user-images.githubusercontent.com/3985464/151531396-b6535a68-c907-44eb-95a1-a09508178616.png"></a>
</p>
<p align="center">
<a href="https://github.com/prowler-cloud/prowler"><img alt="Repo size" src="https://img.shields.io/github/repo-size/prowler-cloud/prowler"></a>
@@ -31,21 +33,34 @@
# Description
`Prowler` is an Open Source security tool to perform AWS and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.
`Prowler` is an Open Source security tool to perform AWS, GCP and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.
It contains hundreds of controls covering CIS, PCI-DSS, ISO27001, GDPR, HIPAA, FFIEC, SOC2, AWS FTR, ENS and custom security frameworks.
It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, FedRAMP, PCI-DSS, GDPR, HIPAA, FFIEC, SOC2, GXP, AWS Well-Architected Framework Security Pillar, AWS Foundational Technical Review (FTR), ENS (Spanish National Security Scheme) and your custom security frameworks.
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.cloud/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.cloud/en/latest/tutorials/misc/#categories) |
|---|---|---|---|---|
| AWS | 290 | 56 -> `prowler aws --list-services` | 25 -> `prowler aws --list-compliance` | 5 -> `prowler aws --list-categories` |
| GCP | 73 | 11 -> `prowler gcp --list-services` | 1 -> `prowler gcp --list-compliance` | 2 -> `prowler gcp --list-categories`|
| Azure | 23 | 4 -> `prowler azure --list-services` | CIS soon | 1 -> `prowler azure --list-categories` |
| Kubernetes | Planned | - | - | - |
# 📖 Documentation
The full documentation can now be found at [https://docs.prowler.cloud](https://docs.prowler.cloud)
## Looking for Prowler v2 documentation?
For Prowler v2 Documentation, please go to https://github.com/prowler-cloud/prowler/tree/2.12.1.
# ⚙️ Install
## Pip package
Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-cloud/), thus can be installed using pip with Python >= 3.9:
```console
pip install prowler-cloud
pip install prowler
prowler -v
```
More details at https://docs.prowler.cloud
## Containers
@@ -58,34 +73,29 @@ The available versions of Prowler are the following:
The container images are available here:
- [DockerHub](https://hub.docker.com/r/toniblyx/prowler/tags)
- [AWS Public ECR](https://gallery.ecr.aws/o4g1s5r6/prowler)
- [AWS Public ECR](https://gallery.ecr.aws/prowler-cloud/prowler)
## From Github
Python >= 3.9 is required with pip and pipenv:
Python >= 3.9 is required with pip and poetry:
```
git clone https://github.com/prowler-cloud/prowler
cd prowler
pipenv shell
pipenv install
poetry shell
poetry install
python prowler.py -v
```
# 📖 Documentation
The full documentation now can be found at [https://docs.prowler.cloud](https://docs.prowler.cloud)
# 📐✏️ High level architecture
You can run Prowler from your workstation, an EC2 instance, Fargate or any other container, Codebuild, CloudShell and Cloud9.
![Architecture](https://github.com/prowler-cloud/prowler/blob/62c1ce73bbcdd6b9e5ba03dfcae26dfd165defd9/docs/img/architecture.png?raw=True)
![Architecture](https://github.com/prowler-cloud/prowler/assets/38561120/080261d9-773d-4af1-af79-217a273e3176)
# 📝 Requirements
Prowler has been written in Python using the [AWS SDK (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html#) and [Azure SDK](https://azure.github.io/azure-sdk-for-python/).
Prowler has been written in Python using the [AWS SDK (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html#), [Azure SDK](https://azure.github.io/azure-sdk-for-python/) and [GCP API Python Client](https://github.com/googleapis/google-api-python-client/).
## AWS
Since Prowler uses AWS Credentials under the hood, you can follow any authentication method as described [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence).
@@ -105,8 +115,8 @@ Make sure you have properly configured your AWS-CLI with a valid Access Key and
Those credentials must be associated to a user or role with proper permissions to do all checks. To make sure, add the following AWS managed policies to the user or role being used:
- arn:aws:iam::aws:policy/SecurityAudit
- arn:aws:iam::aws:policy/job-function/ViewOnlyAccess
- `arn:aws:iam::aws:policy/SecurityAudit`
- `arn:aws:iam::aws:policy/job-function/ViewOnlyAccess`
> Moreover, some additional read-only permissions are needed for several checks; make sure you also attach the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json) to the role you are using.
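If you prefer to wire these permissions up programmatically, the following is a minimal `boto3` sketch; the role name `ProwlerExecRole` and the local path to `prowler-additions-policy.json` are assumptions for this example, not something Prowler requires:
```python
# Minimal sketch (assumptions: a role named "ProwlerExecRole" already exists
# and prowler-additions-policy.json has been downloaded locally).
import boto3

iam = boto3.client("iam")
role_name = "ProwlerExecRole"  # hypothetical role used to run Prowler

# Attach the AWS managed policies recommended above
for policy_arn in (
    "arn:aws:iam::aws:policy/SecurityAudit",
    "arn:aws:iam::aws:policy/job-function/ViewOnlyAccess",
):
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)

# Create and attach the custom prowler-additions policy
with open("prowler-additions-policy.json") as f:
    policy_document = f.read()

policy = iam.create_policy(
    PolicyName="prowler-additions-policy",
    PolicyDocument=policy_document,
)
iam.attach_role_policy(RoleName=role_name, PolicyArn=policy["Policy"]["Arn"])
```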
@@ -123,7 +133,7 @@ Those credentials must be associated to a user or role with proper permissions t
### Service Principal authentication
To allow Prowler assume the service principal identity to start the scan it is needed to configure the following environment variables:
To allow Prowler to assume the service principal identity and start the scan, you need to configure the following environment variables:
```console
export AZURE_CLIENT_ID="XXXXXXXXX"
@@ -134,11 +144,11 @@ export AZURE_CLIENT_SECRET="XXXXXXX"
If you try to execute Prowler with the `--sp-env-auth` flag and those variables are empty or not exported, the execution is going to fail.
### AZ CLI / Browser / Managed Identity authentication
The other three cases does not need additional configuration, `--az-cli-auth` and `--managed-identity-auth` are automated options, `--browser-auth` needs the user to authenticate using the default browser to start the scan.
The other three cases do not need additional configuration: `--az-cli-auth` and `--managed-identity-auth` are automated options, while `--browser-auth` requires the user to authenticate using the default browser to start the scan. `--browser-auth` also needs the tenant ID to be specified with `--tenant-id`.
### Permissions
To use each one you need to pass the proper flag to the execution. Prowler for Azure handles two types of permission scopes, which are:
To use each one, you need to pass the proper flag to the execution. Prowler for Azure handles two types of permission scopes, which are:
- **Azure Active Directory permissions**: Used to retrieve metadata from the identity assumed by Prowler and future AAD checks (not mandatory to have access to execute the tool)
- **Subscription scope permissions**: Required to launch the checks against your resources, mandatory to launch the tool.
@@ -160,6 +170,22 @@ Regarding the subscription scope, Prowler by default scans all the subscriptions
- `Reader`
## Google Cloud Platform
Prowler will follow the same credentials search as [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order):
1. [GOOGLE_APPLICATION_CREDENTIALS environment variable](https://cloud.google.com/docs/authentication/application-default-credentials#GAC)
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
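As a quick sanity check of which Application Default Credentials will be picked up, you can resolve them with the `google-auth` library; this is only an illustrative sketch and is not required to run Prowler:
```python
# Minimal sketch: verify which Application Default Credentials and project
# the Google auth libraries resolve, following the search order above.
import google.auth

credentials, project_id = google.auth.default()
print(f"Resolved default project: {project_id}")
print(f"Credentials type: {type(credentials).__name__}")
```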
Those credentials must be associated with a user or service account that has enough permissions to perform all checks. To ensure this, add the following roles to the member associated with the credentials:
- Viewer
- Security Reviewer
- Stackdriver Account Viewer
> By default, `prowler` will scan all accessible GCP Projects, use flag `--project-ids` to specify the projects to be scanned.
# 💻 Basic Usage
To run prowler, you will need to specify the provider (e.g. aws or azure):
@@ -234,12 +260,14 @@ prowler azure [--sp-env-auth, --az-cli-auth, --browser-auth, --managed-identity-
```
> By default, `prowler` will scan all Azure subscriptions.
# 🎉 New Features
## Google Cloud Platform
- Python: we got rid of all bash and it is now all in Python.
- Faster: huge performance improvements (same account from 2.5 hours to 4 minutes).
- Developers and community: we have made it easier to contribute with new checks and new compliance frameworks. We also included unit tests.
- Multi-cloud: in addition to AWS, we have added Azure; we plan to include GCP and OCI soon, so let us know if you want to contribute!
Optionally, you can provide the location of an application credential JSON file with the following argument:
```console
prowler gcp --credentials-file path
```
> By default, `prowler` will scan all accessible GCP Projects, use flag `--project-ids` to specify the projects to be scanned.
# 📃 License

SECURITY.md Normal file
View File

@@ -0,0 +1,23 @@
# Security Policy
## Software Security
As an **AWS Partner**, we have passed the [AWS Foundational Technical Review (FTR)](https://aws.amazon.com/partners/foundational-technical-review/) and we use the following tools and automation to make sure our code is secure and our dependencies are up to date:
- `bandit` for code security review.
- `safety` and `dependabot` for dependencies.
- `hadolint` and `dockle` for our containers security.
- `snyk` in Docker Hub.
- `clair` in Amazon ECR.
- `vulture`, `flake8`, `black` and `pylint` for formatting and best practices.
## Reporting a Vulnerability
If you would like to report a vulnerability or have a security concern regarding Prowler Open Source or the ProwlerPro service, please submit the information by contacting help@prowler.pro.
The information you share with Verica as part of this process is kept confidential within Verica and the Prowler team. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.
We will review the submitted report, and assign it a tracking number. We will then respond to you, acknowledging receipt of the report, and outline the next steps in the process.
You will receive a non-automated response to your initial contact within 24 hours, confirming receipt of your reported vulnerability.
We will coordinate public notification of any validated vulnerability with you. Where possible, we prefer that our respective public disclosures be posted simultaneously.

View File

@@ -1,43 +0,0 @@
{
  "Provider": "aws",
  "CheckID": "ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
  "CheckTitle": "Ensure no security groups allow ingress from 0.0.0.0/0 or ::/0 to SSH port 22.",
  "CheckType": "Data Protection",
  "ServiceName": "ec2",
  "SubServiceName": "securitygroup",
  "ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
  "Severity": "low",
  "ResourceType": "AwsEc2SecurityGroup",
  "Description": "Extended Description",
  "Risk": "If Security groups are not properly configured the attack surface is increased.",
  "RelatedUrl": "https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html",
  "Remediation": {
    "Code": {
      "CLI": "cli command or URL to the cli command location.",
      "NativeIaC": "code or URL to the code location.",
      "Other": "cli command or URL to the cli command location.",
      "Terraform": "code or URL to the code location."
    },
    "Recommendation": {
      "Text": "Use a Zero Trust approach. Narrow ingress traffic as much as possible. Consider north-south as well as east-west traffic.",
      "Url": "https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-best-practices.html"
    }
  },
  "Categories": [
    "cat1",
    "cat2"
  ],
  "Tags": {
    "Tag1Key": "value",
    "Tag2Key": "value"
  },
  "DependsOn": [
    "othercheck1",
    "othercheck2"
  ],
  "RelatedTo": [
    "othercheck3",
    "othercheck4"
  ],
  "Notes": "additional information"
}

View File

@@ -0,0 +1,398 @@
---
AWSTemplateFormatVersion: 2010-09-09
Description: Creates a CodeBuild project to audit an AWS account with Prowler Version 2 and stores the HTML report in an S3 bucket. This will run once at the beginning and on a schedule afterwards. Partial contribution from https://github.com/stevecjones
Parameters:
ServiceName:
Description: 'Specifies the service name used within component naming'
Type: String
Default: 'prowler'
LogsRetentionInDays:
Description: 'Specifies the number of days you want to retain CodeBuild run log events in the specified log group. JUnit reports are kept for 30 days; HTML reports in S3 are not deleted'
Type: Number
Default: 3
AllowedValues: [1, 3, 5, 7, 14, 30, 60, 90, 180, 365]
ProwlerOptions:
Description: 'Options to pass to Prowler command, use -f to filter specific regions, -c for specific checks, -s for specific services, for SecurityHub integration use "-f shub_region -S", for more options see -h. For a complete assessment leave this empty.'
Type: String
# The Prowler command below runs a set of checks; configure it based on your needs. With no options it will run all checks in all regions.
Default: -f eu-west-1 -s s3 iam ec2
ProwlerScheduler:
Description: The time when Prowler will run, in cron format. Default is daily at 22:00h (10 PM), 'cron(0 22 * * ? *)'; for every 5 hours, 'rate(5 hours)' also works. More info here https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html.
Type: String
Default: 'cron(0 22 * * ? *)'
Resources:
CodeBuildStartBuild:
Type: 'Custom::CodeBuildStartBuild'
DependsOn:
- CodeBuildLogPolicy
- CodeBuildStartLogPolicy
Properties:
Build: !Ref ProwlerCodeBuild
ServiceToken: !GetAtt CodeBuildStartBuildLambda.Arn
CodeBuildStartBuildLambdaRole:
Type: 'AWS::IAM::Role'
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: !Sub lambda.${AWS::URLSuffix}
Action: 'sts:AssumeRole'
Description: !Sub 'DO NOT DELETE - Used by Lambda. Created by CloudFormation Stack ${AWS::StackId}'
Policies:
- PolicyName: StartBuildInline
PolicyDocument:
Statement:
- Effect: Allow
Action: 'codebuild:StartBuild'
Resource: !GetAtt ProwlerCodeBuild.Arn
CodeBuildStartBuildLambda:
Type: 'AWS::Lambda::Function'
Metadata:
cfn_nag:
rules_to_suppress:
- id: W58
reason: 'This Lambda has permissions to write Logs'
- id: W89
reason: 'VPC is not needed'
- id: W92
reason: 'ReservedConcurrentExecutions not needed'
Properties:
Handler: index.lambda_handler
MemorySize: 128
Role: !Sub ${CodeBuildStartBuildLambdaRole.Arn}
Timeout: 120
Runtime: python3.9
Code:
ZipFile: |
import boto3
import cfnresponse
from botocore.exceptions import ClientError

def lambda_handler(event,context):
    props = event['ResourceProperties']
    codebuild_client = boto3.client('codebuild')
    if (event['RequestType'] == 'Create' or event['RequestType'] == 'Update'):
        try:
            response = codebuild_client.start_build(projectName=props['Build'])
            print(response)
            print("Respond: SUCCESS")
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
        except Exception as ex:
            print(ex.response['Error']['Message'])
            cfnresponse.send(event, context, cfnresponse.FAILED, ex.response)
    else:
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
CodeBuildStartLogGroup:
Type: 'AWS::Logs::LogGroup'
DeletionPolicy: Delete
UpdateReplacePolicy: Delete
Metadata:
cfn_nag:
rules_to_suppress:
- id: W84
reason: 'KMS encryption is not needed.'
Properties:
LogGroupName: !Sub '/aws/lambda/${CodeBuildStartBuildLambda}'
RetentionInDays: !Ref LogsRetentionInDays
CodeBuildStartLogPolicy:
Type: AWS::IAM::Policy
Properties:
PolicyDocument:
Version: '2012-10-17'
Statement:
- Action:
- logs:CreateLogStream
- logs:PutLogEvents
Effect: Allow
Resource: !GetAtt CodeBuildStartLogGroup.Arn
PolicyName: LogGroup
Roles:
- !Ref CodeBuildStartBuildLambdaRole
ArtifactBucket:
Type: AWS::S3::Bucket
Metadata:
cfn_nag:
rules_to_suppress:
- id: W35
reason: 'S3 Access Logging is not needed'
Properties:
Tags:
- Key: Name
Value: !Sub '${ServiceName}-${AWS::AccountId}-S3-Prowler-${AWS::StackName}'
BucketName: !Sub '${ServiceName}-reports-${AWS::Region}-prowler-${AWS::AccountId}'
AccessControl: LogDeliveryWrite
VersioningConfiguration:
Status: Enabled
BucketEncryption:
ServerSideEncryptionConfiguration:
- ServerSideEncryptionByDefault:
SSEAlgorithm: AES256
PublicAccessBlockConfiguration:
BlockPublicAcls: true
BlockPublicPolicy: true
IgnorePublicAcls: true
RestrictPublicBuckets: true
ArtifactBucketPolicy:
Type: AWS::S3::BucketPolicy
Properties:
Bucket: !Ref ArtifactBucket
PolicyDocument:
Id: Content
Version: '2012-10-17'
Statement:
- Action: '*'
Condition:
Bool:
aws:SecureTransport: false
Effect: Deny
Principal: '*'
Resource: !Sub '${ArtifactBucket.Arn}/*'
Sid: S3ForceSSL
- Action: 's3:PutObject'
Condition:
'Null':
s3:x-amz-server-side-encryption: true
Effect: Deny
Principal: '*'
Resource: !Sub '${ArtifactBucket.Arn}/*'
Sid: DenyUnEncryptedObjectUploads
CodeBuildServiceRole:
Type: AWS::IAM::Role
Metadata:
cfn_nag:
rules_to_suppress:
- id: W11
reason: 'Role complies with the least privilege principle.'
Properties:
Description: !Sub 'DO NOT DELETE - Used by CodeBuild. Created by CloudFormation Stack ${AWS::StackId}'
ManagedPolicyArns:
- !Sub 'arn:${AWS::Partition}:iam::aws:policy/job-function/SupportUser'
- !Sub 'arn:${AWS::Partition}:iam::aws:policy/job-function/ViewOnlyAccess'
- !Sub 'arn:${AWS::Partition}:iam::aws:policy/SecurityAudit'
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Action: 'sts:AssumeRole'
Effect: Allow
Principal:
Service: !Sub codebuild.${AWS::URLSuffix}
Policies:
- PolicyName: S3
PolicyDocument:
Version: '2012-10-17'
Statement:
- Action:
- s3:PutObject
- s3:GetObject
- s3:GetObjectVersion
- s3:GetBucketAcl
- s3:GetBucketLocation
Effect: Allow
Resource: !Sub '${ArtifactBucket.Arn}/*'
- PolicyName: ProwlerAdditions
PolicyDocument:
Version: '2012-10-17'
Statement:
- Action:
- account:Get*
- appstream:Describe*
- codeartifact:List*
- codebuild:BatchGet*
- ds:Get*
- ds:Describe*
- ds:List*
- ec2:GetEbsEncryptionByDefault
- ecr:Describe*
- elasticfilesystem:DescribeBackupPolicy
- glue:GetConnections
- glue:GetSecurityConfiguration*
- glue:SearchTables
- lambda:GetFunction*
- macie2:GetMacieSession
- s3:GetAccountPublicAccessBlock
- s3:GetPublicAccessBlock
- shield:DescribeProtection
- shield:GetSubscriptionState
- securityhub:BatchImportFindings
- securityhub:GetFindings
- ssm:GetDocument
- support:Describe*
- tag:GetTagKeys
Effect: Allow
Resource: '*'
- PolicyName: ProwlerAdditionsApiGW
PolicyDocument:
Version: '2012-10-17'
Statement:
- Action:
- apigateway:GET
Effect: Allow
Resource: 'arn:aws:apigateway:*::/restapis/*'
- PolicyName: CodeBuild
PolicyDocument:
Version: '2012-10-17'
Statement:
- Action:
- codebuild:CreateReportGroup
- codebuild:CreateReport
- codebuild:UpdateReport
- codebuild:BatchPutTestCases
- codebuild:BatchPutCodeCoverages
Effect: Allow
Resource: !Sub 'arn:${AWS::Partition}:codebuild:${AWS::Region}:${AWS::AccountId}:report-group/*'
- PolicyName: SecurityHubBatchImportFindings
PolicyDocument:
Version: '2012-10-17'
Statement:
- Action: securityhub:BatchImportFindings
Effect: Allow
Resource: !Sub 'arn:${AWS::Partition}:securityhub:${AWS::Region}::product/prowler/prowler'
CodeBuildLogPolicy:
Type: AWS::IAM::Policy
Properties:
PolicyDocument:
Version: '2012-10-17'
Statement:
- Action:
- logs:CreateLogStream
- logs:PutLogEvents
Effect: Allow
Resource: !GetAtt ProwlerLogGroup.Arn
PolicyName: LogGroup
Roles:
- !Ref CodeBuildServiceRole
CodeBuildAssumePolicy:
Type: AWS::IAM::Policy
Properties:
PolicyDocument:
Version: '2012-10-17'
Statement:
- Action: 'sts:AssumeRole'
Effect: Allow
Resource: !GetAtt CodeBuildServiceRole.Arn
PolicyName: AssumeRole
Roles:
- !Ref CodeBuildServiceRole
ProwlerCodeBuild:
Type: AWS::CodeBuild::Project
Metadata:
cfn_nag:
rules_to_suppress:
- id: W32
reason: 'KMS encryption is not needed.'
Properties:
Artifacts:
Type: NO_ARTIFACTS
ConcurrentBuildLimit: 1
Source:
Type: NO_SOURCE
BuildSpec: |
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.9
    commands:
      - echo "Installing Prowler..."
      - pip3 install prowler
  build:
    commands:
      - echo "Running Prowler as prowler $PROWLER_OPTIONS"
      - prowler $PROWLER_OPTIONS
  post_build:
    commands:
      - echo "Uploading reports to S3..."
      - aws s3 cp --sse AES256 output/ s3://$BUCKET_REPORT/ --recursive
      - echo "Done!"
# Currently not supported in Version 3
# reports:
#   prowler:
#     files:
#       - '**/*'
#     base-directory: 'junit-reports'
#     file-format: JunitXml
Environment:
# AWS CodeBuild free tier includes 100 build minutes of BUILD_GENERAL1_SMALL per month.
# BUILD_GENERAL1_SMALL: Use up to 3 GB memory and 2 vCPUs for builds. $0.005/minute.
# BUILD_GENERAL1_MEDIUM: Use up to 7 GB memory and 4 vCPUs for builds. $0.01/minute.
# BUILD_GENERAL1_LARGE: Use up to 15 GB memory and 8 vCPUs for builds. $0.02/minute.
# BUILD_GENERAL1_2XLARGE: Use up to 144 GB memory and 72 vCPUs for builds. $0.20/minute.
ComputeType: "BUILD_GENERAL1_SMALL"
Image: "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
Type: "LINUX_CONTAINER"
EnvironmentVariables:
- Name: BUCKET_REPORT
Value: !Ref ArtifactBucket
Type: PLAINTEXT
- Name: PROWLER_OPTIONS
Value: !Ref ProwlerOptions
Type: PLAINTEXT
Description: Run Prowler assessment
ServiceRole: !GetAtt CodeBuildServiceRole.Arn
TimeoutInMinutes: 300
ProwlerLogGroup:
Type: 'AWS::Logs::LogGroup'
DeletionPolicy: Delete
UpdateReplacePolicy: Delete
Metadata:
cfn_nag:
rules_to_suppress:
- id: W84
reason: 'KMS encryption is not needed.'
Properties:
LogGroupName: !Sub '/aws/codebuild/${ProwlerCodeBuild}'
RetentionInDays: !Ref LogsRetentionInDays
EventBridgeServiceRole:
Type: AWS::IAM::Role
Properties:
Description: !Sub 'DO NOT DELETE - Used by EventBridge. Created by CloudFormation Stack ${AWS::StackId}'
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Action: 'sts:AssumeRole'
Effect: Allow
Principal:
Service: !Sub events.${AWS::URLSuffix}
Policies:
- PolicyName: CodeBuild
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action: 'codebuild:StartBuild'
Resource: !GetAtt ProwlerCodeBuild.Arn
ProwlerSchedule:
Type: 'AWS::Events::Rule'
Properties:
Description: A schedule to trigger Prowler in CodeBuild
ScheduleExpression: !Ref ProwlerScheduler
State: ENABLED
Targets:
- Arn: !GetAtt ProwlerCodeBuild.Arn
Id: ProwlerSchedule
RoleArn: !GetAtt EventBridgeServiceRole.Arn
Outputs:
ArtifactBucketName:
Description: Artifact Bucket Name
Value: !Ref ArtifactBucket

View File

@@ -1,45 +1,24 @@
# Build command
# docker build --platform=linux/amd64 --no-cache -t prowler:latest .
FROM public.ecr.aws/amazonlinux/amazonlinux:2022
ARG PROWLER_VERSION=latest
ARG PROWLERVER=2.9.0
ARG USERNAME=prowler
ARG USERID=34000
FROM toniblyx/prowler:${PROWLER_VERSION}
# Install Dependencies
RUN \
dnf update -y && \
dnf install -y bash file findutils git jq python3 python3-pip \
python3-setuptools python3-wheel shadow-utils tar unzip which && \
dnf remove -y awscli && \
dnf clean all && \
useradd -l -s /bin/sh -U -u ${USERID} ${USERNAME} && \
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install && \
pip3 install --no-cache-dir --upgrade pip && \
pip3 install --no-cache-dir "git+https://github.com/ibm/detect-secrets.git@master#egg=detect-secrets" && \
rm -rf aws awscliv2.zip /var/cache/dnf
USER 0
# hadolint ignore=DL3018
RUN apk --no-cache add bash aws-cli jq
# Place script and env vars
COPY .awsvariables run-prowler-securityhub.sh /
ARG MULTI_ACCOUNT_SECURITY_HUB_PATH=/home/prowler/multi-account-securityhub
# Installs prowler and change permissions
RUN \
curl -L "https://github.com/prowler-cloud/prowler/archive/refs/tags/${PROWLERVER}.tar.gz" -o "prowler.tar.gz" && \
tar xvzf prowler.tar.gz && \
rm -f prowler.tar.gz && \
mv prowler-${PROWLERVER} prowler && \
chown ${USERNAME}:${USERNAME} /run-prowler-securityhub.sh && \
chmod 500 /run-prowler-securityhub.sh && \
chown ${USERNAME}:${USERNAME} /.awsvariables && \
chmod 400 /.awsvariables && \
chown ${USERNAME}:${USERNAME} -R /prowler && \
chmod +x /prowler/prowler
USER prowler
# Drop to user
USER ${USERNAME}
# Move script and environment variables
RUN mkdir "${MULTI_ACCOUNT_SECURITY_HUB_PATH}"
COPY --chown=prowler:prowler .awsvariables run-prowler-securityhub.sh "${MULTI_ACCOUNT_SECURITY_HUB_PATH}"/
RUN chmod 500 "${MULTI_ACCOUNT_SECURITY_HUB_PATH}"/run-prowler-securityhub.sh && \
chmod 400 "${MULTI_ACCOUNT_SECURITY_HUB_PATH}"/.awsvariables
# Run script
ENTRYPOINT ["/run-prowler-securityhub.sh"]
WORKDIR ${MULTI_ACCOUNT_SECURITY_HUB_PATH}
ENTRYPOINT ["./run-prowler-securityhub.sh"]

View File

@@ -1,20 +1,17 @@
#!/bin/bash
# Run Prowler against All AWS Accounts in an AWS Organization
# Change Directory (rest of the script, assumes you're in the root directory)
cd / || exit
# Show Prowler Version
./prowler/prowler -V
prowler -v
# Source .awsvariables
# shellcheck disable=SC1091
source .awsvariables
# Get Values from Environment Variables
echo "ROLE: $ROLE"
echo "PARALLEL_ACCOUNTS: $PARALLEL_ACCOUNTS"
echo "REGION: $REGION"
echo "ROLE: ${ROLE}"
echo "PARALLEL_ACCOUNTS: ${PARALLEL_ACCOUNTS}"
echo "REGION: ${REGION}"
# Function to unset AWS Profile Variables
unset_aws() {
@@ -24,33 +21,33 @@ unset_aws
# Find THIS Account AWS Number
CALLER_ARN=$(aws sts get-caller-identity --output text --query "Arn")
PARTITION=$(echo "$CALLER_ARN" | cut -d: -f2)
THISACCOUNT=$(echo "$CALLER_ARN" | cut -d: -f5)
echo "THISACCOUNT: $THISACCOUNT"
echo "PARTITION: $PARTITION"
PARTITION=$(echo "${CALLER_ARN}" | cut -d: -f2)
THISACCOUNT=$(echo "${CALLER_ARN}" | cut -d: -f5)
echo "THISACCOUNT: ${THISACCOUNT}"
echo "PARTITION: ${PARTITION}"
# Function to Assume Role to THIS Account & Create Session
this_account_session() {
unset_aws
role_credentials=$(aws sts assume-role --role-arn arn:"$PARTITION":iam::"$THISACCOUNT":role/"$ROLE" --role-session-name ProwlerRun --output json)
AWS_ACCESS_KEY_ID=$(echo "$role_credentials" | jq -r .Credentials.AccessKeyId)
AWS_SECRET_ACCESS_KEY=$(echo "$role_credentials" | jq -r .Credentials.SecretAccessKey)
AWS_SESSION_TOKEN=$(echo "$role_credentials" | jq -r .Credentials.SessionToken)
role_credentials=$(aws sts assume-role --role-arn arn:"${PARTITION}":iam::"${THISACCOUNT}":role/"${ROLE}" --role-session-name ProwlerRun --output json)
AWS_ACCESS_KEY_ID=$(echo "${role_credentials}" | jq -r .Credentials.AccessKeyId)
AWS_SECRET_ACCESS_KEY=$(echo "${role_credentials}" | jq -r .Credentials.SecretAccessKey)
AWS_SESSION_TOKEN=$(echo "${role_credentials}" | jq -r .Credentials.SessionToken)
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
}
# Find AWS Master Account
this_account_session
AWSMASTER=$(aws organizations describe-organization --query Organization.MasterAccountId --output text)
echo "AWSMASTER: $AWSMASTER"
echo "AWSMASTER: ${AWSMASTER}"
# Function to Assume Role to Master Account & Create Session
master_account_session() {
unset_aws
role_credentials=$(aws sts assume-role --role-arn arn:"$PARTITION":iam::"$AWSMASTER":role/"$ROLE" --role-session-name ProwlerRun --output json)
AWS_ACCESS_KEY_ID=$(echo "$role_credentials" | jq -r .Credentials.AccessKeyId)
AWS_SECRET_ACCESS_KEY=$(echo "$role_credentials" | jq -r .Credentials.SecretAccessKey)
AWS_SESSION_TOKEN=$(echo "$role_credentials" | jq -r .Credentials.SessionToken)
role_credentials=$(aws sts assume-role --role-arn arn:"${PARTITION}":iam::"${AWSMASTER}":role/"${ROLE}" --role-session-name ProwlerRun --output json)
AWS_ACCESS_KEY_ID=$(echo "${role_credentials}" | jq -r .Credentials.AccessKeyId)
AWS_SECRET_ACCESS_KEY=$(echo "${role_credentials}" | jq -r .Credentials.SecretAccessKey)
AWS_SESSION_TOKEN=$(echo "${role_credentials}" | jq -r .Credentials.SessionToken)
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
}
@@ -60,20 +57,20 @@ ACCOUNTS_IN_ORGS=$(aws organizations list-accounts --query Accounts[*].Id --outp
# Run Prowler against Accounts in AWS Organization
echo "AWS Accounts in Organization"
echo "$ACCOUNTS_IN_ORGS"
for accountId in $ACCOUNTS_IN_ORGS; do
echo "${ACCOUNTS_IN_ORGS}"
for accountId in ${ACCOUNTS_IN_ORGS}; do
# shellcheck disable=SC2015
test "$(jobs | wc -l)" -ge $PARALLEL_ACCOUNTS && wait -n || true
test "$(jobs | wc -l)" -ge "${PARALLEL_ACCOUNTS}" && wait -n || true
{
START_TIME=$SECONDS
START_TIME=${SECONDS}
# Unset AWS Profile Variables
unset_aws
# Run Prowler
echo -e "Assessing AWS Account: $accountId, using Role: $ROLE on $(date)"
echo -e "Assessing AWS Account: ${accountId}, using Role: ${ROLE} on $(date)"
# Pipe stdout to /dev/null to reduce unnecessary Cloudwatch logs
./prowler/prowler -R "$ROLE" -A "$accountId" -M json-asff -q -S -f "$REGION" > /dev/null
prowler aws -R arn:"${PARTITION}":iam::"${accountId}":role/"${ROLE}" -q -S -f "${REGION}" > /dev/null
TOTAL_SEC=$((SECONDS - START_TIME))
printf "Completed AWS Account: $accountId in %02dh:%02dm:%02ds" $((TOTAL_SEC / 3600)) $((TOTAL_SEC % 3600 / 60)) $((TOTAL_SEC % 60))
printf "Completed AWS Account: ${accountId} in %02dh:%02dm:%02ds" $((TOTAL_SEC / 3600)) $((TOTAL_SEC % 3600 / 60)) $((TOTAL_SEC % 60))
echo ""
} &
done

View File

@@ -1,6 +1,6 @@
# Organizational Prowler with Serverless
Langage: [Korean](README_kr.md)
Language: [Korean](README_kr.md)
This project is created to apply prowler in a multi-account environment within AWS Organizations.
CloudWatch triggers CodeBuild every fixed time.
@@ -18,12 +18,12 @@ For more information on how to use prowler, see [here](https://github.com/prowle
2. **Master Account**
1. Deploy [ProwlerRole.yaml](templates/ProwlerRole.yaml) stack to CloudFormation in a bid to create resources to master account itself.
(The template will be also deployed for other member accounts as a StackSet)
- ProwlerCodeBuildAccount : Audit Acccount ID where CodeBuild resides. (preferably Audit/Security account)
- ProwlerCodeBuildAccount : Audit Account ID where CodeBuild resides. (preferably Audit/Security account)
- ProwlerCodeBuildRole : Role name to use in CodeBuild service
- ProwlerCrossAccountRole : Role name to assume for Cross account
- ProwlerS3 : The S3 bucket name where reports will be put
1. Create **StackSet** with [ProwlerRole.yaml](templates/ProwlerRole.yaml) to deploy Role into member accounts in AWS Organizations.
- ProwlerCodeBuildAccount : Audit Acccount ID where CodeBuild resides. (preferably Audit/Security account)
- ProwlerCodeBuildAccount : Audit Account ID where CodeBuild resides. (preferably Audit/Security account)
- ProwlerCodeBuildRole : Role name to use in CodeBuild service
- ProwlerCrossAccountRole : Role name to assume for Cross account
- ProwlerS3 : The S3 bucket name where reports will be put
@@ -45,4 +45,4 @@ For more information on how to use prowler, see [here](https://github.com/prowle
- ProwlerReportS3Account : The account where the report S3 bucket resides.
1. If you'd like to change the scheduled time,
1. You can change the cron expression of ScheduleExpression within [ProwlerCodeBuildStack.yaml](templates/ProwlerCodeBuildStack.yaml).
2. Alternatively, you can make changes directrly from Events > Rules > ProwlerExecuteRule > Actions > Edit in CloudWatch console.
2. Alternatively, you can make changes directly from Events > Rules > ProwlerExecuteRule > Actions > Edit in CloudWatch console.

View File

@@ -1,6 +1,6 @@
# Organizational Prowler with Serverless
Langage: [English](README.md)
Language: [English](README.md)
이 문서는 AWS Organization 내의 multi account 환경에서 prowler 를 적용하기 위해 작성된 문서입니다.
일정 시간마다 CloudWatch는 CodeBuild 를 트리거합니다.
@@ -22,7 +22,7 @@ prowler 의 자세한 사용방법은 [이 곳](https://github.com/prowler-cloud
[ProwlerRole.yaml](templates/ProwlerRole.yaml)
- ProwlerCodeBuildAccount : CodeBuild 가 있는 Audit Acccount ID
- ProwlerCodeBuildAccount : CodeBuild 가 있는 Audit Account ID
- ProwlerCodeBuildRole : CodeBuild의 생성될 Role 이름
- ProwlerCrossAccountRole : Cross account 용 Assume할 Role 이름
- ProwlerS3 : report 가 저장될 S3 bucket 명
@@ -30,7 +30,7 @@ prowler 의 자세한 사용방법은 [이 곳](https://github.com/prowler-cloud
[ProwlerRole.yaml](templates/ProwlerRole.yaml)
- ProwlerCodeBuildAccount : CodeBuild 가 있는 Audit Acccount
- ProwlerCodeBuildAccount : CodeBuild 가 있는 Audit Account
- ProwlerCodeBuildRole : CodeBuild에서 사용할 Role 이름
- ProwlerCrossAccountRole : Cross account 용 Assume할 Role 이름
- ProwlerS3 : report 가 저장될 S3 bucket 명

View File

@@ -2,7 +2,7 @@
## Introduction
The following demonstartes how to quickly install the resources necessary to perform a security baseline using Prowler. The speed is based on the prebuilt terraform module that can configure all the resources necessuary to run Prowler with the findings being sent to AWS Security Hub.
The following demonstrates how to quickly install the resources necessary to perform a security baseline using Prowler. The speed is based on the prebuilt terraform module that can configure all the resources necessary to run Prowler with the findings being sent to AWS Security Hub.
## Install
@@ -24,7 +24,7 @@ Installing Prowler with Terraform is simple and can be completed in under 1 minu
![Prowler Install](https://prowler-docs.s3.amazonaws.com/Prowler-Terraform-Install.gif)
- It is likely an error will return related to the SecurityHub subscription. This appears to be Terraform related and you can validate the configuration by navigating to the SecurityHub console. Click Integreations and search for Prowler. Take note of the green check where it says *Accepting findings*
- It is likely an error will return related to the SecurityHub subscription. This appears to be Terraform related and you can validate the configuration by navigating to the SecurityHub console. Click Integrations and search for Prowler. Take note of the green check where it says *Accepting findings*
![Prowler Subscription](https://prowler-docs.s3.amazonaws.com/Validate-Prowler-Subscription.gif)

View File

@@ -92,7 +92,7 @@ To make sure rules are working fine, run `/var/ossec/bin/ossec-logtest` and copy
```
You must see 3 phases going on.
To check if there is any error you can enable the debug mode of `modulesd` setting the `wazuh_modules.debug=0` variable to 2 in `/var/ossec/etc/internal_options.conf` file. Restart wazun-manager and errors should appear in the `/var/ossec/logs/ossec.log` file.
To check if there is any error you can enable the debug mode of `modulesd` setting the `wazuh_modules.debug=0` variable to 2 in `/var/ossec/etc/internal_options.conf` file. Restart wazuh-manager and errors should appear in the `/var/ossec/logs/ossec.log` file.
## Thanks

View File

@@ -5,20 +5,20 @@ hide:
# About
## Author
Prowler was created by **Toni de la Fuente** in 2016.
Prowler was created by **Toni de la Fuente** in 2016.
| ![](/img/toni.png)<br>[![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/toniblyx.svg?style=social&label=Follow%20%40toniblyx)](https://twitter.com/toniblyx) [![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/prowlercloud.svg?style=social&label=Follow%20%40prowlercloud)](https://twitter.com/prowlercloud)|
| ![](img/toni.png)<br>[![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/toniblyx.svg?style=social&label=Follow%20%40toniblyx)](https://twitter.com/toniblyx) [![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/prowlercloud.svg?style=social&label=Follow%20%40prowlercloud)](https://twitter.com/prowlercloud)|
|:--:|
| <b>Toni de la Fuente </b>|
## Maintainers
Prowler is maintained by the Engineers of the **Prowler Team** :
| ![](/img/nacho.png)[![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/NachoRivCor.svg?style=social&label=Follow%20%40NachoRivCor)](https://twitter.com/NachoRivCor) | ![](/img/sergio.png)[![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/sergargar1.svg?style=social&label=Follow%20%40sergargar1)](https://twitter.com/sergargar1) |![](/img/pepe.png)[![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/jfagoagas.svg?style=social&label=Follow%20%40jfagoagas)](https://twitter.com/jfagoagas) |
| ![](img/nacho.png)[![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/NachoRivCor.svg?style=social&label=Follow%20%40NachoRivCor)](https://twitter.com/NachoRivCor) | ![](img/sergio.png)[![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/sergargar1.svg?style=social&label=Follow%20%40sergargar1)](https://twitter.com/sergargar1) |![](img/pepe.png)[![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/jfagoagas.svg?style=social&label=Follow%20%40jfagoagas)](https://twitter.com/jfagoagas) |
|:--:|:--:|:--:
| <b>Nacho Rivera</b>| <b>Sergio Garcia</b>| <b>Pepe Fagoaga</b>|
## License
Prowler is licensed as **Apache License 2.0** as specified in each file. You may obtain a copy of the License at
<http://www.apache.org/licenses/LICENSE-2.0>
<http://www.apache.org/licenses/LICENSE-2.0>

View File

@@ -0,0 +1,9 @@
# Audit Info
In each Prowler provider we have a Python object called `audit_info` which is in charge of keeping the credentials, the configuration and the state of each audit, and it's passed to each service during the `__init__`.
- AWS: https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/lib/audit_info/models.py#L34-L54
- GCP: https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/lib/audit_info/models.py#L7-L30
- Azure: https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/azure/lib/audit_info/models.py#L17-L31
This `audit_info` object is shared during the Prowler execution, and for that reason it is important to mock it in each test to isolate them. See the [testing guide](./unit-testing.md) for more information.
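To illustrate the idea, a unit test can build its own mocked `audit_info` instead of relying on real credentials; everything below (helper name, attribute values) is a simplified sketch, not Prowler's actual test helpers:
```python
# Illustrative sketch only: a mocked audit_info keeps unit tests isolated from
# real credentials and from state shared across the Prowler execution.
from unittest import mock


def build_mocked_audit_info():
    # Stands in for the provider's audit_info model (see the models.py links above);
    # the attribute values here are placeholders, not real account data.
    audit_info = mock.MagicMock()
    audit_info.audited_account = "123456789012"
    audit_info.audited_partition = "aws"
    audit_info.audited_regions = ["eu-west-1"]
    audit_info.audit_resources = None
    return audit_info


def test_example():
    mocked_audit_info = build_mocked_audit_info()
    # A real test would pass this object to the service under test (its
    # __init__ receives audit_info) and mock the cloud API calls themselves,
    # e.g. with moto for AWS, as described in the testing guide.
    assert mocked_audit_info.audited_account == "123456789012"
```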

View File

@@ -0,0 +1,309 @@
# Create a new Check for a Provider
Here you can find how to create new checks for Prowler.
**To create a check, a Prowler provider service must already exist, so if the service is not present, or the attribute you want to audit is not retrieved by the service, please refer to the [Service](./services.md) documentation.**
## Introduction
To create a new check for a supported Prowler provider, you will need to create a folder with the check name inside the specific service for the selected provider.
We are going to use the `ec2_ami_public` check from the `AWS` provider as an example, so the folder name will be `prowler/providers/aws/services/ec2/ec2_ami_public` (following the format `prowler/providers/<provider>/services/<service>/<check_name>`), with the name of the check following the pattern: `service_subservice/resource_action`.
Inside that folder, we need to create three files:
- An empty `__init__.py`: to make Python treat this check folder as a package.
- A `check_name.py` with the above format, containing the check's logic. Refer to the [check](./checks.md#check) section.
- A `check_name.metadata.json` containing the check's metadata. Refer to the [check metadata](./checks.md#check-metadata) section.
## Check
Prowler's check structure is very simple: once you follow it, there is nothing more to do to include a check in a provider's service, because checks are loaded dynamically based on their paths.
The following is the code for the `ec2_ami_public` check:
```python title="Check Class"
# At the top of the file we need to import the following:
# - The Check class, which is in charge of the following:
#   - Retrieve the check metadata and expose the `metadata()` function
#     to return a JSON representation of the metadata,
#     read more at the Check Metadata Model down below.
#   - Enforce that each check requires to have the `execute()` function
from prowler.lib.check.models import Check, Check_Report_AWS

# Then you have to import the provider service client,
# read more at the Service documentation.
from prowler.providers.aws.services.ec2.ec2_client import ec2_client


# For each check we need to create a Python class named the same as the
# file which inherits from the Check class.
class ec2_ami_public(Check):
    """ec2_ami_public verifies if an EC2 AMI is publicly shared"""

    # Then, within the check's class we need to create the "execute(self)"
    # function, which is enforced by the "Check" class to implement
    # the Check's interface and let Prowler run this check.
    def execute(self):
        # Inside the execute(self) function we need to create
        # the list of findings initialised to an empty list []
        findings = []
        # Then, using the service client we need to iterate over the resources we
        # want to check, in this case EC2 AMIs stored in the
        # "ec2_client.images" object.
        for image in ec2_client.images:
            # While iterating over the images, we have to initialise
            # the Check_Report_AWS class passing the check's metadata
            # using the "metadata" function explained above.
            report = Check_Report_AWS(self.metadata())
            # For each Prowler check we MUST fill the following
            # Check_Report_AWS fields:
            # - region
            # - resource_id
            # - resource_arn
            # - resource_tags
            # - status
            # - status_extended
            report.region = image.region
            report.resource_id = image.id
            report.resource_arn = image.arn
            # The resource_tags should be filled if the resource has the ability
            # of having tags, please check the service first.
            report.resource_tags = image.tags
            # Then we need to create the business logic for the check,
            # which should always be simple because the Prowler service
            # must do the heavy lifting and the check should be in charge
            # of parsing the data provided
            report.status = "PASS"
            report.status_extended = f"EC2 AMI {image.id} is not public."
            # In this example each "image" object has a boolean attribute
            # called "public" to set if the AMI is publicly shared
            if image.public:
                report.status = "FAIL"
                report.status_extended = (
                    f"EC2 AMI {image.id} is currently public."
                )
            # Then, at the same level as the "report"
            # object, we need to append it to the findings list.
            findings.append(report)
        # The last thing to do is to return the findings list to Prowler
        return findings
```
### Check Status
All the checks MUST fill the `report.status` and `report.status_extended` with the following criteria:
- Status -- `report.status`
- `PASS` --> If the check is passing against the configured value.
- `FAIL` --> If the check is not passing against the configured value.
- `INFO` --> This value cannot be used unless a manual operation is required in order to determine whether the `report.status` is `PASS` or `FAIL`.
- Status Extended -- `report.status_extended`
- MUST end in a dot `.`
- MUST include the service audited with the resource and a brief explanation of the result generated, e.g.: `EC2 AMI ami-0123456789 is not public.`
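To make these criteria concrete, here is a tiny self-contained sketch; the `Bucket` and `Report` classes are stand-ins for illustration only, not Prowler's real models:
```python
# Stand-in classes just to illustrate the status criteria above.
from dataclasses import dataclass


@dataclass
class Bucket:
    name: str
    public: bool


@dataclass
class Report:
    status: str = ""
    status_extended: str = ""


def evaluate(bucket: Bucket) -> Report:
    report = Report()
    if bucket.public:
        report.status = "FAIL"
        # Ends in a dot and names the audited service and resource.
        report.status_extended = f"S3 Bucket {bucket.name} is publicly accessible."
    else:
        report.status = "PASS"
        report.status_extended = f"S3 Bucket {bucket.name} is not publicly accessible."
    return report


print(evaluate(Bucket(name="my-bucket", public=True)).status)  # FAIL
```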
### Resource ID, Name and ARN
All the checks must fill the `report.resource_id` and `report.resource_arn` with the following criteria:
- AWS
- Resource ID -- `report.resource_id`
- AWS Account --> Account Number `123456789012`
- AWS Resource --> Resource ID / Name
- Root resource --> `<root_account>`
- Resource ARN -- `report.resource_arn`
- AWS Account --> Root ARN `arn:aws:iam::123456789012:root`
- AWS Resource --> Resource ARN
- Root resource --> Root ARN `arn:aws:iam::123456789012:root`
- GCP
- Resource ID -- `report.resource_id`
- GCP Resource --> Resource ID
- Resource Name -- `report.resource_name`
- GCP Resource --> Resource Name
- Azure
- Resource ID -- `report.resource_id`
- Azure Resource --> Resource ID
- Resource Name -- `report.resource_name`
- Azure Resource --> Resource Name
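As a concrete illustration of the AWS criteria above (all identifiers are placeholder values, and in a real check these fields are set on the `Check_Report_AWS` object inside `execute()`):
```python
# Placeholder values only, showing how the AWS criteria above map to the
# report fields; real checks set these on a Check_Report_AWS object.
AWS_ACCOUNT_ID = "123456789012"
ROOT_ARN = f"arn:aws:iam::{AWS_ACCOUNT_ID}:root"

# Account-level check: resource_id is the account number, resource_arn the root ARN.
account_level = {"resource_id": AWS_ACCOUNT_ID, "resource_arn": ROOT_ARN}

# Resource-level check: resource_id is the resource ID/name, resource_arn its ARN.
instance_level = {
    "resource_id": "i-0123456789abcdef0",
    "resource_arn": f"arn:aws:ec2:eu-west-1:{AWS_ACCOUNT_ID}:instance/i-0123456789abcdef0",
}

print(account_level, instance_level)
```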
### Python Model
The following is the Python model for the check's class.
As of August 5th, 2023, the `Check` class can be found [here](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py#L59-L80).
```python
class Check(ABC, Check_Metadata_Model):
    """Prowler Check"""

    def __init__(self, **data):
        """Check's init function. Calls the CheckMetadataModel init."""
        # Parse the Check's metadata file
        metadata_file = (
            os.path.abspath(sys.modules[self.__module__].__file__)[:-3]
            + ".metadata.json"
        )
        # Store it to validate them with Pydantic
        data = Check_Metadata_Model.parse_file(metadata_file).dict()
        # Calls parents init function
        super().__init__(**data)

    def metadata(self) -> dict:
        """Return the JSON representation of the check's metadata"""
        return self.json()

    @abstractmethod
    def execute(self):
        """Execute the check's logic"""
```
### Using the audit config
Prowler has a [configuration file](../tutorials/configuration_file.md) which is used to pass certain configuration values to the checks, like the following:
```python title="ec2_securitygroup_with_many_ingress_egress_rules.py"
class ec2_securitygroup_with_many_ingress_egress_rules(Check):
    def execute(self):
        findings = []
        # max_security_group_rules, default: 50
        max_security_group_rules = ec2_client.audit_config.get(
            "max_security_group_rules", 50
        )
        for security_group in ec2_client.security_groups:
```
```yaml title="config.yaml"
# AWS Configuration
aws:
  # AWS EC2 Configuration
  # aws.ec2_securitygroup_with_many_ingress_egress_rules
  # The default value is 50 rules
  max_security_group_rules: 50
```
As you can see in the above code, within the service client, in this case the `ec2_client`, there is an object called `audit_config` which is a Python dictionary containing the values read from the configuration file.
In order to use it, you first have to check if the value is present in the configuration file. If the value is not present, you can create it in the `config.yaml` file and then read it from the check.
> It is mandatory to always use the `dictionary.get(key, default)` syntax to set a default value in case the configuration value is not present.
## Check Metadata
Each Prowler check has metadata associated with it, which is stored at the same level as the check's folder in a file called `check_name.metadata.json` containing the check's metadata.
> We are going to include comments in this example metadata JSON, but they cannot be included in real metadata files because the JSON format does not allow comments.
```json
{
  # Provider holds the Prowler provider which the check belongs to
  "Provider": "aws",
  # CheckID holds the check name
  "CheckID": "ec2_ami_public",
  # CheckTitle holds the title of the check
  "CheckTitle": "Ensure there are no EC2 AMIs set as Public.",
  # CheckType holds Software and Configuration Checks, check more here
  # https://docs.aws.amazon.com/securityhub/latest/userguide/asff-required-attributes.html#Types
  "CheckType": [
    "Infrastructure Security"
  ],
  # ServiceName holds the provider service name
  "ServiceName": "ec2",
  # SubServiceName holds the service's subservice or resource used by the check
  "SubServiceName": "ami",
  # ResourceIdTemplate holds the unique ID for the resource used by the check
  "ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
  # Severity holds the check's severity, always in lowercase (critical, high, medium, low or informational)
  "Severity": "critical",
  # ResourceType only for AWS, holds the type from here
  # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html
  "ResourceType": "Other",
  # Description holds the title of the check, for now it is the same as CheckTitle
  "Description": "Ensure there are no EC2 AMIs set as Public.",
  # Risk holds the check risk if the result is FAIL
  "Risk": "When your AMIs are publicly accessible, they are available in the Community AMIs where everyone with an AWS account can use them to launch EC2 instances. Your AMIs could contain snapshots of your applications (including their data), therefore exposing your snapshots in this manner is not advised.",
  # RelatedUrl holds a URL with more information about the check purpose
  "RelatedUrl": "",
  # Remediation holds the information to help the practitioner fix the issue in case the check raises a FAIL
  "Remediation": {
    # Code holds different methods to remediate the FAIL finding
    "Code": {
      # CLI holds the command in the provider native CLI to remediate it
      "CLI": "https://docs.bridgecrew.io/docs/public_8#cli-command",
      # NativeIaC holds the native IaC code to remediate it, use "https://docs.bridgecrew.io/docs"
      "NativeIaC": "",
      # Other holds the other commands, scripts or code to remediate it, use "https://www.trendmicro.com/cloudoneconformity"
      "Other": "https://docs.bridgecrew.io/docs/public_8#aws-console",
      # Terraform holds the Terraform code to remediate it, use "https://docs.bridgecrew.io/docs"
      "Terraform": ""
    },
    # Recommendation holds the recommendation for this check with a description and a related URL
    "Recommendation": {
      "Text": "We recommend your EC2 AMIs are not publicly accessible, or generally available in the Community AMIs.",
      "Url": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/cancel-sharing-an-AMI.html"
    }
  },
  # Categories holds the category or categories where the check can be included, if applied
  "Categories": [
    "internet-exposed"
  ],
  # DependsOn is not actively used for the moment but it will hold other
  # checks which this check is dependent on
  "DependsOn": [],
  # RelatedTo is not actively used for the moment but it will hold other
  # checks which this check is related to
  "RelatedTo": [],
  # Notes holds additional information not covered in this file
  "Notes": ""
}
```
### Remediation Code
For the Remediation Code we use the following knowledge base to fill it:
- Official documentation for the provider
- https://docs.bridgecrew.io
- https://www.trendmicro.com/cloudoneconformity
- https://github.com/cloudmatos/matos/tree/master/remediations
### RelatedURL and Recommendation
The RelatedURL field must be filled with a URL from the provider's official documentation, like https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-intro.html
Also, if not present you can use the Risk and Recommendation texts from the TrendMicro [CloudConformity](https://www.trendmicro.com/cloudoneconformity) guide.
### Python Model
The following is the Python model for the check's metadata. We use Pydantic's [BaseModel](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel) as the parent class.
As of August 5th, 2023, the `Check_Metadata_Model` can be found [here](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py#L34-L56).
```python
class Check_Metadata_Model(BaseModel):
    """Check Metadata Model"""

    Provider: str
    CheckID: str
    CheckTitle: str
    CheckType: list[str]
    ServiceName: str
    SubServiceName: str
    ResourceIdTemplate: str
    Severity: str
    ResourceType: str
    Description: str
    Risk: str
    RelatedUrl: str
    Remediation: Remediation
    Categories: list[str]
    DependsOn: list[str]
    RelatedTo: list[str]
    Notes: str
    # We set the compliance to None to
    # store the compliance later if supplied
    Compliance: list = None
```

View File

@@ -0,0 +1,8 @@
## Contribute with documentation
We use `mkdocs` to build this Prowler documentation site so you can easily contribute back with new docs or improving them.
1. Install `mkdocs` with your favorite package manager.
2. Inside the `prowler` repository folder run `mkdocs serve` and point your browser to `http://localhost:8000` and you will see live changes to your local copy of this documentation site.
3. Make all needed changes to docs or add new documents. To do so just edit existing md files inside `prowler/docs` and if you are adding a new section or file please make sure you add it to `mkdocs.yaml` file in the root folder of the Prowler repo.
4. Once you are done with changes, please send a pull request to us for review and merge. Thank you in advance!

View File

@@ -0,0 +1,3 @@
# Integration Tests
Coming soon ...

View File

@@ -0,0 +1,3 @@
# Create a new integration
Coming soon ...

View File

@@ -0,0 +1,47 @@
# Developer Guide
You can extend Prowler in many different ways. In most cases you will want to create your own checks and compliance security frameworks; here is where you can learn how to get started. We also include how to create custom outputs, integrations and more.
## Get the code and install all dependencies
First of all, you need Python 3.9 or higher and pip installed to be able to install all required dependencies. Once that is satisfied, go ahead and clone the repo:
```
git clone https://github.com/prowler-cloud/prowler
cd prowler
```
For isolation and to avoid conflicts with other environments, we recommend using `poetry`:
```
pip install poetry
```
Then install all dependencies including the ones for developers:
```
poetry install
poetry shell
```
## Contributing with your code or fixes to Prowler
This repo has git pre-commit hooks managed via the [pre-commit](https://pre-commit.com/) tool. [Install](https://pre-commit.com/#install) it however you like, then in the root of this repo run:
```shell
pre-commit install
```
You should get an output like the following:
```shell
pre-commit installed at .git/hooks/pre-commit
```
Before we merge any of your pull requests we run checks on the code. We use the following tools and automation to make sure the code is secure and dependencies are up to date (these should have been already installed if you ran `poetry install`):
- [`bandit`](https://pypi.org/project/bandit/) for code security review.
- [`safety`](https://pypi.org/project/safety/) and [`dependabot`](https://github.com/features/security) for dependencies.
- [`hadolint`](https://github.com/hadolint/hadolint) and [`dockle`](https://github.com/goodwithtech/dockle) for our containers security.
- [`Snyk`](https://docs.snyk.io/integrations/snyk-container-integrations/container-security-with-docker-hub-integration) in Docker Hub.
- [`clair`](https://github.com/quay/clair) in Amazon ECR.
- [`vulture`](https://pypi.org/project/vulture/), [`flake8`](https://pypi.org/project/flake8/), [`black`](https://pypi.org/project/black/) and [`pylint`](https://pypi.org/project/pylint/) for formatting and best practices.
You can see all dependencies in file `pyproject.toml`.
## Want some swag as appreciation for your contribution?
If you are like us and you love swag, we are happy to thank you for your contribution with some laptop stickers or whatever other swag we may have at that time. Please, tell us more details and your pull request link in our [Slack workspace here](https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog). You can also reach out to Toni de la Fuente on Twitter [here](https://twitter.com/ToniBlyx), his DMs are open.

View File

@@ -0,0 +1,3 @@
# Create a custom output format
Coming soon ...

View File

@@ -0,0 +1,41 @@
# Create a new security compliance framework
## Introduction
If you want to create or contribute your own security frameworks, or add public ones to Prowler, you need to make sure the checks are available; if not, you have to create your own. Then create a compliance file per provider, like in `prowler/compliance/<provider>/`, name it `<framework>_<version>_<provider>.json`, and follow the format below to create yours.
## Compliance Framework
Each version of a framework will have the following high-level structure, so that each framework can be generally identified. One requirement can also be called one control, but one requirement can be linked to multiple Prowler checks:
- `Framework`: string. Distinguish name of the framework, like CIS
- `Provider`: string. Provider where the framework applies, such as AWS, Azure, OCI,...
- `Version`: string. Version of the framework itself, like 1.4 for CIS.
- `Requirements`: array of objects. Include all requirements or controls with the mapping to Prowler.
- `Requirements_Id`: string. Unique identifier per each requirement in the specific framework
- `Requirements_Description`: string. Description as in the framework.
- `Requirements_Attributes`: array of objects. Includes all needed attributes per each requirement, like levels, sections, etc. Whatever helps to create a dedicated report with the result of the findings. Attributes would be taken as closely as possible from the framework's own terminology directly.
- `Requirements_Checks`: array. Prowler checks that are needed to prove this requirement. It can be one or multiple checks. If no automation is possible, this can be empty.
```
{
  "Framework": "<framework>-<provider>",
  "Version": "<version>",
  "Requirements": [
    {
      "Id": "<unique-id>",
      "Description": "Requirement full description",
      "Checks": [
        "Here is the prowler check or checks that are going to be executed"
      ],
      "Attributes": [
        {
          <Add here your custom attributes.>
        }
      ]
    },
    ...
  ]
}
```
Finally, to have a proper output file for your reports, your framework data model has to be created in `prowler/lib/outputs/models.py` and also the CLI table output in `prowler/lib/outputs/compliance.py`.
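As a rough sketch of what such a data model could look like (the class and field names below are illustrative assumptions, not the ones Prowler actually uses, so check `prowler/lib/outputs/models.py` for the real classes):
```python
# Illustrative sketch of a per-requirement compliance output row; the class and
# field names are assumptions for this example, not Prowler's actual models.
from typing import Optional

from pydantic import BaseModel


class MyFrameworkComplianceRow(BaseModel):
    Provider: str
    AccountId: str
    Region: str
    Requirements_Id: str
    Requirements_Description: str
    Requirements_Attributes_Section: Optional[str]
    Status: str  # PASS / FAIL / INFO for the mapped check
    CheckId: str


row = MyFrameworkComplianceRow(
    Provider="aws",
    AccountId="123456789012",
    Region="eu-west-1",
    Requirements_Id="1.1",
    Requirements_Description="Example requirement",
    Requirements_Attributes_Section="Identity and Access Management",
    Status="PASS",
    CheckId="example_check_id",
)
print(row.json())
```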

View File

@@ -0,0 +1,235 @@
# Create a new Provider Service
Here you can find how to create a new service, or to complement an existing one, for a Prowler Provider.
## Introduction
To create a new service, you will need to create a folder inside the specific provider, i.e. `prowler/providers/<provider>/services/<service>/`.
Inside that folder, you MUST create the following files:
- An empty `__init__.py`: to make Python treat this service folder as a package.
- A `<service>_service.py`, containing all the service's logic and API calls.
- A `<service>_client.py`, containing the initialization of the service's class we have just created so the service's checks can use it.
## Service
Prowler's service structure is the following, and the way to initialise it is just by importing the service client in a check.
## Service Base Class
All the Prowler provider services inherit from a base class depending on the provider used.
- [AWS Service Base Class](https://github.com/prowler-cloud/prowler/blob/22f8855ad7dad2e976dabff78611b643e234beaf/prowler/providers/aws/lib/service/service.py)
- [GCP Service Base Class](https://github.com/prowler-cloud/prowler/blob/22f8855ad7dad2e976dabff78611b643e234beaf/prowler/providers/gcp/lib/service/service.py)
- [Azure Service Base Class](https://github.com/prowler-cloud/prowler/blob/22f8855ad7dad2e976dabff78611b643e234beaf/prowler/providers/azure/lib/service/service.py)
Each class is used to initialize the credentials and the API clients to be used in the service. If some threading is used, it must be coded there.
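The real base classes linked above differ per provider, but the pattern they implement can be sketched as follows; the class name, the fake per-region "clients" and the attribute names are assumptions for illustration only:
```python
# Heavily simplified sketch (not the real Prowler code) of the pattern the
# provider service base classes implement: keep audit_info, build one client
# per region, and parallelise per-region collection with threads.
import threading


class AWSServiceSketch:
    def __init__(self, service_name, audit_info):
        self.service = service_name
        self.audit_info = audit_info
        self.audited_account = audit_info.audited_account
        # One "client" per audited region; here it is just a string, in the
        # real class it is created from the provider session in audit_info.
        self.regional_clients = {
            region: f"{service_name}-client-{region}"
            for region in audit_info.audited_regions
        }

    def __threading_call__(self, call):
        # Run `call(regional_client)` concurrently for every region.
        threads = [
            threading.Thread(target=call, args=(client,))
            for client in self.regional_clients.values()
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
```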
## Service Class
Due to the complexity and differences of each provider API, we are going to use an example service to guide you in how it can be created.
The following is the `<service>_service.py` file:
```python title="Service Class"
from datetime import datetime
from typing import Optional
# The following is just for the AWS provider
from botocore.client import ClientError
# To use the Pydantic's BaseModel
from pydantic import BaseModel
# Prowler logging library
from prowler.lib.logger import logger
# Prowler resource filter, only for the AWS provider
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
# Provider parent class
from prowler.providers.<provider>.lib.service.service import ServiceParentClass
# Create a class for the Service
################## <Service>
class <Service>(ServiceParentClass):
def __init__(self, audit_info):
# Call Service Parent Class __init__
# We use the __class__.__name__ to get it automatically
# from the Service Class name but you can pass a custom
# string if the provider's API service name is different
super().__init__(__class__.__name__, audit_info)
# Create an empty dictionary of items to be gathered,
# using the unique ID as the dictionary key
# e.g., instances
self.<items> = {}
# If you can parallelize by regions or locations
# you can use the __threading_call__ function
# available in the Service Parent Class
self.__threading_call__(self.__describe_<items>__)
# Optionally you can create another function to retrieve
# more data about each item without parallel
self.__describe_<item>__()
def __describe_<items>__(self, regional_client):
"""Get ALL <Service> <Items>"""
logger.info("<Service> - Describing <Items>...")
# We MUST include a try/except block in each function
try:
# Call to the provider API to retrieve the data we want
describe_<items>_paginator = regional_client.get_paginator("describe_<items>")
# Paginator to get every item
for page in describe_<items>_paginator.paginate():
# Another try/except within the loop for to continue looping
# if something unexpected happens
try:
for <item> in page["<Items>"]:
# For the AWS provider we MUST include the following lines to retrieve
# or not data for the resource passed as argument using the --resource-arn
if not self.audit_resources or (
is_resource_filtered(<item>["<item_arn>"], self.audit_resources)
):
# Then we have to include the retrieved resource in the object
# previously created
self.<items>[<item_unique_id>] =
<Item>(
arn=stack["<item_arn>"],
name=stack["<item_name>"],
tags=stack.get("Tags", []),
region=regional_client.region,
)
except Exception as error:
logger.error(
f"{<provider_specific_field>} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# In the except part we have to use the following code to log the errors
except Exception as error:
# Depending on each provider we can use the following fields in the logger:
# - AWS: regional_client.region or self.region
# - GCP: project_id and location
# - Azure: subscription
logger.error(
f"{<provider_specific_field>} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __describe_<item>__(self):
"""Get Details for a <Service> <Item>"""
logger.info("<Service> - Describing <Item> to get specific details...")
# We MUST include a try/except block in each function
try:
# Loop over the items retrieved in the previous function
            for <item> in self.<items>.values():
# When we perform calls to the Provider API within a for loop we have
# to include another try/except block because in the cloud there are
# ephemeral resources that can be deleted at the time we are checking them
try:
<item>_details = self.regional_clients[<item>.region].describe_<item>(
<Attribute>=<item>.name
)
# For example, check if item is Public. Here is important if we are
# getting values from a dictionary we have to use the "dict.get()"
# function with a default value in the case this value is not present
<item>.public = <item>_details.get("Public", False)
# In this except block, for example for the AWS Provider we can use
# the botocore.ClientError exception and check for a specific error code
# to raise a WARNING instead of an ERROR if some resource is not present.
except ClientError as error:
if error.response["Error"]["Code"] == "InvalidInstanceID.NotFound":
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{<provider_specific_field>} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
continue
# In the except part we have to use the following code to log the errors
except Exception as error:
# Depending on each provider we can use the following fields in the logger:
# - AWS: regional_client.region or self.region
# - GCP: project_id and location
# - Azure: subscription
logger.error(
f"{<item>.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
```
### Service Models
For each resource we need to model, we use Pydantic's [BaseModel](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel) to take advantage of its data validation.
```python title="Service Model"
# In each service class we have to create some classes using
# Pydantic's BaseModel for the resources we want to audit.
class <Item>(BaseModel):
"""<Item> holds a <Service> <Item>"""
arn: str
"""<Items>[].arn"""
name: str
"""<Items>[].name"""
region: str
"""<Items>[].region"""
public: bool
"""<Items>[].public"""
# We can create Optional attributes set to None by default
tags: Optional[list]
"""<Items>[].tags"""
```
### Service Objects
In the service each group of resources should be created as a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries). This is because we are performing lookups all the time and the Python dictionary lookup has [O(1) complexity](https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions).
We MUST use a unique identifier, such as the resource's unique ID or ARN, as the dictionary key.
Example:
```python
self.vpcs = {}
self.vpcs["vpc-01234567890abcdef"] = VPC_Object_Class()
```
## Service Client
Each Prowler service requires a service client so the service can be used in the checks.
The following is the `<service>_client.py` file, containing the initialization of the service class we have just created so the service's checks can use it:
```python
from prowler.providers.<provider>.lib.audit_info.audit_info import audit_info
from prowler.providers.<provider>.services.<service>.<service>_service import <Service>
<service>_client = <Service>(audit_info)
```
## Permissions
It is really important to check whether the current Prowler permissions for each provider are enough to implement a new service. If more permissions are needed, please refer to the following documentation and update it:
- AWS: https://docs.prowler.cloud/en/latest/getting-started/requirements/#aws-authentication
- Azure: https://docs.prowler.cloud/en/latest/getting-started/requirements/#permissions
- GCP: https://docs.prowler.cloud/en/latest/getting-started/requirements/#gcp-authentication

View File

@@ -0,0 +1,590 @@
# Unit Tests
The unit tests for the Prowler checks vary between the supported providers.
Here are some good reads about unit testing and things we have learned throughout the process.
**Python Testing**
- https://docs.python-guide.org/writing/tests/
**Where to patch**
- https://docs.python.org/3/library/unittest.mock.html#where-to-patch
- https://stackoverflow.com/questions/893333/multiple-variables-in-a-with-statement
- https://docs.python.org/3/reference/compound_stmts.html#the-with-statement
**Utils to trace mocking and test execution**
- https://news.ycombinator.com/item?id=36054868
- https://docs.python.org/3/library/sys.html#sys.settrace
- https://github.com/kunalb/panopticon
## General Recommendations
When creating tests for a provider's checks we follow these guidelines, trying to cover as many test scenarios as possible (a minimal skeleton is sketched after this list):
1. Create a test without resources to verify that 0 findings are generated, since Prowler produces 0 findings when a service does not contain the resources the check audits.
2. Create tests that generate both a `PASS` and a `FAIL` result.
3. Create tests with more than one resource to evaluate how the check behaves and whether the number of findings is right.
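A minimal, hypothetical test layout following these guidelines (the class name, test names and bodies are placeholders) could look like this:
```python
# Hypothetical skeleton: replace the placeholders with the real service/check under test
class Test_<service>_<check_name>:
    def test_no_resources(self):
        # 1. No resources in the audited service -> the check must return 0 findings
        ...

    def test_compliant_resource(self):
        # 2. One compliant resource -> exactly one PASS finding
        ...

    def test_non_compliant_resource(self):
        # 2. One non-compliant resource -> exactly one FAIL finding
        ...

    def test_multiple_resources(self):
        # 3. Several resources -> one finding per audited resource
        ...
```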
## How to run Prowler tests
To run the Prowler test suite you need to install the testing dependencies, which are already included in the `pyproject.toml` file. If you haven't installed them yet, please read the developer guide introduction [here](./introduction.md#get-the-code-and-install-all-dependencies).
Then, from the project's root path, execute `pytest -n auto -vvv -s -x` or use the `Makefile` with `make test`.
Other commands to run tests:
- Run tests for a provider: `pytest -n auto -vvv -s -x tests/providers/<provider>/services`
- Run tests for a provider service: `pytest -n auto -vvv -s -x tests/providers/<provider>/services/<service>`
- Run tests for a provider check: `pytest -n auto -vvv -s -x tests/providers/<provider>/services/<service>/<check>`
> Refer to the [pytest documentation](https://docs.pytest.org/en/7.1.x/getting-started.html) for more information.
## AWS
For the AWS provider we have several ways to test a Prowler check, based on the following criteria:
> Note: We use and contribute to the [Moto](https://github.com/getmoto/moto) library which allows us to easily mock out tests based on AWS infrastructure. **It's awesome!**
- AWS API calls covered by [Moto](https://github.com/getmoto/moto):
- Service tests with `@mock_<service>`
- Checks tests with `@mock_<service>`
- AWS API calls not covered by Moto:
- Service test with `mock_make_api_call`
- Checks tests with [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock)
- AWS API calls partially covered by Moto:
- Service test with `@mock_<service>` and `mock_make_api_call`
- Checks tests with `@mock_<service>` and `mock_make_api_call`
In the following sections we are going to explain all of the above scenarios with examples, based on whether the [Moto](https://github.com/getmoto/moto) library covers the AWS API calls made by the service. You can check the covered API calls [here](https://github.com/getmoto/moto/blob/master/IMPLEMENTATION_COVERAGE.md).
An important point for AWS testing is that each test MUST have its own `audit_info`, which is the key object during AWS execution, in order to isolate the test execution.
Check the [Audit Info](./audit-info.md) section for more details.
```python
# We need to import the AWS_Audit_Info and the Audit_Metadata
# to build the audit_info used to call AWS APIs
from boto3 import session

from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata

AWS_ACCOUNT_NUMBER = "123456789012"
def set_mocked_audit_info(self):
audit_info = AWS_Audit_Info(
session_config=None,
original_session=None,
audit_session=session.Session(
profile_name=None,
botocore_session=None,
),
audit_config=None,
audited_account=AWS_ACCOUNT_NUMBER,
audited_account_arn=f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root",
audited_user_id=None,
audited_partition="aws",
audited_identity_arn=None,
profile=None,
profile_region=None,
credentials=None,
assumed_role_info=None,
audited_regions=["us-east-1", "eu-west-1"],
organizations_metadata=None,
audit_resources=None,
mfa_enabled=False,
audit_metadata=Audit_Metadata(
services_scanned=0,
expected_checks=[],
completed_checks=0,
audit_progress=0,
),
)
return audit_info
```
### Checks
For the AWS test examples we are going to use the tests for the `iam_password_policy_uppercase` check.
This section is divided based on the API coverage of the [Moto](https://github.com/getmoto/moto) library.
#### API calls covered
If the [Moto](https://github.com/getmoto/moto) library covers the API calls we want to test, we can use the `@mock_<service>` decorator, which will mock out all the API calls made to AWS, keeping the state within the decorated code, in this case the test function.
```python
# We need to import the unittest.mock to allow us to patch some objects
# not to use shared ones between test, hence to isolate the test
from unittest import mock
# Boto3 client and session to call the AWS APIs
from boto3 import client, session
# Moto decorator for the IAM service we want to mock
from moto import mock_iam
# Constants used
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_REGION = "us-east-1"
# We always name the test classes like Test_<check_name>
class Test_iam_password_policy_uppercase:
# We include the Moto decorator for the service we want to use
# You can include more than one if two or more services are
    # involved in the test
@mock_iam
# We name the tests with test_<service>_<check_name>_<test_action>
def test_iam_password_policy_no_uppercase_flag(self):
# First, we have to create an IAM client
iam_client = client("iam", region_name=AWS_REGION)
        # Then, since all AWS accounts have a password policy,
        # we set RequireUppercaseCharacters to False
iam_client.update_account_password_policy(RequireUppercaseCharacters=False)
# We set a mocked audit_info for AWS not to share the same audit state
# between checks
current_audit_info = self.set_mocked_audit_info()
# The Prowler service import MUST be made within the decorated
# code not to make real API calls to the AWS service.
from prowler.providers.aws.services.iam.iam_service import IAM
        # Prowler for AWS uses a shared object called `current_audit_info` where it stores
        # the audit's state, credentials and configuration.
        # We also have to mock the iam_client used by the check to enforce that it is the one
        # created within this test, because patch != import: if you execute tests in parallel,
        # some objects can already be initialised and the check would not be isolated
        with mock.patch(
            "prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
            new=current_audit_info,
        ), mock.patch(
            "prowler.providers.aws.services.iam.iam_password_policy_uppercase.iam_password_policy_uppercase.iam_client",
            new=IAM(current_audit_info),
        ):
# We import the check within the two mocks not to initialise the iam_client with some shared information from
# the current_audit_info or the IAM service.
from prowler.providers.aws.services.iam.iam_password_policy_uppercase.iam_password_policy_uppercase import (
iam_password_policy_uppercase,
)
# Once imported, we only need to instantiate the check's class
check = iam_password_policy_uppercase()
# And then, call the execute() function to run the check
# against the IAM client we've set up.
result = check.execute()
# Last but not least, we need to assert all the fields
# from the check's results
            assert len(result) == 1
assert result[0].status == "FAIL"
assert result[0].status_extended == "IAM password policy does not require at least one uppercase letter."
assert result[0].resource_arn == f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
assert result[0].resource_id == AWS_ACCOUNT_NUMBER
assert result[0].resource_tags == []
assert result[0].region == AWS_REGION
```
#### API calls not covered
If the API calls needed by the check we want to test are not covered by Moto, we have to inject the objects into the service client using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock), because we cannot instantiate the service since it would make real calls to the AWS APIs.
> The following example uses the IAM GetAccountPasswordPolicy which is covered by Moto but this is only for demonstration purposes.
The following code shows how to use MagicMock to create the service objects.
```python
# We need to import the unittest.mock to allow us to patch some objects
# not to use shared ones between test, hence to isolate the test
from unittest import mock
# Constants used
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_REGION = "us-east-1"
# We always name the test classes like Test_<check_name>
class Test_iam_password_policy_uppercase:
# We name the tests with test_<service>_<check_name>_<test_action>
def test_iam_password_policy_no_uppercase_flag(self):
# Mocked client with MagicMock
mocked_iam_client = mock.MagicMock
        # Since the IAM Password Policy has its own model we have to import it
from prowler.providers.aws.services.iam.iam_service import PasswordPolicy
# Create the mock PasswordPolicy object
mocked_iam_client.password_policy = PasswordPolicy(
length=5,
symbols=True,
numbers=True,
# We set the value to False to test the check
uppercase=False,
lowercase=True,
allow_change=False,
expiration=True,
)
# We set a mocked audit_info for AWS not to share the same audit state
# between checks
current_audit_info = self.set_mocked_audit_info()
        # In this scenario we also have to mock the IAM service and the iam_client from the check to enforce
        # that the iam_client used is the one created within this test, because patch != import: if you execute
        # tests in parallel, some objects can already be initialised and the check would not be isolated.
        # Since we don't use the Moto decorator here, we use the mocked IAM client for both objects
with mock.patch(
"prowler.providers.aws.services.iam.iam_service.IAM",
new=mocked_iam_client,
), mock.patch(
"prowler.providers.aws.services.iam.iam_client.iam_client",
new=mocked_iam_client,
):
# We import the check within the two mocks not to initialise the iam_client with some shared information from
# the current_audit_info or the IAM service.
from prowler.providers.aws.services.iam.iam_password_policy_uppercase.iam_password_policy_uppercase import (
iam_password_policy_uppercase,
)
# Once imported, we only need to instantiate the check's class
check = iam_password_policy_uppercase()
# And then, call the execute() function to run the check
# against the IAM client we've set up.
result = check.execute()
# Last but not least, we need to assert all the fields
# from the check's results
            assert len(result) == 1
assert result[0].status == "FAIL"
assert result[0].status_extended == "IAM password policy does not require at least one uppercase letter."
assert result[0].resource_arn == f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
assert result[0].resource_id == AWS_ACCOUNT_NUMBER
assert result[0].resource_tags == []
assert result[0].region == AWS_REGION
```
#### API calls partially covered
If the API calls we want to use in the service are only partially covered by the Moto decorator, we have to create our own mocked API calls and use them in combination with Moto.
To do so, you need to mock the `botocore.client.BaseClient._make_api_call` function, which is the botocore function in charge of making the real API call to the AWS APIs, using [mock.patch](https://docs.python.org/3/library/unittest.mock.html#patch):
```python
import boto3
import botocore
from unittest.mock import patch
from moto import mock_iam
# Original botocore _make_api_call function
orig = botocore.client.BaseClient._make_api_call
# Mocked botocore _make_api_call function
def mock_make_api_call(self, operation_name, kwarg):
    # Even though the boto3 client method is called in snake_case (get_account_password_policy),
    # the operation_name arrives here in PascalCase (GetAccountPasswordPolicy).
    # Rationale -> https://github.com/boto/botocore/blob/develop/botocore/client.py#L810:L816
if operation_name == 'GetAccountPasswordPolicy':
return {
'PasswordPolicy': {
'MinimumPasswordLength': 123,
                'RequireSymbols': True,
                'RequireNumbers': True,
                'RequireUppercaseCharacters': False,
                'RequireLowercaseCharacters': True,
                'AllowUsersToChangePassword': True,
                'ExpirePasswords': True,
                'MaxPasswordAge': 123,
                'PasswordReusePrevention': 123,
                'HardExpiry': True
}
}
# If we don't want to patch the API call
return orig(self, operation_name, kwarg)
# We always name the test classes like Test_<check_name>
class Test_iam_password_policy_uppercase:
# We include the custom API call mock decorator for the service we want to use
@patch("botocore.client.BaseClient._make_api_call", new=mock_make_api_call)
# We include also the IAM Moto decorator for the API calls supported
@mock_iam
# We name the tests with test_<service>_<check_name>_<test_action>
def test_iam_password_policy_no_uppercase_flag(self):
        # Check the previous section for the test body, since it is the same
        ...
```
Note that this does not use Moto, to keep it simple, but if you use any `moto`-decorators in addition to the patch, the call to `orig(self, operation_name, kwarg)` will be intercepted by Moto.
> The above code is adapted from https://docs.getmoto.org/en/latest/docs/services/patching_other_services.html
#### Mocking more than one service
If the test you are creating belongs to a check that uses more than one provider service, you should mock each of the services used. For example, the check `cloudtrail_logs_s3_bucket_access_logging_enabled` requires the CloudTrail and the S3 clients, hence the service-mocking part of the test will be as follows:
```python
with mock.patch(
"prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
new=mock_audit_info,
), mock.patch(
"prowler.providers.aws.services.cloudtrail.cloudtrail_logs_s3_bucket_access_logging_enabled.cloudtrail_logs_s3_bucket_access_logging_enabled.cloudtrail_client",
new=Cloudtrail(mock_audit_info),
), mock.patch(
"prowler.providers.aws.services.cloudtrail.cloudtrail_logs_s3_bucket_access_logging_enabled.cloudtrail_logs_s3_bucket_access_logging_enabled.s3_client",
new=S3(mock_audit_info),
):
```
As you can see in the above code, it is required to mock the AWS audit info and both services used.
#### Patching vs. Importing
This is an important topic within the Prowler check unit testing. Due to the dynamic way checks are loaded, the process of importing the service client from a check is the following:
1. `<check>.py`:
```python
from prowler.providers.<provider>.services.<service>.<service>_client import <service>_client
```
2. `<service>_client.py`:
```python
from prowler.providers.<provider>.lib.audit_info.audit_info import audit_info
from prowler.providers.<provider>.services.<service>.<service>_service import <SERVICE>
<service>_client = <SERVICE>(audit_info)
```
Due to the above import path, it is not the same to patch each of the following objects, because if you run a bunch of tests, in parallel or not, some clients can already be instantiated by another check, hence your test execution would be using another test's service instance:
- `<service>_client` imported at `<check>.py`
- `<service>_client` initialised at `<service>_client.py`
- `<SERVICE>` imported at `<service>_client.py`
A useful read about this topic can be found in the following article: https://stackoverflow.com/questions/8658043/how-to-mock-an-import
#### Different ways to mock the service client
##### Mocking the service client at the service client level
Mocking a service client using the following code ...
```python title="Mocking the service_client"
with mock.patch(
"prowler.providers.<provider>.lib.audit_info.audit_info.audit_info",
new=audit_info,
), mock.patch(
"prowler.providers.aws.services.<service>.<check>.<check>.<service>_client",
new=<SERVICE>(audit_info),
):
```
will cause the service to be initialised twice:
1. When `<SERVICE>(audit_info)` is mocked out using `mock.patch`, to have the object ready for the patching.
2. At `<service>_client.py` when we patch it, since `mock.patch` needs to go to that object and initialise it, hence `<SERVICE>(audit_info)` will be called again.
Then, when we import the `<service>_client.py` at `<check>.py`, since we are mocking where the object is used, Python will use the mocked one.
In the [next section](./unit-testing.md#mocking-the-service-and-the-service-client-at-the-service-client-level) you will see an improved version to mock objects.
##### Mocking the service and the service client at the service client level
Mocking a service client using the following code ...
```python title="Mocking the service and the service_client"
with mock.patch(
"prowler.providers.<provider>.lib.audit_info.audit_info.audit_info",
new=audit_info,
), mock.patch(
"prowler.providers.aws.services.<service>.<SERVICE>",
return_value=<SERVICE>(audit_info),
) as service_client, mock.patch(
"prowler.providers.aws.services.<service>.<service>_client.<service>_client",
new=service_client,
):
```
will cause the service to be initialised only once, when `<SERVICE>(audit_info)` is mocked out using `mock.patch`.
Then, at the check level, when Python tries to import the client with `from prowler.providers.<provider>.services.<service>.<service>_client import <service>_client`, since it is already mocked out, the execution will continue using the `service_client` without getting into `<service>_client.py`.
### Services
For testing the AWS services we have to follow the same logic as with the AWS checks: we have to check whether the AWS API calls made by the service are covered by Moto, and we have to test the service `__init__` to verify that the information is being correctly retrieved.
The service tests could act as *integration tests*, since we test how the service retrieves the information from the provider, but because Moto or our custom mock objects mock those calls, these tests fall into the *unit tests* category.
Please refer to the [AWS checks tests](./unit-testing.md#checks) for more information on how to create tests and check the existing services tests [here](https://github.com/prowler-cloud/prowler/tree/master/tests/providers/aws/services).
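As an illustrative sketch (the S3 service, its `buckets` attribute and its `name` field are assumptions used only for demonstration, and the `set_mocked_audit_info` helper shown earlier is assumed to be available as a method of the test class), a service `__init__` test backed by Moto could look like this:
```python
from boto3 import client
from moto import mock_s3

AWS_REGION = "us-east-1"


class Test_S3_Service:
    # Assumes the set_mocked_audit_info helper shown earlier is defined in this class
    @mock_s3
    def test_service_buckets(self):
        # Create a resource with Moto so the service has something to retrieve
        s3 = client("s3", region_name=AWS_REGION)
        s3.create_bucket(Bucket="test-bucket")

        # Instantiate the Prowler service within the mocked context
        audit_info = self.set_mocked_audit_info()
        from prowler.providers.aws.services.s3.s3_service import S3

        s3_service = S3(audit_info)

        # Verify that the service __init__ retrieved the mocked resource
        assert len(s3_service.buckets) == 1
        assert s3_service.buckets[0].name == "test-bucket"
```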
## GCP
### Checks
For the GCP Provider we don't have any library to mock out the API calls we use, so in this scenario we inject the objects into the service client using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock).
The following code shows how to use MagicMock to create the service objects for a GCP check test.
```python
# We need to import the unittest.mock to allow us to patch some objects
# not to use shared ones between test, hence to isolate the test
from unittest import mock
# GCP Constants
GCP_PROJECT_ID = "123456789012"
# We are going to create a test for the compute_firewall_rdp_access_from_the_internet_allowed check
class Test_compute_firewall_rdp_access_from_the_internet_allowed:
# We name the tests with test_<service>_<check_name>_<test_action>
def test_compute_compute_firewall_rdp_access_from_the_internet_allowed_one_compliant_rule_with_valid_port(self):
# Mocked client with MagicMock
compute_client = mock.MagicMock
# Assign GCP client configuration
compute_client.project_ids = [GCP_PROJECT_ID]
compute_client.region = "global"
# Import the service resource model to create the mocked object
from prowler.providers.gcp.services.compute.compute_service import Firewall
# Create the custom Firewall object to be tested
firewall = Firewall(
name="test",
id="1234567890",
source_ranges=["0.0.0.0/0"],
direction="INGRESS",
allowed_rules=[{"IPProtocol": "tcp", "ports": ["443"]}],
project_id=GCP_PROJECT_ID,
)
compute_client.firewalls = [firewall]
        # In this scenario we also have to mock the Compute service and the compute_client from the check
        # to enforce that the compute_client used is the one created within this test, because patch != import:
        # if you execute tests in parallel, some objects can already be initialised and the check would not be isolated.
        # Since there is no Moto-like library for GCP, we use the mocked Compute client for both objects
        with mock.patch(
            "prowler.providers.gcp.services.compute.compute_service.Compute",
            new=compute_client,
        ), mock.patch(
            "prowler.providers.gcp.services.compute.compute_client.compute_client",
            new=compute_client,
        ):
            # We import the check within the two mocks not to initialise the compute_client with
            # shared information from another test or the Compute service.
from prowler.providers.gcp.services.compute.compute_firewall_rdp_access_from_the_internet_allowed.compute_firewall_rdp_access_from_the_internet_allowed import (
compute_firewall_rdp_access_from_the_internet_allowed,
)
# Once imported, we only need to instantiate the check's class
check = compute_firewall_rdp_access_from_the_internet_allowed()
# And then, call the execute() function to run the check
            # against the Compute client we've set up.
result = check.execute()
# Last but not least, we need to assert all the fields
# from the check's results
assert len(result) == 1
assert result[0].status == "PASS"
assert result[0].status_extended == f"Firewall {firewall.name} does not expose port 3389 (RDP) to the internet."
            assert result[0].resource_name == firewall.name
            assert result[0].resource_id == firewall.id
            assert result[0].project_id == GCP_PROJECT_ID
            assert result[0].location == compute_client.region
```
### Services
Coming soon ...
## Azure
### Checks
For the Azure Provider we don't have any library to mock out the API calls we use, so in this scenario we inject the objects into the service client using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock).
The following code shows how to use MagicMock to create the service objects for an Azure check test.
```python
# We need to import the unittest.mock to allow us to patch some objects
# not to use shared ones between test, hence to isolate the test
from unittest import mock
from uuid import uuid4
# Azure Constants
AZURE_SUSCRIPTION = str(uuid4())
# We are going to create a test for the Test_defender_ensure_defender_for_arm_is_on check
class Test_defender_ensure_defender_for_arm_is_on:
# We name the tests with test_<service>_<check_name>_<test_action>
def test_defender_defender_ensure_defender_for_arm_is_on_arm_pricing_tier_not_standard(self):
resource_id = str(uuid4())
# Mocked client with MagicMock
defender_client = mock.MagicMock
# Import the service resource model to create the mocked object
from prowler.providers.azure.services.defender.defender_service import Defender_Pricing
# Create the custom Defender object to be tested
defender_client.pricings = {
AZURE_SUSCRIPTION: {
"Arm": Defender_Pricing(
resource_id=resource_id,
pricing_tier="Not Standard",
free_trial_remaining_time=0,
)
}
}
        # In this scenario we also have to mock the Defender service and the defender_client from the check
        # to enforce that the defender_client used is the one created within this test, because patch != import:
        # if you execute tests in parallel, some objects can already be initialised and the check would not be isolated.
        # Since there is no Moto-like library for Azure, we use the mocked Defender client for both objects
with mock.patch(
"prowler.providers.azure.services.defender.defender_service.Defender",
new=defender_client,
), mock.patch(
"prowler.providers.azure.services.defender.defender_client.defender_client",
new=defender_client,
):
            # We import the check within the two mocks not to initialise the defender_client with
            # shared information from another test or the Defender service.
from prowler.providers.azure.services.defender.defender_ensure_defender_for_arm_is_on.defender_ensure_defender_for_arm_is_on import (
defender_ensure_defender_for_arm_is_on,
)
# Once imported, we only need to instantiate the check's class
check = defender_ensure_defender_for_arm_is_on()
# And then, call the execute() function to run the check
            # against the Defender client we've set up.
result = check.execute()
# Last but not least, we need to assert all the fields
# from the check's results
assert len(result) == 1
assert result[0].status == "FAIL"
assert (
result[0].status_extended
== f"Defender plan Defender for ARM from subscription {AZURE_SUSCRIPTION} is set to OFF (pricing tier not standard)"
)
assert result[0].subscription == AZURE_SUSCRIPTION
assert result[0].resource_name == "Defender plan ARM"
assert result[0].resource_id == resource_id
```
### Services
Coming soon ...

View File

@@ -1,6 +1,6 @@
# Requirements
Prowler has been written in Python using the [AWS SDK (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html#) and [Azure SDK](https://learn.microsoft.com/en-us/python/api/overview/azure/?view=azure-python).
Prowler has been written in Python using the [AWS SDK (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html#), [Azure SDK](https://azure.github.io/azure-sdk-for-python/) and [GCP API Python Client](https://github.com/googleapis/google-api-python-client/).
## AWS
Since Prowler uses AWS Credentials under the hood, you can follow any authentication method as described [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence).
@@ -23,12 +23,19 @@ export AWS_SESSION_TOKEN="XXXXXXXXX"
Those credentials must be associated to a user or role with proper permissions to do all checks. To make sure, add the following AWS managed policies to the user or role being used:
- arn:aws:iam::aws:policy/SecurityAudit
- arn:aws:iam::aws:policy/job-function/ViewOnlyAccess
- `arn:aws:iam::aws:policy/SecurityAudit`
- `arn:aws:iam::aws:policy/job-function/ViewOnlyAccess`
> Moreover, some read-only additional permissions are needed for several checks, make sure you attach also the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/iam/prowler-additions-policy.json) to the role you are using.
> Moreover, some read-only additional permissions are needed for several checks, make sure you attach also the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json) to the role you are using.
> If you want Prowler to send findings to [AWS Security Hub](https://aws.amazon.com/security-hub), make sure you also attach the custom policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/iam/prowler-security-hub.json).
> If you want Prowler to send findings to [AWS Security Hub](https://aws.amazon.com/security-hub), make sure you also attach the custom policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json).
### Multi-Factor Authentication
If your IAM entity enforces MFA you can use `--mfa` and Prowler will ask you to input the following values to get a new session:
- ARN of your MFA device
- TOTP (Time-Based One-Time Password)
## Azure
@@ -52,7 +59,7 @@ export AZURE_CLIENT_SECRET="XXXXXXX"
If you try to execute Prowler with the `--sp-env-auth` flag and those variables are empty or not exported, the execution is going to fail.
### AZ CLI / Browser / Managed Identity authentication
The other three cases does not need additional configuration, `--az-cli-auth` and `--managed-identity-auth` are automated options, `--browser-auth` needs the user to authenticate using the default browser to start the scan.
The other three cases do not need additional configuration: `--az-cli-auth` and `--managed-identity-auth` are automated options. To use `--browser-auth` the user needs to authenticate against Azure using the default browser to start the scan, and the `--tenant-id` flag is also required.
### Permissions
@@ -79,3 +86,21 @@ Regarding the subscription scope, Prowler by default scans all the subscriptions
- `Security Reader`
- `Reader`
## Google Cloud
### GCP Authentication
Prowler will follow the same credentials search as [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order):
1. [GOOGLE_APPLICATION_CREDENTIALS environment variable](https://cloud.google.com/docs/authentication/application-default-credentials#GAC)
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
Those credentials must be associated to a user or service account with proper permissions to do all checks. To make sure, add the following roles to the member associated with the credentials:
- Viewer
- Security Reviewer
- Stackdriver Account Viewer
> By default, `prowler` will scan all accessible GCP Projects; use the flag `--project-ids` to specify the projects to be scanned.
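For example (the project ID below is a placeholder), a typical flow using your own user credentials with the Google Cloud CLI would be:
```console
# Create Application Default Credentials with your user account
gcloud auth application-default login
# Run Prowler against a specific project (or omit --project-ids to scan all accessible projects)
prowler gcp --project-ids my-gcp-project
```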

(Several binary image and other large asset files changed in this range; a new image `docs/img/output-html.png` was added.)

View File

@@ -5,18 +5,18 @@
# Prowler Documentation
**Welcome to [Prowler Open Source v3](https://github.com/prowler-cloud/prowler/) Documentation!** 📄
**Welcome to [Prowler Open Source v3](https://github.com/prowler-cloud/prowler/) Documentation!** 📄
For **Prowler v2 Documentation**, please go [here](https://github.com/prowler-cloud/prowler/tree/2.12.0) to the branch and its README.md.
- You are currently in the **Getting Started** section where you can find general information and requirements to help you start with the tool.
- In the [Tutorials](tutorials/overview) section you will see how to take advantage of all the features in Prowler.
- In the [Contact Us](contact) section you can find how to reach us out in case of technical issues.
- In the [About](about) section you will find more information about the Prowler team and license.
- In the [Tutorials](./tutorials/misc.md) section you will see how to take advantage of all the features in Prowler.
- In the [Contact Us](./contact.md) section you can find how to reach us out in case of technical issues.
- In the [About](./about.md) section you will find more information about the Prowler team and license.
## About Prowler
**Prowler** is an Open Source security tool to perform AWS and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.
**Prowler** is an Open Source security tool to perform AWS, Azure and Google Cloud security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.
It contains hundreds of controls covering CIS, PCI-DSS, ISO27001, GDPR, HIPAA, FFIEC, SOC2, AWS FTR, ENS and custom security frameworks.
@@ -40,12 +40,12 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
* `Python >= 3.9`
* `Python pip >= 3.9`
* AWS and/or Azure credentials
* AWS, GCP and/or Azure credentials
_Commands_:
``` bash
pip install prowler-cloud
pip install prowler
prowler -v
```
@@ -54,7 +54,7 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
_Requirements_:
* Have `docker` installed: https://docs.docker.com/get-docker/.
* AWS and/or Azure credentials
* AWS, GCP and/or Azure credentials
* In the command below, change `-v` to your local directory path in order to access the reports.
_Commands_:
@@ -71,7 +71,7 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
_Requirements for Ubuntu 20.04.3 LTS_:
* AWS and/or Azure credentials
* AWS, GCP and/or Azure credentials
* Install python 3.9 with: `sudo apt-get install python3.9`
* Remove python 3.8 to avoid conflicts if you can: `sudo apt-get remove python3.8`
* Make sure you have the python3 distutils package installed: `sudo apt-get install python3-distutils`
@@ -82,27 +82,58 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
_Commands_:
```
pip3.9 install prowler-cloud
pip3.9 install prowler
export PATH=$PATH:/home/$HOME/.local/bin/
prowler -v
```
=== "GitHub"
_Requirements for Developers_:
* AWS, GCP and/or Azure credentials
* `git`, `Python >= 3.9`, `pip` and `poetry` installed (`pip install poetry`)
_Commands_:
```
git clone https://github.com/prowler-cloud/prowler
cd prowler
poetry shell
poetry install
python prowler.py -v
```
=== "Amazon Linux 2"
_Requirements_:
* AWS and/or Azure credentials
* Latest Amazon Linux 2 should come with Python 3.9 already installed however it may need pip. Install Python pip 3.9 with: `sudo dnf install -y python3-pip`.
* AWS, GCP and/or Azure credentials
* Latest Amazon Linux 2 should come with Python 3.9 already installed however it may need pip. Install Python pip 3.9 with: `sudo yum install -y python3-pip`.
* Make sure setuptools for python is already installed with: `pip3 install setuptools`
_Commands_:
```
pip3.9 install prowler-cloud
pip3.9 install prowler
export PATH=$PATH:/home/$HOME/.local/bin/
prowler -v
```
=== "Brew"
_Requirements_:
* `Brew` installed in your Mac or Linux
* AWS, GCP and/or Azure credentials
_Commands_:
``` bash
brew install prowler
prowler -v
```
=== "AWS CloudShell"
    Prowler can be easily executed in AWS CloudShell but it has some prerequisites to be able to do so. AWS CloudShell is a container running `Amazon Linux release 2 (Karoo)` that comes with Python 3.7; since Prowler requires Python >= 3.9 we need to first install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
@@ -118,13 +149,13 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
./configure --enable-optimizations
sudo make altinstall
python3.9 --version
cd
cd
```
_Commands_:
* Once Python 3.9 is available we can install Prowler from pip:
```
pip3.9 install prowler-cloud
pip3.9 install prowler
prowler -v
```
@@ -139,7 +170,7 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
_Commands_:
```
pip install prowler-cloud
pip install prowler
prowler -v
```
@@ -154,7 +185,7 @@ The available versions of Prowler are the following:
The container images are available here:
- [DockerHub](https://hub.docker.com/r/toniblyx/prowler/tags)
- [AWS Public ECR](https://gallery.ecr.aws/o4g1s5r6/prowler)
- [AWS Public ECR](https://gallery.ecr.aws/prowler-cloud/prowler)
## High level architecture
@@ -163,16 +194,16 @@ You can run Prowler from your workstation, an EC2 instance, Fargate or any other
![Architecture](img/architecture.png)
## Basic Usage
To run Prowler, you will need to specify the provider (e.g aws or azure):
To run Prowler, you will need to specify the provider (e.g aws, gcp or azure):
> If no provider specified, AWS will be used for backward compatibility with most of v2 options.
```console
prowler <provider>
```
![Prowler Execution](img/short-display.png)
> Running the `prowler` command without options will use your environment variable credentials, see [Requirements](getting-started/requirements/) section to review the credentials settings.
> Running the `prowler` command without options will use your environment variable credentials, see [Requirements](./getting-started/requirements.md) section to review the credentials settings.
If you miss the former output you can use `--verbose` but Prowler v3 is smoking fast so you won't see much ;)
If you miss the former output you can use `--verbose` but Prowler v3 is smoking fast, so you won't see much ;)
By default, Prowler will generate a CSV, JSON and HTML reports, however you can generate a JSON-ASFF (used by AWS Security Hub) report with `-M` or `--output-modes`:
@@ -195,6 +226,7 @@ For executing specific checks or services you can use options `-c`/`checks` or `
```console
prowler azure --checks storage_blob_public_access_level_is_disabled
prowler aws --services s3 ec2
prowler gcp --services iam compute
```
Also, checks and services can be excluded with options `-e`/`--excluded-checks` or `--excluded-services`:
@@ -202,9 +234,10 @@ Also, checks and services can be excluded with options `-e`/`--excluded-checks`
```console
prowler aws --excluded-checks s3_bucket_public_access
prowler azure --excluded-services defender iam
prowler gcp --excluded-services kms
```
More options and executions methods that will save your time in [Miscelaneous](tutorials/misc.md).
More options and executions methods that will save your time in [Miscellaneous](tutorials/misc.md).
You can always use `-h`/`--help` to access to the usage information and all the possible options:
@@ -221,6 +254,8 @@ prowler aws --profile custom-profile -f us-east-1 eu-south-2
```
> By default, `prowler` will scan all AWS regions.
See more details about AWS Authentication in [Requirements](getting-started/requirements.md)
### Azure
With Azure you need to specify which auth method is going to be used:
@@ -233,15 +268,37 @@ prowler azure --sp-env-auth
prowler azure --az-cli-auth
# To use browser authentication
prowler azure --browser-auth
prowler azure --browser-auth --tenant-id "XXXXXXXX"
# To use managed identity auth
prowler azure --managed-identity-auth
```
More details in [Requirements](getting-started/requirements.md)
See more details about Azure Authentication in [Requirements](getting-started/requirements.md)
Prowler by default scans all the subscriptions that is allowed to scan, if you want to scan a single subscription or various concrete subscriptions you can use the following flag (using az cli auth as example):
Prowler by default scans all the subscriptions that it is allowed to scan; if you want to scan a single subscription or several specific subscriptions you can use the following flag (using az cli auth as example):
```console
prowler azure --az-cli-auth --subscription-ids <subscription ID 1> <subscription ID 2> ... <subscription ID N>
```
### Google Cloud
Prowler will use by default your User Account credentials, you can configure it using:
- `gcloud init` to use a new account
- `gcloud config set account <account>` to use an existing account
Then, obtain your access credentials using: `gcloud auth application-default login`
Otherwise, you can generate and download Service Account keys in JSON format (refer to https://cloud.google.com/iam/docs/creating-managing-service-account-keys) and provide the location of the file with the following argument:
```console
prowler gcp --credentials-file path
```
Prowler by default scans all the GCP Projects that it is allowed to scan; if you want to scan a single project or several specific projects you can use the following flag:
```console
prowler gcp --project-ids <Project ID 1> <Project ID 2> ... <Project ID N>
```
See more details about GCP Authentication in [Requirements](getting-started/requirements.md)

24
docs/security.md Normal file
View File

@@ -0,0 +1,24 @@
# Security
## Software Security
As an **AWS Partner**, we have passed the [AWS Foundational Technical Review (FTR)](https://aws.amazon.com/partners/foundational-technical-review/) and we use the following tools and automation to make sure our code is secure and our dependencies are up to date:
- `bandit` for code security review.
- `safety` and `dependabot` for dependencies.
- `hadolint` and `dockle` for our containers security.
- `snyk` in Docker Hub.
- `clair` in Amazon ECR.
- `vulture`, `flake8`, `black` and `pylint` for formatting and best practices.
## Reporting Vulnerabilities
If you would like to report a vulnerability or have a security concern regarding Prowler Open Source or the ProwlerPro service, please submit the information by contacting help@prowler.pro.
The information you share with Verica as part of this process is kept confidential within Verica and the Prowler team. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.
We will review the submitted report, and assign it a tracking number. We will then respond to you, acknowledging receipt of the report, and outline the next steps in the process.
You will receive a non-automated response to your initial contact within 24 hours, confirming receipt of your reported vulnerability.
We will coordinate public notification of any validated vulnerability with you. Where possible, we prefer that our respective public disclosures be posted simultaneously.

View File

@@ -1,8 +1,14 @@
# Troubleshooting
- Running `prowler` I get `[File: utils.py:15] [Module: utils] CRITICAL: path/redacted: OSError[13]`:
- **Running `prowler` I get `[File: utils.py:15] [Module: utils] CRITICAL: path/redacted: OSError[13]`**:
That is an error related to file descriptors or opened files allowed by your operating system, with `ulimit -n 1000` you solve the issue. We have seen this issue in some macOS Ventura.
That is an error related to file descriptors or opened files allowed by your operating system.
In macOS Ventura, the default value for the `file descriptors` is `256`. With the following command `ulimit -n 1000` you'll increase that value and solve the issue.
If you have a different OS and you are experiencing the same, please increase the value of your `file descriptors`. You can check it running `ulimit -a | grep "file descriptors"`.
This error can also be related to insufficient system resources. To improve performance, Prowler stores information in memory, so it may need to be run on a system with more than 1GB of memory.
See section [Logging](/tutorials/logging/) for further information or [conctact us](/contact/).
See section [Logging](./tutorials/logging.md) for further information or [contact us](./contact.md).
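For example, on macOS or Linux you can check and raise the limit for the current shell session as follows (these are the commands referenced above):
```console
# Check the current limit of open file descriptors
ulimit -a | grep "file descriptors"
# Raise it for the current session
ulimit -n 1000
```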

View File

@@ -7,35 +7,80 @@ You can use `-w`/`--allowlist-file` with the path of your allowlist yaml file, b
## Allowlist Yaml File Syntax
### Account, Check and/or Region can be * to apply for all the cases
### Resources is a list that can have either Regex or Keywords:
### Account, Check and/or Region can be * to apply for all the cases.
### Resources and tags are lists that can have either Regex or Keywords.
### Tags is an optional list that matches on tuples of 'key=value' and are "ANDed" together.
### Use an alternation Regex to match one of multiple tags with "ORed" logic.
### For each check you can except Accounts, Regions, Resources and/or Tags.
########################### ALLOWLIST EXAMPLE ###########################
Allowlist:
Accounts:
Accounts:
"123456789012":
Checks:
Checks:
"iam_user_hardware_mfa_enabled":
Regions:
Regions:
- "us-east-1"
Resources:
Resources:
- "user-1" # Will ignore user-1 in check iam_user_hardware_mfa_enabled
- "user-2" # Will ignore user-2 in check iam_user_hardware_mfa_enabled
"*":
Regions:
"ec2_*":
Regions:
- "*"
Resources:
- "test" # Will ignore every resource containing the string "test" in every account and region
Resources:
- "*" # Will ignore every EC2 check in every account and region
"*":
Regions:
- "*"
Resources:
- "test"
Tags:
- "test=test" # Will ignore every resource containing the string "test" and the tags 'test=test' and
- "project=test|project=stage" # either of ('project=test' OR project=stage) in account 123456789012 and every region
"*":
Checks:
Checks:
"s3_bucket_object_versioning":
Regions:
Regions:
- "eu-west-1"
- "us-east-1"
Resources:
Resources:
- "ci-logs" # Will ignore bucket "ci-logs" AND ALSO bucket "ci-logs-replica" in specified check and regions
- "logs" # Will ignore EVERY BUCKET containing the string "logs" in specified check and regions
- "[[:alnum:]]+-logs" # Will ignore all buckets containing the terms ci-logs, qa-logs, etc. in specified check and regions
- ".+-logs" # Will ignore all buckets containing the terms ci-logs, qa-logs, etc. in specified check and regions
"*":
Regions:
- "*"
Resources:
- "*"
Tags:
- "environment=dev" # Will ignore every resource containing the tag 'environment=dev' in every account and region
"*":
Checks:
"ecs_task_definitions_no_environment_secrets":
Regions:
- "*"
Resources:
- "*"
Exceptions:
Accounts:
- "0123456789012"
Regions:
- "eu-west-1"
- "eu-south-2" # Will ignore every resource in check ecs_task_definitions_no_environment_secrets except the ones in account 0123456789012 located in eu-south-2 or eu-west-1
"123456789012":
Checks:
"*":
Regions:
- "*"
Resources:
- "*"
Exceptions:
Resources:
- "test"
Tags:
- "environment=prod" # Will ignore every resource except in account 123456789012 except the ones containing the string "test" and tag environment=prod
## Supported Allowlist Locations
@@ -63,14 +108,50 @@ prowler aws -w arn:aws:dynamodb:<region_name>:<account_id>:table/<table_name>
```
1. The DynamoDB Table must have the following String keys:
<img src="/img/allowlist-keys.png"/>
<img src="../img/allowlist-keys.png"/>
- The Allowlist Table must have the following columns:
- Accounts (String): This field can contain either an Account ID or an `*` (which applies to all the accounts that use this table as an allowlist).
- Checks (String): This field can contain either a Prowler check name or an `*` (which applies to all the scanned checks).
- Regions (List): This field contains a list of regions where this allowlist rule is applied (it can also contain an `*` to apply to all scanned regions).
- Resources (List): This field contains a list of regular expressions matching the resources to be allowlisted.
- Tags (List): -Optional- This field contains a list of tuples in the form 'key=value' matching the resource tags to be allowlisted.
- Exceptions (Map): -Optional- This field contains a map of lists of accounts/regions/resources/tags to be excepted from the allowlist.
<img src="/img/allowlist-row.png"/>
The following example will allowlist all resources in all accounts for the EC2 checks in the regions `eu-west-1` and `us-east-1` with the tags `environment=dev` and `environment=prod`, except the resources containing the string `test` in the account `012345678912` and region `eu-west-1` with the tag `environment=prod`:
<img src="../img/allowlist-row.png"/>
> Make sure that the used AWS credentials have `dynamodb:PartiQLSelect` permissions in the table.
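For instance, a policy statement granting that permission (written in the same CloudFormation-style format used in the Lambda section below, with placeholder region, account ID and table name) could look like:
```
- PolicyName: GetAllowListDynamoDB
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
      - Action: 'dynamodb:PartiQLSelect'
        Effect: Allow
        Resource: arn:aws:dynamodb:REGION:ACCOUNT_ID:table/TABLE_NAME
```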
### AWS Lambda ARN
You will need to pass the AWS Lambda Function ARN:
```
prowler aws -w arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME
```
Make sure that the credentials that Prowler uses can invoke the Lambda Function:
```
- PolicyName: GetAllowList
PolicyDocument:
Version: '2012-10-17'
Statement:
- Action: 'lambda:InvokeFunction'
Effect: Allow
Resource: arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME
```
The Lambda Function can then generate an Allowlist dynamically. Here is the code of an example Python Lambda Function that generates an Allowlist:
```
def handler(event, context):
checks = {}
checks["vpc_flow_logs_enabled"] = { "Regions": [ "*" ], "Resources": [ "" ], Optional("Tags"): [ "key:value" ] }
al = { "Allowlist": { "Accounts": { "*": { "Checks": checks } } } }
return al
```

View File

@@ -0,0 +1,35 @@
# AWS Authentication
Make sure you have properly configured your AWS CLI with a valid Access Key and Region, or declare the AWS variables properly (or use an instance profile/role):
```console
aws configure
```
or
```console
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"
```
Those credentials must be associated to a user or role with proper permissions to do all checks. To make sure, add the following AWS managed policies to the user or role being used:
- `arn:aws:iam::aws:policy/SecurityAudit`
- `arn:aws:iam::aws:policy/job-function/ViewOnlyAccess`
> Moreover, some read-only additional permissions are needed for several checks, make sure you attach also the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json) to the role you are using.
> If you want Prowler to send findings to [AWS Security Hub](https://aws.amazon.com/security-hub), make sure you also attach the custom policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json).
## Multi-Factor Authentication
If your IAM entity enforces MFA you can use `--mfa` and Prowler will ask you to input the following values to get a new session:
- ARN of your MFA device
- TOTP (Time-Based One-Time Password)
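An illustrative session (the device ARN below is a placeholder) would look like this:
```console
prowler aws --mfa
# Prowler will then prompt for:
#   - the ARN of your MFA device, e.g. arn:aws:iam::123456789012:mfa/my-user
#   - the TOTP code generated by that device
```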
## STS Endpoint Region
If you are using Prowler in AWS regions that are not enabled by default you need to use the argument `--sts-endpoint-region` to point the AWS STS API calls `assume-role` and `get-caller-identity` to the non-default region, e.g.: `prowler aws --sts-endpoint-region eu-south-2`.

View File

@@ -0,0 +1,34 @@
# Boto3 Retrier Configuration
Prowler's AWS Provider uses the Boto3 [Standard](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html) retry mode to assist in retrying client calls to AWS services when these kinds of errors or exceptions are experienced. This mode includes the following behaviours:
- A default value of 3 for maximum retry attempts. This can be overridden with the `--aws-retries-max-attempts` argument (see the example at the end of this section).
- Retry attempts for an expanded list of errors/exceptions:
```
# Transient errors/exceptions
RequestTimeout
RequestTimeoutException
PriorRequestNotComplete
ConnectionError
HTTPClientError
# Service-side throttling/limit errors and exceptions
Throttling
ThrottlingException
ThrottledException
RequestThrottledException
TooManyRequestsException
ProvisionedThroughputExceededException
TransactionInProgressException
RequestLimitExceeded
BandwidthLimitExceeded
LimitExceededException
RequestThrottled
SlowDown
EC2ThrottledException
```
- Retry attempts on nondescriptive, transient error codes. Specifically, these HTTP status codes: 500, 502, 503, 504.
- Any retry attempt will include an exponential backoff by a base factor of 2 for a maximum backoff time of 20 seconds.
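For example, to raise the maximum retry attempts from the command line using the argument mentioned above:
```console
# Allow up to 5 retry attempts for throttled or transient AWS API errors
prowler aws --aws-retries-max-attempts 5
```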

View File

@@ -1,6 +1,6 @@
# AWS CloudShell
Prowler can be easely executed in AWS CloudShell but it has some prerequsites to be able to to so. AWS CloudShell is a container running with `Amazon Linux release 2 (Karoo)` that comes with Python 3.7, since Prowler requires Python >= 3.9 we need to first install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
Prowler can be easily executed in AWS CloudShell but it has some prerequisites to be able to do so. AWS CloudShell is a container running `Amazon Linux release 2 (Karoo)` that comes with Python 3.7; since Prowler requires Python >= 3.9 we need to first install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
- First install all dependencies and then Python; in this case we need to compile it because there is no package available at the time this document is written:
```
@@ -15,7 +15,7 @@ cd
```
- Once Python 3.9 is available we can install Prowler from pip:
```
pip3.9 install prowler-cloud
pip3.9 install prowler
```
- Now enjoy Prowler:
```

View File

@@ -1,6 +1,6 @@
# Scan Multiple AWS Accounts
Prowler can scan multiple accounts when it is ejecuted from one account that can assume a role in those given accounts to scan using [Assume Role feature](role-assumption.md) and [AWS Organizations integration feature](organizations.md).
Prowler can scan multiple accounts when it is executed from one account that can assume a role in those given accounts to scan using [Assume Role feature](role-assumption.md) and [AWS Organizations integration feature](organizations.md).
## Scan multiple specific accounts sequentially
@@ -41,7 +41,7 @@ for accountId in $ACCOUNTS_LIST; do
done
```
## Scan mutiple accounts from AWS Organizations in parallel
## Scan multiple accounts from AWS Organizations in parallel
- Declare a variable with all the accounts to scan. To do so, get the list of your AWS accounts in your AWS Organization by running the following command (will create a variable with all your ACTIVE accounts). Remember to run that command with the permissions needed to get that information in your AWS Organizations Management account.

View File

@@ -1,18 +1,19 @@
# AWS Organizations
## Get AWS Account details from your AWS Organization:
## Get AWS Account details from your AWS Organization
Prowler allows you to get additional information about the scanned account in the CSV and JSON outputs. When scanning a single account you get the Account ID as part of the output.
If you have AWS Organizations, Prowler can get your account details such as Account Name, Email, ARN, Organization ID and Tags, and you will have them next to every finding in the CSV and JSON outputs.
- In order to do that you can use the option `-O`/`--organizations-role <organizations_role_arn>`. See the following sample command:
In order to do that you can use the option `-O`/`--organizations-role <organizations_role_arn>`. See the following sample command:
```shell
prowler aws \
-O arn:aws:iam::<management_organizations_account_id>:role/<role_name>
```
prowler aws -O arn:aws:iam::<management_organizations_account_id>:role/<role_name>
```
> Make sure the role in your AWS Organizatiosn management account has the permissions `organizations:ListAccounts*` and `organizations:ListTagsForResource`.
> Make sure the role in your AWS Organizations management account has the permissions `organizations:ListAccounts*` and `organizations:ListTagsForResource`.
- In that command Prowler will scan the account and getting the account details from the AWS Organizations management account assuming a role and creating two reports with those details in JSON and CSV.
With that command Prowler will scan the account and get the account details from the AWS Organizations management account by assuming a role, creating two reports with those details in JSON and CSV.
In the JSON output below (redacted) you can see the tags encoded in base64 to prevent breaking the CSV or JSON format:
@@ -30,20 +31,28 @@ The additional fields in CSV header output are as follow:
ACCOUNT_DETAILS_EMAIL,ACCOUNT_DETAILS_NAME,ACCOUNT_DETAILS_ARN,ACCOUNT_DETAILS_ORG,ACCOUNT_DETAILS_TAGS
```
## Assume Role and across all accounts in AWS Organizations or just a list of accounts:
## Extra: run Prowler across all accounts in AWS Organizations by assuming roles
If you want to run Prowler across all accounts of AWS Organizations you can do this:
- First get a list of accounts that are not suspended:
1. First get a list of accounts that are not suspended:
```
ACCOUNTS_IN_ORGS=$(aws organizations list-accounts --query Accounts[?Status==`ACTIVE`].Id --output text)
```
```shell
ACCOUNTS_IN_ORGS=$(aws organizations list-accounts \
--query "Accounts[?Status=='ACTIVE'].Id" \
--output text \
)
```
- Then run Prowler to assume a role (same in all members) per each account, in this example it is just running one particular check:
2. Then run Prowler to assume a role (the same one in all member accounts) for each account:
```
for accountId in $ACCOUNTS_IN_ORGS; do prowler aws -O arn:aws:iam::<management_organizations_account_id>:role/<role_name>; done
```
```shell
for accountId in $ACCOUNTS_IN_ORGS;
do
prowler aws \
-O arn:aws:iam::<management_organizations_account_id>:role/<role_name> \
-R arn:aws:iam::"${accountId}":role/<role_name>;
done
```
- Using the same for loop it can be scanned a list of accounts with a variable like `ACCOUNTS_LIST='11111111111 2222222222 333333333'`
> Using the same for loop you can scan a specific list of accounts by setting a variable like `ACCOUNTS_LIST='11111111111 2222222222 333333333'`, for example:
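A sketch of that variant, reusing the loop above with an explicit account list:
```shell
ACCOUNTS_LIST='11111111111 2222222222 333333333'
for accountId in $ACCOUNTS_LIST;
do
  prowler aws \
    -O arn:aws:iam::<management_organizations_account_id>:role/<role_name> \
    -R arn:aws:iam::"${accountId}":role/<role_name>;
done
```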

View File

@@ -0,0 +1,81 @@
# AWS Regions and Partitions
By default Prowler is able to scan the following AWS partitions:
- Commercial: `aws`
- China: `aws-cn`
- GovCloud (US): `aws-us-gov`
> To check the available regions for each partition and service please refer to the following document [aws_regions_by_service.json](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/aws_regions_by_service.json)
Note that to scan the China (`aws-cn`) or GovCloud (`aws-us-gov`) partitions you must either have a valid region for that partition configured in your AWS credentials or specify the regions you want to audit for that partition using the `-f/--region` flag.
> Please, refer to https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials for more information about the AWS credentials configuration.
You can get more information about the available partitions and regions in the following [Botocore](https://github.com/boto/botocore) [file](https://github.com/boto/botocore/blob/22a19ea7c4c2c4dd7df4ab8c32733cba0c7597a4/botocore/data/partitions.json).
## AWS China
To scan your AWS account in the China partition (`aws-cn`):
- Using the `-f/--region` flag:
```
prowler aws --region cn-north-1 cn-northwest-1
```
- Using the region configured in your AWS profile at `~/.aws/credentials` or `~/.aws/config`:
```
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXX
region = cn-north-1
```
> With this option all the partition regions will be scanned without needing to use the `-f/--region` flag
## AWS GovCloud (US)
To scan your AWS account in the GovCloud (US) partition (`aws-us-gov`):
- Using the `-f/--region` flag:
```
prowler aws --region us-gov-east-1 us-gov-west-1
```
- Using the region configured in your AWS profile at `~/.aws/credentials` or `~/.aws/config`:
```
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXX
region = us-gov-east-1
```
> With this option all the partition regions will be scanned without needing to use the `-f/--region` flag
## AWS ISO (US & Europe)
For the AWS ISO partitions, which are known as "secret partitions" and are air-gapped from the Internet, there is no built-in way to scan them. If you want to audit an AWS account in one of the AWS ISO partitions you should manually update the [aws_regions_by_service.json](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/aws_regions_by_service.json) and include the partition, regions and services, e.g.:
```json
"iam": {
"regions": {
"aws": [
"eu-west-1",
"us-east-1",
],
"aws-cn": [
"cn-north-1",
"cn-northwest-1"
],
"aws-us-gov": [
"us-gov-east-1",
"us-gov-west-1"
],
"aws-iso": [
"aws-iso-global",
"us-iso-east-1",
"us-iso-west-1"
],
"aws-iso-b": [
"aws-iso-b-global",
"us-isob-east-1"
],
"aws-iso-e": [],
}
},
```
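Once the file includes your ISO partition, regions and services, a run scoped to one of those regions could look like this (region name taken from the snippet above):
```
prowler aws --region us-iso-east-1
```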

View File

@@ -0,0 +1,9 @@
# Resource ARNs based Scan
Prowler allows you to scan only the resources with specific AWS Resource ARNs. This can be done with the flag `--resource-arn` followed by one or more [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) separated by spaces:
```
prowler aws --resource-arn arn:aws:iam::012345678910:user/test arn:aws:ec2:us-east-1:123456789012:vpc/vpc-12345678
```
This example will only scan the two resources with those ARNs.

View File

@@ -5,6 +5,13 @@ Prowler uses the AWS SDK (Boto3) underneath so it uses the same authentication m
However, there are a few ways to run Prowler against multiple accounts using the IAM Assume Role feature, depending on each use case:
1. You can just set up your custom profile inside `~/.aws/config` with all needed information about the role to assume then call it with `prowler aws -p/--profile your-custom-profile`.
- An example profile that performs role-chaining is given below. The `credential_source` can either be set to `Environment`, `Ec2InstanceMetadata`, or `EcsContainer`.
- Alternatively, you could use `source_profile` instead of `credential_source` to specify a separate named profile that contains IAM user credentials with permission to assume the target role. More information can be found [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html).
```
[profile crossaccountrole]
role_arn = arn:aws:iam::234567890123:role/SomeRole
credential_source = EcsContainer
```
2. You can use `-R`/`--role <role_arn>` and Prowler will get those temporary credentials using `Boto3` and run against that given account.
```sh
@@ -16,10 +23,21 @@ prowler aws -R arn:aws:iam::<account_id>:role/<role_name>
prowler aws -T/--session-duration <seconds> -I/--external-id <external_id> -R arn:aws:iam::<account_id>:role/<role_name>
```
## STS Endpoint Region
If you are using Prowler in AWS regions that are not enabled by default you need to use the argument `--sts-endpoint-region` to point the AWS STS API calls `assume-role` and `get-caller-identity` to the non-default region, e.g.: `prowler aws --sts-endpoint-region eu-south-2`.
## Role MFA
If your IAM Role has MFA configured you can use `--mfa` along with `-R`/`--role <role_arn>` and Prowler will ask you to input the following values to get a new temporary session for the IAM Role provided:
- ARN of your MFA device
- TOTP (Time-Based One-Time Password)
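For example, a sketch of such an invocation; Prowler will then prompt for the two values listed above:
```sh
prowler aws --mfa -R arn:aws:iam::<account_id>:role/<role_name>
```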
## Create Role
To create a role to be assumed in one or multiple accounts you can use the following [template](https://github.com/prowler-cloud/prowler/blob/master/permissions/create_role_to_assume_cfn.yaml), either as a CloudFormation Stack or StackSet, and adapt it.
> _NOTE 1 about Session Duration_: Depending on the mount of checks you run and the size of your infrastructure, Prowler may require more than 1 hour to finish. Use option `-T <seconds>` to allow up to 12h (43200 seconds). To allow more than 1h you need to modify _"Maximum CLI/API session duration"_ for that particular role, read more [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session).
> _NOTE 1 about Session Duration_: Depending on the number of checks you run and the size of your infrastructure, Prowler may require more than 1 hour to finish. Use the option `-T <seconds>` to allow up to 12h (43200 seconds). To allow more than 1h you need to modify the _"Maximum CLI/API session duration"_ for that particular role; read more [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session).
> _NOTE 2 about Session Duration_: Bear in mind that if you are using roles assumed by role chaining there is a hard limit of 1 hour, so consider not using role chaining if possible; read more about that in footnote 1 below the table [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html).

docs/tutorials/aws/s3.md
View File

@@ -0,0 +1,26 @@
# Send report to AWS S3 Bucket
To save your report in an S3 bucket, use `-B`/`--output-bucket`.
```sh
prowler <provider> -B my-bucket
```
If you want to use a custom folder and/or filename, use `-o`/`--output-directory` and/or `-F`/`--output-filename`.
```sh
prowler <provider> \
-B my-bucket \
--output-directory test-folder \
--output-filename output-filename
```
By default Prowler sends the HTML, JSON and CSV output formats; if you want to send a custom output format, or just one of the defaults, you can specify it with the `-M`/`--output-modes` flag.
```sh
prowler <provider> -M csv -B my-bucket
```
> In the case you do not want to use the assumed role credentials but the initial credentials to put the reports into the S3 bucket, use `-D`/`--output-bucket-no-assume` instead of `-B`/`--output-bucket`.
> Make sure that the used credentials have `s3:PutObject` permissions in the S3 path where the reports are going to be uploaded.

View File

@@ -13,26 +13,55 @@ Before sending findings to Prowler, you will need to perform next steps:
- Using the AWS Management Console:
![Screenshot 2020-10-29 at 10 26 02 PM](https://user-images.githubusercontent.com/3985464/97634660-5ade3400-1a36-11eb-9a92-4a45cc98c158.png)
3. Allow Prowler to import its findings to AWS Security Hub by adding the policy below to the role or user running Prowler:
- [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/iam/prowler-security-hub.json)
- [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json)
Once it is enabled, it is as simple as running the command below (for all regions):
```sh
./prowler aws -S
prowler aws -S
```
or for only one filtered region like eu-west-1:
```sh
./prowler -S -f eu-west-1
prowler -S -f eu-west-1
```
> **Note 1**: It is recommended to send only failed findings to Security Hub, which is possible by adding `-q` to the command.
> **Note 2**: Since Prowler perform checks to all regions by defauls you may need to filter by region when runing Security Hub integration, as shown in the example above. Remember to enable Security Hub in the region or regions you need by calling `aws securityhub enable-security-hub --region <region>` and run Prowler with the option `-f <region>` (if no region is used it will try to push findings in all regions hubs).
> **Note 2**: Since Prowler performs checks in all regions by default, you may need to filter by region when running the Security Hub integration, as shown in the example above. Remember to enable Security Hub in the region or regions you need by calling `aws securityhub enable-security-hub --region <region>` and run Prowler with the option `-f <region>` (if no region is given it will try to push findings to the hubs in all regions). Prowler will send findings to the Security Hub of the region where the scanned resource is located.
> **Note 3** to have updated findings in Security Hub you have to run Prowler periodically. Once a day or every certain amount of hours.
> **Note 3**: To have updated findings in Security Hub you have to run Prowler periodically, e.g., once a day or every few hours.
Once you run Prowler for the first time you will be able to see its findings in the Security Hub Findings section:
![Screenshot 2020-10-29 at 10 29 05 PM](https://user-images.githubusercontent.com/3985464/97634676-66c9f600-1a36-11eb-9341-70feb06f6331.png)
## Send findings to Security Hub assuming an IAM Role
When you are auditing a multi-account AWS environment, you can send findings to a Security Hub of another account by assuming an IAM role from that account using the `-R` flag in the Prowler command:
```sh
prowler -S -R arn:aws:iam::123456789012:role/ProwlerExecRole
```
> Remember that the used role needs to have permissions to send findings to Security Hub. To get more information about the permissions required, please refer to the following IAM policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json)
## Send only failed findings to Security Hub
When using Security Hub it is recommended to send only the failed findings generated. To follow that recommendation you could add the `-q` flag to the Prowler command:
```sh
prowler -S -q
```
## Skip sending updates of findings to Security Hub
By default, Prowler archives all its findings in Security Hub that have not appeared in the last scan.
You can skip this logic by using the option `--skip-sh-update` so Prowler will not archive older findings:
```sh
prowler -S --skip-sh-update
```
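Putting several of these options together, a combined run might look like the following sketch (region, role ARN and flags taken from the examples above):
```sh
prowler aws -S -q -f eu-west-1 -R arn:aws:iam::123456789012:role/ProwlerExecRole
```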

View File

@@ -0,0 +1,9 @@
# Tags-based Scan
Prowler allows you to scan only the resources that contain specific tags. This can be done with the flag `--resource-tags` followed by the tags `Key=Value` separated by spaces:
```
prowler aws --resource-tags Environment=dev Project=prowler
```
This example will only scan the resources that contain both tags.

View File

@@ -0,0 +1,255 @@
# Check mapping between Prowler v3 and v2
Prowler v3 comes with different check identifiers but we maintained the same checks that were implemented in v2. The reason for this change is that in previous versions of Prowler, check names were mostly based on the CIS Benchmark for AWS. In v3 all checks are independent from any security framework and each has its own name and ID.
If you need more information about how new compliance implementation works in Prowler v3 see [Compliance](../compliance.md) section.
```
checks_v3_to_v2_mapping = {
"accessanalyzer_enabled_without_findings": "extra769",
"account_maintain_current_contact_details": "check117",
"account_security_contact_information_is_registered": "check118",
"account_security_questions_are_registered_in_the_aws_account": "check115",
"acm_certificates_expiration_check": "extra730",
"acm_certificates_transparency_logs_enabled": "extra724",
"apigateway_authorizers_enabled": "extra746",
"apigateway_client_certificate_enabled": "extra743",
"apigateway_endpoint_public": "extra745",
"apigateway_logging_enabled": "extra722",
"apigateway_waf_acl_attached": "extra744",
"apigatewayv2_access_logging_enabled": "extra7156",
"apigatewayv2_authorizers_enabled": "extra7157",
"appstream_fleet_default_internet_access_disabled": "extra7193",
"appstream_fleet_maximum_session_duration": "extra7190",
"appstream_fleet_session_disconnect_timeout": "extra7191",
"appstream_fleet_session_idle_disconnect_timeout": "extra7192",
"autoscaling_find_secrets_ec2_launch_configuration": "extra775",
"awslambda_function_invoke_api_operations_cloudtrail_logging_enabled": "extra720",
"awslambda_function_no_secrets_in_code": "extra760",
"awslambda_function_no_secrets_in_variables": "extra759",
"awslambda_function_not_publicly_accessible": "extra798",
"awslambda_function_url_cors_policy": "extra7180",
"awslambda_function_url_public": "extra7179",
"awslambda_function_using_supported_runtimes": "extra762",
"cloudformation_stack_outputs_find_secrets": "extra742",
"cloudformation_stacks_termination_protection_enabled": "extra7154",
"cloudfront_distributions_field_level_encryption_enabled": "extra767",
"cloudfront_distributions_geo_restrictions_enabled": "extra732",
"cloudfront_distributions_https_enabled": "extra738",
"cloudfront_distributions_logging_enabled": "extra714",
"cloudfront_distributions_using_deprecated_ssl_protocols": "extra791",
"cloudfront_distributions_using_waf": "extra773",
"cloudtrail_cloudwatch_logging_enabled": "check24",
"cloudtrail_kms_encryption_enabled": "check27",
"cloudtrail_log_file_validation_enabled": "check22",
"cloudtrail_logs_s3_bucket_access_logging_enabled": "check26",
"cloudtrail_logs_s3_bucket_is_not_publicly_accessible": "check23",
"cloudtrail_multi_region_enabled": "check21",
"cloudtrail_s3_dataevents_read_enabled": "extra7196",
"cloudtrail_s3_dataevents_write_enabled": "extra725",
"cloudwatch_changes_to_network_acls_alarm_configured": "check311",
"cloudwatch_changes_to_network_gateways_alarm_configured": "check312",
"cloudwatch_changes_to_network_route_tables_alarm_configured": "check313",
"cloudwatch_changes_to_vpcs_alarm_configured": "check314",
"cloudwatch_cross_account_sharing_disabled": "extra7144",
"cloudwatch_log_group_kms_encryption_enabled": "extra7164",
"cloudwatch_log_group_retention_policy_specific_days_enabled": "extra7162",
"cloudwatch_log_metric_filter_and_alarm_for_aws_config_configuration_changes_enabled": "check39",
"cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_changes_enabled": "check35",
"cloudwatch_log_metric_filter_authentication_failures": "check36",
"cloudwatch_log_metric_filter_aws_organizations_changes": "extra7197",
"cloudwatch_log_metric_filter_disable_or_scheduled_deletion_of_kms_cmk": "check37",
"cloudwatch_log_metric_filter_for_s3_bucket_policy_changes": "check38",
"cloudwatch_log_metric_filter_policy_changes": "check34",
"cloudwatch_log_metric_filter_root_usage": "check33",
"cloudwatch_log_metric_filter_security_group_changes": "check310",
"cloudwatch_log_metric_filter_sign_in_without_mfa": "check32",
"cloudwatch_log_metric_filter_unauthorized_api_calls": "check31",
"codeartifact_packages_external_public_publishing_disabled": "extra7195",
"codebuild_project_older_90_days": "extra7174",
"codebuild_project_user_controlled_buildspec": "extra7175",
"config_recorder_all_regions_enabled": "check25",
"directoryservice_directory_log_forwarding_enabled": "extra7181",
"directoryservice_directory_monitor_notifications": "extra7182",
"directoryservice_directory_snapshots_limit": "extra7184",
"directoryservice_ldap_certificate_expiration": "extra7183",
"directoryservice_radius_server_security_protocol": "extra7188",
"directoryservice_supported_mfa_radius_enabled": "extra7189",
"dynamodb_accelerator_cluster_encryption_enabled": "extra7165",
"dynamodb_tables_kms_cmk_encryption_enabled": "extra7128",
"dynamodb_tables_pitr_enabled": "extra7151",
"ec2_ami_public": "extra76",
"ec2_ebs_default_encryption": "extra761",
"ec2_ebs_public_snapshot": "extra72",
"ec2_ebs_snapshots_encrypted": "extra740",
"ec2_ebs_volume_encryption": "extra729",
"ec2_elastic_ip_shodan": "extra7102",
"ec2_elastic_ip_unassgined": "extra7146",
"ec2_instance_imdsv2_enabled": "extra786",
"ec2_instance_internet_facing_with_instance_profile": "extra770",
"ec2_instance_managed_by_ssm": "extra7124",
"ec2_instance_older_than_specific_days": "extra758",
"ec2_instance_profile_attached": "check119",
"ec2_instance_public_ip": "extra710",
"ec2_instance_secrets_user_data": "extra741",
"ec2_networkacl_allow_ingress_any_port": "extra7138",
"ec2_networkacl_allow_ingress_tcp_port_22": "check45",
"ec2_networkacl_allow_ingress_tcp_port_3389": "check46",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port": "extra748",
"ec2_securitygroup_allow_ingress_from_internet_to_port_mongodb_27017_27018": "extra753",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_ftp_port_20_21": "extra7134",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22": "check41",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389": "check42",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_cassandra_7199_9160_8888": "extra754",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_elasticsearch_kibana_9200_9300_5601": "extra779",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_kafka_9092": "extra7135",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_memcached_11211": "extra755",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_mysql_3306": "extra750",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_oracle_1521_2483": "extra749",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_postgres_5432": "extra751",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_redis_6379": "extra752",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_sql_server_1433_1434": "extra7137",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_telnet_23": "extra7136",
"ec2_securitygroup_allow_wide_open_public_ipv4": "extra778",
"ec2_securitygroup_default_restrict_traffic": "check43",
"ec2_securitygroup_from_launch_wizard": "extra7173",
"ec2_securitygroup_not_used": "extra75",
"ec2_securitygroup_with_many_ingress_egress_rules": "extra777",
"ecr_repositories_lifecycle_policy_enabled": "extra7194",
"ecr_repositories_not_publicly_accessible": "extra77",
"ecr_repositories_scan_images_on_push_enabled": "extra765",
"ecr_repositories_scan_vulnerabilities_in_latest_image": "extra776",
"ecs_task_definitions_no_environment_secrets": "extra768",
"efs_encryption_at_rest_enabled": "extra7161",
"efs_have_backup_enabled": "extra7148",
"efs_not_publicly_accessible": "extra7143",
"eks_cluster_kms_cmk_encryption_in_secrets_enabled": "extra797",
"eks_control_plane_endpoint_access_restricted": "extra796",
"eks_control_plane_logging_all_types_enabled": "extra794",
"eks_endpoints_not_publicly_accessible": "extra795",
"elb_insecure_ssl_ciphers": "extra792",
"elb_internet_facing": "extra79",
"elb_logging_enabled": "extra717",
"elb_ssl_listeners": "extra793",
"elbv2_deletion_protection": "extra7150",
"elbv2_desync_mitigation_mode": "extra7155",
"elbv2_insecure_ssl_ciphers": "extra792",
"elbv2_internet_facing": "extra79",
"elbv2_listeners_underneath": "extra7158",
"elbv2_logging_enabled": "extra717",
"elbv2_ssl_listeners": "extra793",
"elbv2_waf_acl_attached": "extra7129",
"emr_cluster_account_public_block_enabled": "extra7178",
"emr_cluster_master_nodes_no_public_ip": "extra7176",
"emr_cluster_publicly_accesible": "extra7177",
"glacier_vaults_policy_public_access": "extra7147",
"glue_data_catalogs_connection_passwords_encryption_enabled": "extra7117",
"glue_data_catalogs_metadata_encryption_enabled": "extra7116",
"glue_database_connections_ssl_enabled": "extra7115",
"glue_development_endpoints_cloudwatch_logs_encryption_enabled": "extra7119",
"glue_development_endpoints_job_bookmark_encryption_enabled": "extra7121",
"glue_development_endpoints_s3_encryption_enabled": "extra7114",
"glue_etl_jobs_amazon_s3_encryption_enabled": "extra7118",
"glue_etl_jobs_cloudwatch_logs_encryption_enabled": "extra7120",
"glue_etl_jobs_job_bookmark_encryption_enabled": "extra7122",
"guardduty_is_enabled": "extra713",
"guardduty_no_high_severity_findings": "extra7139",
"iam_administrator_access_with_mfa": "extra71",
"iam_avoid_root_usage": "check11",
"iam_check_saml_providers_sts": "extra733",
"iam_disable_30_days_credentials": "extra774",
"iam_disable_45_days_credentials": "extra7198",
"iam_disable_90_days_credentials": "check13",
"iam_no_custom_policy_permissive_role_assumption": "extra7100",
"iam_no_expired_server_certificates_stored": "extra7199",
"iam_no_root_access_key": "check112",
"iam_password_policy_expires_passwords_within_90_days_or_less": "check111",
"iam_password_policy_lowercase": "check16",
"iam_password_policy_minimum_length_14": "check19",
"iam_password_policy_number": "check18",
"iam_password_policy_reuse_24": "check110",
"iam_password_policy_symbol": "check17",
"iam_password_policy_uppercase": "check15",
"iam_policy_allows_privilege_escalation": "extra7185",
"iam_policy_attached_only_to_group_or_roles": "check116",
"iam_policy_no_administrative_privileges": "check122",
"iam_root_hardware_mfa_enabled": "check114",
"iam_root_mfa_enabled": "check113",
"iam_rotate_access_key_90_days": "check14",
"iam_support_role_created": "check120",
"iam_user_hardware_mfa_enabled": "extra7125",
"iam_user_mfa_enabled_console_access": "check12",
"iam_user_no_setup_initial_access_key": "check121",
"iam_user_two_active_access_key": "extra7123",
"iam_role_cross_service_confused_deputy_prevention": "extra7201",
"kms_cmk_are_used": "extra7126",
"kms_cmk_rotation_enabled": "check28",
"kms_key_not_publicly_accessible": "extra736",
"macie_is_enabled": "extra712",
"opensearch_service_domains_audit_logging_enabled": "extra7101",
"opensearch_service_domains_cloudwatch_logging_enabled": "extra715",
"opensearch_service_domains_encryption_at_rest_enabled": "extra781",
"opensearch_service_domains_https_communications_enforced": "extra783",
"opensearch_service_domains_internal_user_database_enabled": "extra784",
"opensearch_service_domains_node_to_node_encryption_enabled": "extra782",
"opensearch_service_domains_not_publicly_accessible": "extra716",
"opensearch_service_domains_updated_to_the_latest_service_software_version": "extra785",
"opensearch_service_domains_use_cognito_authentication_for_kibana": "extra780",
"rds_instance_backup_enabled": "extra739",
"rds_instance_deletion_protection": "extra7113",
"rds_instance_enhanced_monitoring_enabled": "extra7132",
"rds_instance_integration_cloudwatch_logs": "extra747",
"rds_instance_minor_version_upgrade_enabled": "extra7131",
"rds_instance_multi_az": "extra7133",
"rds_instance_no_public_access": "extra78",
"rds_instance_storage_encrypted": "extra735",
"rds_snapshots_public_access": "extra723",
"redshift_cluster_audit_logging": "extra721",
"redshift_cluster_automated_snapshot": "extra7149",
"redshift_cluster_automatic_upgrades": "extra7160",
"redshift_cluster_public_access": "extra711",
"route53_domains_privacy_protection_enabled": "extra7152",
"route53_domains_transferlock_enabled": "extra7153",
"route53_public_hosted_zones_cloudwatch_logging_enabled": "extra719",
"s3_account_level_public_access_blocks": "extra7186",
"s3_bucket_acl_prohibited": "extra7172",
"s3_bucket_default_encryption": "extra734",
"s3_bucket_no_mfa_delete": "extra7200",
"s3_bucket_object_versioning": "extra763",
"s3_bucket_policy_public_write_access": "extra771",
"s3_bucket_public_access": "extra73",
"s3_bucket_secure_transport_policy": "extra764",
"s3_bucket_server_access_logging_enabled": "extra718",
"sagemaker_models_network_isolation_enabled": "extra7105",
"sagemaker_models_vpc_settings_configured": "extra7106",
"sagemaker_notebook_instance_encryption_enabled": "extra7112",
"sagemaker_notebook_instance_root_access_disabled": "extra7103",
"sagemaker_notebook_instance_vpc_settings_configured": "extra7104",
"sagemaker_notebook_instance_without_direct_internet_access_configured": "extra7111",
"sagemaker_training_jobs_intercontainer_encryption_enabled": "extra7107",
"sagemaker_training_jobs_network_isolation_enabled": "extra7109",
"sagemaker_training_jobs_volume_and_output_encryption_enabled": "extra7108",
"sagemaker_training_jobs_vpc_settings_configured": "extra7110",
"secretsmanager_automatic_rotation_enabled": "extra7163",
"securityhub_enabled": "extra799",
"shield_advanced_protection_in_associated_elastic_ips": "extra7166",
"shield_advanced_protection_in_classic_load_balancers": "extra7171",
"shield_advanced_protection_in_cloudfront_distributions": "extra7167",
"shield_advanced_protection_in_global_accelerators": "extra7169",
"shield_advanced_protection_in_internet_facing_load_balancers": "extra7170",
"shield_advanced_protection_in_route53_hosted_zones": "extra7168",
"sns_topics_kms_encryption_at_rest_enabled": "extra7130",
"sns_topics_not_publicly_accessible": "extra731",
"sqs_queues_not_publicly_accessible": "extra727",
"sqs_queues_server_side_encryption_enabled": "extra728",
"ssm_document_secrets": "extra7141",
"ssm_documents_set_as_public": "extra7140",
"ssm_managed_compliant_patching": "extra7127",
"trustedadvisor_errors_and_warnings": "extra726",
"vpc_endpoint_connections_trust_boundaries": "extra789",
"vpc_endpoint_services_allowed_principals_trust_boundaries": "extra790",
"vpc_flow_logs_enabled": "check29",
"vpc_peering_routing_tables_with_least_privilege": "check44",
"workspaces_volume_encryption_enabled": "extra7187",
}
```

View File

@@ -18,10 +18,10 @@ prowler azure --sp-env-auth
prowler azure --az-cli-auth
# To use browser authentication
prowler azure --browser-auth
prowler azure --browser-auth --tenant-id "XXXXXXXX"
# To use managed identity auth
prowler azure --managed-identity-auth
```
To use Prowler you need to set up also the permissions required to access your resources in your Azure account, to more details refer to [Requirements](/getting-started/requirements)
To use Prowler you also need to set up the permissions required to access your resources in your Azure account; for more details refer to [Requirements](../../getting-started/requirements.md)

View File

@@ -1,10 +1,10 @@
# Azure subscriptions scope
By default Prowler is multisubscription, which means that is going to scan all the subscriptions is able to list. If you only assign permissions to one subscription it is going to scan a single one.
By default, Prowler is multisubscription, which means that it is going to scan all the subscriptions it is able to list. If you only assign permissions to one subscription, it is going to scan a single one.
Prowler also has the ability to limit the subscriptions to scan to a set passed as input argument, to do so:
```console
prowler azure --az-cli-auth --subscription-ids <subscription ID 1> <subscription ID 2> ... <subscription ID N>
```
Where you can pass from 1 up to N subscriptions to be scanned.
Where you can pass from 1 up to N subscriptions to be scanned.

View File

@@ -11,6 +11,25 @@ Currently, the available frameworks are:
- `cis_1.4_aws`
- `cis_1.5_aws`
- `ens_rd2022_aws`
- `aws_audit_manager_control_tower_guardrails_aws`
- `aws_foundational_security_best_practices_aws`
- `aws_well_architected_framework_security_pillar_aws`
- `cisa_aws`
- `fedramp_low_revision_4_aws`
- `fedramp_moderate_revision_4_aws`
- `ffiec_aws`
- `gdpr_aws`
- `gxp_eu_annex_11_aws`
- `gxp_21_cfr_part_11_aws`
- `hipaa_aws`
- `nist_800_53_revision_4_aws`
- `nist_800_53_revision_5_aws`
- `nist_800_171_revision_2_aws`
- `nist_csf_1.1_aws`
- `pci_3.2.1_aws`
- `rbi_cyber_security_framework_aws`
- `soc2_aws`
## List Requirements of Compliance Frameworks
For each compliance framework, you can use option `--list-compliance-requirements` to list its requirements:
@@ -59,40 +78,8 @@ prowler <provider> --compliance <compliance_framework>
```
Standard results will be shown and, additionally, the framework information, as in the sample below for CIS AWS 1.5. For details, a CSV file has been generated as well.
<img src="/img/compliance-cis-sample1.png"/>
<img src="../img/compliance-cis-sample1.png"/>
## Create and contribute adding other Security Frameworks
If you want to create or contribute your own security frameworks, or add public ones to Prowler, you need to make sure the checks are available; if not, you have to create your own. Then create a compliance file per provider, like in `prowler/compliance/aws/`, name it `<framework>_<version>_<provider>.json` and follow the format below to create yours.
Each version of a framework file will have the following high-level structure (so that each framework can be generally identified). One requirement can also be called a control, and one requirement can be linked to multiple Prowler checks:
- `Framework`: string. Distinguishing name of the framework, like CIS
- `Provider`: string. Provider where the framework applies, such as AWS, Azure, OCI,...
- `Version`: string. Version of the framework itself, like 1.4 for CIS.
- `Requirements`: array of objects. Include all requirements or controls with the mapping to Prowler.
- `Requirements_Id`: string. Unique identifier for each requirement in the specific framework
- `Requirements_Description`: string. Description as in the framework.
- `Requirements_Attributes`: array of objects. Includes all needed attributes per each requirement, like levels, sections, etc. Whatever helps to create a dedicated report with the result of the findings. Attributes would be taken as closely as possible from the framework's own terminology directly.
- `Requirements_Checks`: array. Prowler checks that are needed to prove this requirement. It can be one or multiple checks. In case of no automation possible this can be empty.
```
{
"Framework": "<framework>-<provider>",
"Version": "<version>",
"Requirements": [
{
"Id": "<unique-id>",
"Description": "Requiemente full description",
"Checks": [
"Here is the prowler check or checks that is going to be executed"
],
"Attributes": [
{
<Add here your custom attributes.>
}
]
}
]
}
```
Finally, to have a proper output file for your reports, your framework data model has to be created in `prowler/lib/outputs/models.py` and also the CLI table output in `prowler/lib/outputs/compliance.py`.
This information is part of the Developer Guide and can be found here: https://docs.prowler.cloud/en/latest/tutorials/developer-guide/.

View File

@@ -1,76 +1,111 @@
# Configuration File
Several Prowler's checks have user configurable variables that can be modified in a common **configuration file**.
This file can be found in the following path:
Several of Prowler's checks have user-configurable variables that can be modified in a common **configuration file**. This file can be found in the following [path](https://github.com/prowler-cloud/prowler/blob/master/prowler/config/config.yaml):
```
prowler/config/config.yaml
```
## Configurable Checks
The following list includes all the checks with configurable variables that can be changed in the mentioned configuration yaml file:
You can also provide a custom configuration file using the `--config-file` argument.
1. aws.ec2_elastic_ip_shodan
- shodan_api_key (String)
- aws.ec2_securitygroup_with_many_ingress_egress_rules
- max_security_group_rules (Integer)
- aws.ec2_instance_older_than_specific_days
- max_ec2_instance_age_in_days (Integer)
- aws.vpc_endpoint_connections_trust_boundaries
- trusted_account_ids (List of Strings)
- aws.vpc_endpoint_services_allowed_principals_trust_boundaries
- trusted_account_ids (List of Strings)
- aws.cloudwatch_log_group_retention_policy_specific_days_enabled
- log_group_retention_days (Integer)
- aws.appstream_fleet_session_idle_disconnect_timeout
- max_idle_disconnect_timeout_in_seconds (Integer)
- aws.appstream_fleet_session_disconnect_timeout
- max_disconnect_timeout_in_seconds (Integer)
- aws.appstream_fleet_maximum_session_duration
- max_session_duration_seconds (Integer)
- aws.awslambda_function_using_supported_runtimes
- obsolete_lambda_runtimes (List of Strings)
## AWS
## Config Yaml File
### Configurable Checks
The following list includes all the AWS checks with configurable variables that can be changed in the configuration yaml file:
# AWS EC2 Configuration
# aws.ec2_elastic_ip_shodan
shodan_api_key: null
# aws.ec2_securitygroup_with_many_ingress_egress_rules --> by default is 50 rules
max_security_group_rules: 50
# aws.ec2_instance_older_than_specific_days --> by default is 6 months (180 days)
max_ec2_instance_age_in_days: 180
| Check Name | Value | Type |
|---|---|---|
| `ec2_elastic_ip_shodan` | `shodan_api_key` | String |
| `ec2_securitygroup_with_many_ingress_egress_rules` | `max_security_group_rules` | Integer |
| `ec2_instance_older_than_specific_days` | `max_ec2_instance_age_in_days` | Integer |
| `vpc_endpoint_connections_trust_boundaries` | `trusted_account_ids` | List of Strings |
| `vpc_endpoint_services_allowed_principals_trust_boundaries` | `trusted_account_ids` | List of Strings |
| `cloudwatch_log_group_retention_policy_specific_days_enabled` | `log_group_retention_days` | Integer |
| `appstream_fleet_session_idle_disconnect_timeout` | `max_idle_disconnect_timeout_in_seconds` | Integer |
| `appstream_fleet_session_disconnect_timeout` | `max_disconnect_timeout_in_seconds` | Integer |
| `appstream_fleet_maximum_session_duration` | `max_session_duration_seconds` | Integer |
| `awslambda_function_using_supported_runtimes` | `obsolete_lambda_runtimes` | List of Strings |
| `organizations_scp_check_deny_regions` | `organizations_enabled_regions` | List of Strings |
| `organizations_delegated_administrators` | `organizations_trusted_delegated_administrators` | List of Strings |
| `ecr_repositories_scan_vulnerabilities_in_latest_image` | `ecr_repository_vulnerability_minimum_severity` | String |
# AWS VPC Configuration (vpc_endpoint_connections_trust_boundaries, vpc_endpoint_services_allowed_principals_trust_boundaries)
# Single account environment: No action required. The AWS account number will be automatically added by the checks.
# Multi account environment: Any additional trusted account number should be added as a space separated list, e.g.
# trusted_account_ids : ["123456789012", "098765432109", "678901234567"]
trusted_account_ids: []
## Azure
# AWS Cloudwatch Configuration
# aws.cloudwatch_log_group_retention_policy_specific_days_enabled --> by default is 365 days
log_group_retention_days: 365
### Configurable Checks
# AWS AppStream Session Configuration
# aws.appstream_fleet_session_idle_disconnect_timeout
max_idle_disconnect_timeout_in_seconds: 600 # 10 Minutes
# aws.appstream_fleet_session_disconnect_timeout
max_disconnect_timeout_in_seconds: 300 # 5 Minutes
# aws.appstream_fleet_maximum_session_duration
max_session_duration_seconds: 36000 # 10 Hours
## GCP
# AWS Lambda Configuration
# aws.awslambda_function_using_supported_runtimes
obsolete_lambda_runtimes:
### Configurable Checks
## Config YAML File Structure
> This is the new Prowler configuration file format. The old one, without provider keys, is still compatible but only for the AWS provider.
```yaml title="config.yaml"
# AWS Configuration
aws:
# AWS EC2 Configuration
# aws.ec2_elastic_ip_shodan
shodan_api_key: null
# aws.ec2_securitygroup_with_many_ingress_egress_rules --> by default is 50 rules
max_security_group_rules: 50
# aws.ec2_instance_older_than_specific_days --> by default is 6 months (180 days)
max_ec2_instance_age_in_days: 180
# AWS VPC Configuration (vpc_endpoint_connections_trust_boundaries, vpc_endpoint_services_allowed_principals_trust_boundaries)
# Single account environment: No action required. The AWS account number will be automatically added by the checks.
# Multi account environment: Any additional trusted account number should be added as a space separated list, e.g.
# trusted_account_ids : ["123456789012", "098765432109", "678901234567"]
trusted_account_ids: []
# AWS Cloudwatch Configuration
# aws.cloudwatch_log_group_retention_policy_specific_days_enabled --> by default is 365 days
log_group_retention_days: 365
# AWS AppStream Session Configuration
# aws.appstream_fleet_session_idle_disconnect_timeout
max_idle_disconnect_timeout_in_seconds: 600 # 10 Minutes
# aws.appstream_fleet_session_disconnect_timeout
max_disconnect_timeout_in_seconds: 300 # 5 Minutes
# aws.appstream_fleet_maximum_session_duration
max_session_duration_seconds: 36000 # 10 Hours
# AWS Lambda Configuration
# aws.awslambda_function_using_supported_runtimes
obsolete_lambda_runtimes:
[
"python3.6",
"python2.7",
"nodejs4.3",
"nodejs4.3-edge",
"nodejs6.10",
"nodejs",
"nodejs8.10",
"nodejs10.x",
"dotnetcore1.0",
"dotnetcore2.0",
"dotnetcore2.1",
"ruby2.5",
"python3.6",
"python2.7",
"nodejs4.3",
"nodejs4.3-edge",
"nodejs6.10",
"nodejs",
"nodejs8.10",
"nodejs10.x",
"dotnetcore1.0",
"dotnetcore2.0",
"dotnetcore2.1",
"ruby2.5",
]
# AWS Organizations
# organizations_scp_check_deny_regions
# organizations_enabled_regions: [
# 'eu-central-1',
# 'eu-west-1',
# "us-east-1"
# ]
organizations_enabled_regions: []
organizations_trusted_delegated_administrators: []
# AWS ECR
# ecr_repositories_scan_vulnerabilities_in_latest_image
# CRITICAL
# HIGH
# MEDIUM
ecr_repository_vulnerability_minimum_severity: "MEDIUM"
# Azure Configuration
azure:
# GCP Configuration
gcp:
```
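To point Prowler at a customized copy of this file, use the `--config-file` argument mentioned above; a minimal sketch (the path is a placeholder):
```console
prowler aws --config-file /path/to/custom-config.yaml
```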

View File

@@ -0,0 +1,29 @@
# GCP authentication
By default, Prowler will use your User Account credentials. You can configure them using:
- `gcloud init` to use a new account
- `gcloud config set account <account>` to use an existing account
Then, obtain your access credentials using: `gcloud auth application-default login`
Otherwise, you can generate and download Service Account keys in JSON format (refer to https://cloud.google.com/iam/docs/creating-managing-service-account-keys) and provide the location of the file with the following argument:
```console
prowler gcp --credentials-file path
```
> `prowler` will scan the GCP project associated with the credentials.
Prowler will follow the same credentials search order as the [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order):
1. [GOOGLE_APPLICATION_CREDENTIALS environment variable](https://cloud.google.com/docs/authentication/application-default-credentials#GAC)
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
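As an illustration of option 1 in that search order, a minimal sketch (the key path is a placeholder):
```console
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
prowler gcp
```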
Those credentials must be associated with a user or service account that has proper permissions to run all checks. To make sure of that, add the following roles to the member associated with the credentials:
- Viewer
- Security Reviewer
- Stackdriver Account Viewer

View File

@@ -0,0 +1,36 @@
# Integrations
## Slack
Prowler can be integrated with [Slack](https://slack.com/) to send a summary of the execution, once a Slack App has been configured in your channel, by running the following command:
```sh
prowler <provider> --slack
```
![Prowler Slack Message](img/slack-prowler-message.png)
> The Slack integration needs the `SLACK_API_TOKEN` and `SLACK_CHANNEL_ID` environment variables.
### Configuration
To configure the Slack Integration, follow the next steps:
1. Create a Slack Application:
- Go to [Slack API page](https://api.slack.com/tutorials/tracks/getting-a-token), scroll down to the *Create app* button and select your workspace:
![Create Slack App](img/create-slack-app.png)
- Install the application in your selected workspaces:
![Install Slack App in Workspace](img/install-in-slack-workspace.png)
- Get the *Slack App OAuth Token* that Prowler needs to send the message:
![Slack App OAuth Token](img/slack-app-token.png)
2. Optionally, create a Slack Channel (you can use an existing one)
3. Integrate the created Slack App to your Slack channel:
- Click on the channel, go to the Integrations tab, and Add an App.
![Slack App Channel Integration](img/integrate-slack-app.png)
4. Set the following environment variables that Prowler will read:
- `SLACK_API_TOKEN`: the *Slack App OAuth Token* obtained previously.
- `SLACK_CHANNEL_ID`: the name of your Slack Channel where Prowler will send the message.
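Putting it together, a sketch of a run with the integration enabled (the token and channel values are placeholders):
```sh
export SLACK_API_TOKEN="xoxb-your-token"
export SLACK_CHANNEL_ID="your-channel"
prowler aws --slack
```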

View File

@@ -1,16 +1,16 @@
# Logging
Prowler has a logging feature to be as transparent as possible so you can see every action that is going on will the tool is been executing.
Prowler has a logging feature to be as transparent as possible, so that you can see every action that is being performed whilst the tool is executing.
## Set Log Level
## Set Log Level
There are different log levels depending on the logging information that is desired to be displayed:
- **DEBUG**: it will show low-level logs of Python.
- **INFO**: it will show all the API Calls that are being used in the provider.
- **WARNING**: it will show the resources that are being **allowlisted**.
- **ERROR**: it will show the errors, e.g., not authorized actions.
- **CRITICAL**: default log level, if a critical log appears, it will **exit** Prowlers execution.
- **DEBUG**: It will show low-level logs from Python.
- **INFO**: It will show all the API calls that are being invoked by the provider.
- **WARNING**: It will show all resources that are being **allowlisted**.
- **ERROR**: It will show any errors, e.g., not authorized actions.
- **CRITICAL**: The default log level. If a critical log appears, it will **exit** Prowler's execution.
You can establish the log level of Prowler with `--log-level` option:
@@ -20,9 +20,9 @@ prowler <provider> --log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
> By default, Prowler will run with the `CRITICAL` log level, since critical errors will abort the execution.
## Export Logs to File
## Export Logs to File
Prowler allows you to export the logs in json format with `--log-file` option:
Prowler allows you to export the logs in JSON format with the `--log-file` option:
```console
prowler <provider> --log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL} --log-file <file_name>.json
@@ -45,4 +45,4 @@ An example of a log file will be the following:
"message": "eu-west-2 -- ClientError[124]: An error occurred (UnauthorizedOperation) when calling the DescribeNetworkAcls operation: You are not authorized to perform this operation."
}
> NOTE: Each finding is a `json` object.
> NOTE: Each finding is represented as a `json` object.

View File

@@ -51,15 +51,36 @@ prowler <provider> -e/--excluded-checks ec2 rds
```console
prowler <provider> -C/--checks-file <checks_list>.json
```
## Severities
Each check of Prowler has a severity, there are options related with it:
- List the available checks in the provider:
## Custom Checks
Prowler allows you to include your custom checks with the flag:
```console
prowler <provider> --list-severities
prowler <provider> -x/--checks-folder <custom_checks_folder>
```
- Execute specific severity(s):
> S3 URIs are also supported as folders for custom checks, e.g. s3://bucket/prefix/checks_folder/. Make sure that the used credentials have s3:GetObject permissions in the S3 path where the custom checks are located.
The custom checks folder must contain one subfolder per check; each subfolder must be named after the check and must contain:
- An empty `__init__.py`: to make Python treat this check folder as a package.
- A `check_name.py` containing the check's logic.
- A `check_name.metadata.json` containing the check's metadata.
> The check name must start with the service name followed by an underscore (e.g., `ec2_instance_public_ip`).
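A hypothetical layout for a single custom check, using the example name above:
```
<custom_checks_folder>/
└── ec2_instance_public_ip/
    ├── __init__.py
    ├── ec2_instance_public_ip.py
    └── ec2_instance_public_ip.metadata.json
```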
To see more information about how to write checks see the [Developer Guide](../developer-guide/checks.md#create-a-new-check-for-a-provider).
> If you want to run ONLY your custom check(s), import them with `-x` (`--checks-folder`) and then run them with `-c` (`--checks`), e.g.:
```console
prowler aws -x s3://bucket/prowler/providers/aws/services/s3/s3_bucket_policy/ -c s3_bucket_policy
```
## Severities
Each of Prowler's checks has a severity, which can be:
- informational
- low
- medium
- high
- critical
To execute checks with specific severities:
```console
prowler <provider> --severity critical high
```

View File

@@ -6,12 +6,12 @@ Prowler has some checks that analyse pentesting risks (Secrets, Internet Exposed
Prowler uses the `detect-secrets` library to search for any secrets that are stored in plaintext within your environment.
The actual checks that have this funcionality are:
The checks that currently have this functionality are:
1. autoscaling_find_secrets_ec2_launch_configuration
- awslambda_function_no_secrets_in_code
- awslambda_function_no_secrets_in_variables
- cloudformation_outputs_find_secrets
- cloudformation_stack_outputs_find_secrets
- ec2_instance_secrets_user_data
- ecs_task_definitions_no_environment_secrets
- ssm_document_secrets
@@ -33,9 +33,8 @@ Several checks analyse resources that are exposed to the Internet, these are:
- ec2_instance_internet_facing_with_instance_profile
- ec2_instance_public_ip
- ec2_networkacl_allow_ingress_any_port
- ec2_securitygroup_allow_ingress_from_internet_to_any_port
- ec2_securitygroup_allow_wide_open_public_ipv4
- ec2_securitygroup_in_use_without_ingress_filtering
- ec2_securitygroup_allow_ingress_from_internet_to_any_port
- ecr_repositories_not_publicly_accessible
- eks_control_plane_endpoint_access_restricted
- eks_endpoints_not_publicly_accessible

View File

@@ -14,4 +14,7 @@ prowler <provider> -i
- Also, by default it creates a CSV and a JSON file with detailed information about the extracted resources.
![Quick Inventory Example](../img/quick-inventory.png)
![Quick Inventory Example](../img/quick-inventory.jpg)
## Objections
The inventory process is done with `resourcegroupstaggingapi` calls, which means that only resources that have or have had tags will appear (except for the IAM and S3 resources, which are retrieved with Boto3 API calls).

View File

@@ -1,9 +1,9 @@
# Reporting
By default, Prowler will generate a CSV, JSON and a HTML report, however you could generate a JSON-ASFF (used by AWS Security Hub) report with `-M` or `--output-modes`:
By default, Prowler will generate a CSV, JSON, JSON-OCSF and an HTML report; however, you can also generate a JSON-ASFF report (used by AWS Security Hub) with `-M` or `--output-modes`:
```console
prowler <provider> -M csv json json-asff html
prowler <provider> -M csv json json-ocsf json-asff html
```
## Custom Output Flags
@@ -19,21 +19,11 @@ prowler <provider> -M csv json json-asff html -F <custom_report_name>
```console
prowler <provider> -M csv json json-asff html -o <custom_report_directory>
```
> Both flags can be used simultainously to provide a custom directory and filename.
> Both flags can be used simultaneously to provide a custom directory and filename.
```console
prowler <provider> -M csv json json-asff html -F <custom_report_name> -o <custom_report_directory>
prowler <provider> -M csv json json-asff html \
-F <custom_report_name> -o <custom_report_directory>
```
## Send report to AWS S3 Bucket
To save your report in an S3 bucket, use `-B`/`--output-bucket` to define a custom output bucket along with `-M` to define the output format that is going to be uploaded to S3:
```sh
prowler <provider> -M csv -B my-bucket/folder/
```
> In the case you do not want to use the assumed role credentials but the initial credentials to put the reports into the S3 bucket, use `-D`/`--output-bucket-no-assume` instead of `-B`/`--output-bucket.
> Make sure that the used credentials have s3:PutObject permissions in the S3 path where the reports are going to be uploaded.
## Output Formats
@@ -41,18 +31,63 @@ Prowler supports natively the following output formats:
- CSV
- JSON
- JSON-OCSF
- JSON-ASFF
- HTML
Below is the structure of each report format supported by Prowler:
### HTML
![HTML Output](../img/output-html.png)
### CSV
| ASSESSMENT_START_TIME | FINDING_UNIQUE_ID | PROVIDER | PROFILE | ACCOUNT_ID | ACCOUNT_NAME | ACCOUNT_EMAIL | ACCOUNT_ARN | ACCOUNT_ORG | ACCOUNT_TAGS | REGION | CHECK_ID | CHECK_TITLE | CHECK_TYPE | STATUS | STATUS_EXTENDED | SERVICE_NAME | SUBSERVICE_NAME | SEVERITY | RESOURCE_ID | RESOURCE_ARN | RESOURCE_TYPE | RESOURCE_DETAILS | RESOURCE_TAGS | DESCRIPTION | RISK | RELATED_URL | REMEDIATION_RECOMMENDATION_TEXT | REMEDIATION_RECOMMENDATION_URL | REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC | REMEDIATION_RECOMMENDATION_CODE_TERRAFORM | REMEDIATION_RECOMMENDATION_CODE_CLI | REMEDIATION_RECOMMENDATION_CODE_OTHER | CATEGORIES | DEPENDS_ON | RELATED_TO | NOTES |
| ------- | ----------- | ------ | -------- | ------------ | ----------- | ---------- | ---------- | --------------------- | -------------------------- | -------------- | ----------------- | ------------------------ | --------------- | ---------- | ----------------- | --------- | -------------- | ----------------- | ------------------ | --------------------- | -------------------- | ------------------- | ------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- | -------------------- |
The following are the columns present in the CSV format:
- ASSESSMENT_START_TIME
- FINDING_UNIQUE_ID
- PROVIDER
- PROFILE
- ACCOUNT_ID
- ACCOUNT_NAME
- ACCOUNT_EMAIL
- ACCOUNT_ARN
- ACCOUNT_ORG
- ACCOUNT_TAGS
- REGION
- CHECK_ID
- CHECK_TITLE
- CHECK_TYPE
- STATUS
- STATUS_EXTENDED
- SERVICE_NAME
- SUBSERVICE_NAME
- SEVERITY
- RESOURCE_ID
- RESOURCE_ARN
- RESOURCE_TYPE
- RESOURCE_DETAILS
- RESOURCE_TAGS
- DESCRIPTION
- COMPLIANCE
- RISK
- RELATED_URL
- REMEDIATION_RECOMMENDATION_TEXT
- REMEDIATION_RECOMMENDATION_URL
- REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC
- REMEDIATION_RECOMMENDATION_CODE_TERRAFORM
- REMEDIATION_RECOMMENDATION_CODE_CLI
- REMEDIATION_RECOMMENDATION_CODE_OTHER
- CATEGORIES
- DEPENDS_ON
- RELATED_TO
- NOTES
> Since Prowler v3 the CSV column delimiter is the semicolon (`;`)
### JSON
```
The following code is an example output of the JSON format:
```json
[{
"AssessmentStartTime": "2022-12-01T14:16:57.354413",
"FindingUniqueId": "",
@@ -71,6 +106,10 @@ Hereunder is the structure for each of the supported report formats by Prowler:
"Severity": "low",
"ResourceId": "rds-instance-id",
"ResourceArn": "",
"ResourceTags": {
"test": "test",
"enironment": "dev"
},
"ResourceType": "AwsRdsDbInstance",
"ResourceDetails": "",
"Description": "Ensure RDS instances have minor version upgrade enabled.",
@@ -89,8 +128,17 @@ Hereunder is the structure for each of the supported report formats by Prowler:
}
},
"Categories": [],
"Notes": ""
},{
"Notes": "",
"Compliance": {
"CIS-1.4": [
"1.20"
],
"CIS-1.5": [
"1.20"
]
}
},
{
"AssessmentStartTime": "2022-12-01T14:16:57.354413",
"FindingUniqueId": "",
"Provider": "aws",
@@ -109,7 +157,7 @@ Hereunder is the structure for each of the supported report formats by Prowler:
"ResourceId": "rds-instance-id",
"ResourceArn": "",
"ResourceType": "AwsRdsDbInstance",
"ResourceDetails": "",
"ResourceTags": {},
"Description": "Ensure RDS instances have minor version upgrade enabled.",
"Risk": "Auto Minor Version Upgrade is a feature that you can enable to have your database automatically upgraded when a new minor database engine version is available. Minor version upgrades often patch security vulnerabilities and fix bugs and therefore should be applied.",
"RelatedUrl": "https://aws.amazon.com/blogs/database/best-practices-for-upgrading-amazon-rds-to-major-and-minor-versions-of-postgresql/",
@@ -126,15 +174,278 @@ Hereunder is the structure for each of the supported report formats by Prowler:
}
},
"Categories": [],
"Notes": ""
"Notes": "",
"Compliance": {}
}]
```
> NOTE: Each finding is a `json` object within a list. This has changed in v3 since in v2 the format used was [ndjson](http://ndjson.org/).
### JSON-OCSF
Based on [Open Cybersecurity Schema Framework Security Finding v1.0.0-rc.3](https://schema.ocsf.io/1.0.0-rc.3/classes/security_finding?extensions=)
```json
[{
"finding": {
"title": "Check if ACM Certificates are about to expire in specific days or less",
"desc": "Check if ACM Certificates are about to expire in specific days or less",
"supporting_data": {
"Risk": "Expired certificates can impact service availability.",
"Notes": ""
},
"remediation": {
"kb_articles": [
"https://docs.aws.amazon.com/config/latest/developerguide/acm-certificate-expiration-check.html"
],
"desc": "Monitor certificate expiration and take automated action to renew; replace or remove. Having shorter TTL for any security artifact is a general recommendation; but requires additional automation in place. If not longer required delete certificate. Use AWS config using the managed rule: acm-certificate-expiration-check."
},
"types": [
"Data Protection"
],
"src_url": "https://docs.aws.amazon.com/config/latest/developerguide/acm-certificate-expiration-check.html",
"uid": "prowler-aws-acm_certificates_expiration_check-012345678912-eu-west-1-*.xxxxxxxxxxxxxx",
"related_events": []
},
"resources": [
{
"group": {
"name": "acm"
},
"region": "eu-west-1",
"name": "xxxxxxxxxxxxxx",
"uid": "arn:aws:acm:eu-west-1:012345678912:certificate/xxxxxxxxxxxxxx",
"labels": [
{
"Key": "project",
"Value": "prowler-pro"
},
{
"Key": "environment",
"Value": "dev"
},
{
"Key": "terraform",
"Value": "true"
},
{
"Key": "terraform_state",
"Value": "aws"
}
],
"type": "AwsCertificateManagerCertificate",
"details": ""
}
],
"status_detail": "ACM Certificate for xxxxxxxxxxxxxx expires in 111 days.",
"compliance": {
"status": "Success",
"requirements": [
"CISA: ['your-data-2']",
"SOC2: ['cc_6_7']",
"MITRE-ATTACK: ['T1040']",
"GDPR: ['article_32']",
"HIPAA: ['164_308_a_4_ii_a', '164_312_e_1']",
"AWS-Well-Architected-Framework-Security-Pillar: ['SEC09-BP01']",
"NIST-800-171-Revision-2: ['3_13_1', '3_13_2', '3_13_8', '3_13_11']",
"NIST-800-53-Revision-4: ['ac_4', 'ac_17_2', 'sc_12']",
"NIST-800-53-Revision-5: ['sc_7_12', 'sc_7_16']",
"NIST-CSF-1.1: ['ac_5', 'ds_2']",
"RBI-Cyber-Security-Framework: ['annex_i_1_3']",
"FFIEC: ['d3-pc-im-b-1']",
"FedRamp-Moderate-Revision-4: ['ac-4', 'ac-17-2', 'sc-12']",
"FedRAMP-Low-Revision-4: ['ac-17', 'sc-12']"
],
"status_detail": "ACM Certificate for xxxxxxxxxxxxxx expires in 111 days."
},
"message": "ACM Certificate for xxxxxxxxxxxxxx expires in 111 days.",
"severity_id": 4,
"severity": "High",
"cloud": {
"account": {
"name": "",
"uid": "012345678912"
},
"region": "eu-west-1",
"org": {
"uid": "",
"name": ""
},
"provider": "aws",
"project_uid": ""
},
"time": "2023-06-30 10:28:55.297615",
"metadata": {
"original_time": "2023-06-30T10:28:55.297615",
"profiles": [
"dev"
],
"product": {
"language": "en",
"name": "Prowler",
"version": "3.6.1",
"vendor_name": "Prowler/ProwlerPro",
"feature": {
"name": "acm_certificates_expiration_check",
"uid": "acm_certificates_expiration_check",
"version": "3.6.1"
}
},
"version": "1.0.0-rc.3"
},
"state_id": 0,
"state": "New",
"status_id": 1,
"status": "Success",
"type_uid": 200101,
"type_name": "Security Finding: Create",
"impact_id": 0,
"impact": "Unknown",
"confidence_id": 0,
"confidence": "Unknown",
"activity_id": 1,
"activity_name": "Create",
"category_uid": 2,
"category_name": "Findings",
"class_uid": 2001,
"class_name": "Security Finding"
},{
"finding": {
"title": "Check if ACM Certificates are about to expire in specific days or less",
"desc": "Check if ACM Certificates are about to expire in specific days or less",
"supporting_data": {
"Risk": "Expired certificates can impact service availability.",
"Notes": ""
},
"remediation": {
"kb_articles": [
"https://docs.aws.amazon.com/config/latest/developerguide/acm-certificate-expiration-check.html"
],
"desc": "Monitor certificate expiration and take automated action to renew; replace or remove. Having shorter TTL for any security artifact is a general recommendation; but requires additional automation in place. If not longer required delete certificate. Use AWS config using the managed rule: acm-certificate-expiration-check."
},
"types": [
"Data Protection"
],
"src_url": "https://docs.aws.amazon.com/config/latest/developerguide/acm-certificate-expiration-check.html",
"uid": "prowler-aws-acm_certificates_expiration_check-012345678912-eu-west-1-xxxxxxxxxxxxx",
"related_events": []
},
"resources": [
{
"group": {
"name": "acm"
},
"region": "eu-west-1",
"name": "xxxxxxxxxxxxx",
"uid": "arn:aws:acm:eu-west-1:012345678912:certificate/3ea965a0-368d-4d13-95eb-5042a994edc4",
"labels": [
{
"Key": "name",
"Value": "prowler-pro-saas-dev-acm-internal-wildcard"
},
{
"Key": "project",
"Value": "prowler-pro-saas"
},
{
"Key": "environment",
"Value": "dev"
},
{
"Key": "terraform",
"Value": "true"
},
{
"Key": "terraform_state",
"Value": "aws/saas/base"
}
],
"type": "AwsCertificateManagerCertificate",
"details": ""
}
],
"status_detail": "ACM Certificate for xxxxxxxxxxxxx expires in 119 days.",
"compliance": {
"status": "Success",
"requirements": [
"CISA: ['your-data-2']",
"SOC2: ['cc_6_7']",
"MITRE-ATTACK: ['T1040']",
"GDPR: ['article_32']",
"HIPAA: ['164_308_a_4_ii_a', '164_312_e_1']",
"AWS-Well-Architected-Framework-Security-Pillar: ['SEC09-BP01']",
"NIST-800-171-Revision-2: ['3_13_1', '3_13_2', '3_13_8', '3_13_11']",
"NIST-800-53-Revision-4: ['ac_4', 'ac_17_2', 'sc_12']",
"NIST-800-53-Revision-5: ['sc_7_12', 'sc_7_16']",
"NIST-CSF-1.1: ['ac_5', 'ds_2']",
"RBI-Cyber-Security-Framework: ['annex_i_1_3']",
"FFIEC: ['d3-pc-im-b-1']",
"FedRamp-Moderate-Revision-4: ['ac-4', 'ac-17-2', 'sc-12']",
"FedRAMP-Low-Revision-4: ['ac-17', 'sc-12']"
],
"status_detail": "ACM Certificate for xxxxxxxxxxxxx expires in 119 days."
},
"message": "ACM Certificate for xxxxxxxxxxxxx expires in 119 days.",
"severity_id": 4,
"severity": "High",
"cloud": {
"account": {
"name": "",
"uid": "012345678912"
},
"region": "eu-west-1",
"org": {
"uid": "",
"name": ""
},
"provider": "aws",
"project_uid": ""
},
"time": "2023-06-30 10:28:55.297615",
"metadata": {
"original_time": "2023-06-30T10:28:55.297615",
"profiles": [
"dev"
],
"product": {
"language": "en",
"name": "Prowler",
"version": "3.6.1",
"vendor_name": "Prowler/ProwlerPro",
"feature": {
"name": "acm_certificates_expiration_check",
"uid": "acm_certificates_expiration_check",
"version": "3.6.1"
}
},
"version": "1.0.0-rc.3"
},
"state_id": 0,
"state": "New",
"status_id": 1,
"status": "Success",
"type_uid": 200101,
"type_name": "Security Finding: Create",
"impact_id": 0,
"impact": "Unknown",
"confidence_id": 0,
"confidence": "Unknown",
"activity_id": 1,
"activity_name": "Create",
"category_uid": 2,
"category_name": "Findings",
"class_uid": 2001,
"class_name": "Security Finding"
}]
```
> NOTE: Each finding is a `json` object.
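To illustrate working with this structure, a minimal sketch that loads an OCSF report and tallies findings by severity and status; the file name is an assumption:

```python
import json
from collections import Counter

# Minimal sketch: tally OCSF findings by severity and by status.
# "output.ocsf.json" is an assumed file name.
with open("output.ocsf.json") as report:
    findings = json.load(report)

by_severity = Counter(item["severity"] for item in findings)
by_status = Counter(item["status"] for item in findings)
print(dict(by_severity))
print(dict(by_status))
```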
### JSON-ASFF
The following code is an example output of the [JSON-ASFF](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-syntax.html) format:
```json
[{
"SchemaVersion": "2018-10-08",
"Id": "prowler-rds_instance_minor_version_upgrade_enabled-ACCOUNT_ID-eu-west-1-b1ade474a",
@@ -166,7 +477,30 @@ Hereunder is the structure for each of the supported report formats by Prowler:
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": []
"RelatedRequirements": [
"CISA your-systems-2 booting-up-thing-to-do-first-3",
"CIS-1.5 2.3.2",
"AWS-Foundational-Security-Best-Practices rds",
"RBI-Cyber-Security-Framework annex_i_6",
"FFIEC d3-cc-pm-b-1 d3-cc-pm-b-3"
],
"AssociatedStandards": [
{
"StandardsId": "CISA"
},
{
"StandardsId": "CIS-1.5"
},
{
"StandardsId": "AWS-Foundational-Security-Best-Practices"
},
{
"StandardsId": "RBI-Cyber-Security-Framework"
},
{
"StandardsId": "FFIEC"
}
]
},
"Remediation": {
"Recommendation": {
@@ -205,7 +539,30 @@ Hereunder is the structure for each of the supported report formats by Prowler:
],
"Compliance": {
"Status": "PASSED",
"RelatedRequirements": []
"RelatedRequirements": [
"CISA your-systems-2 booting-up-thing-to-do-first-3",
"CIS-1.5 2.3.2",
"AWS-Foundational-Security-Best-Practices rds",
"RBI-Cyber-Security-Framework annex_i_6",
"FFIEC d3-cc-pm-b-1 d3-cc-pm-b-3"
],
"AssociatedStandards": [
{
"StandardsId": "CISA"
},
{
"StandardsId": "CIS-1.5"
},
{
"StandardsId": "AWS-Foundational-Security-Best-Practices"
},
{
"StandardsId": "RBI-Cyber-Security-Framework"
},
{
"StandardsId": "FFIEC"
}
]
},
"Remediation": {
"Recommendation": {
@@ -216,4 +573,4 @@ Hereunder is the structure for each of the supported report formats by Prowler:
}]
```
> NOTE: Each finding is a `json` object.
> NOTE: Each finding is a `json` object within a list.
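A minimal sketch that loads an ASFF report and prints each finding's compliance status together with its associated standards; the file name is an assumption:

```python
import json

# Minimal sketch: print each ASFF finding's compliance status and the
# standards it is associated with. "output.asff.json" is an assumed file name.
with open("output.asff.json") as report:
    findings = json.load(report)

for finding in findings:
    compliance = finding.get("Compliance", {})
    standards = [s["StandardsId"] for s in compliance.get("AssociatedStandards", [])]
    print(finding["Id"], compliance.get("Status"), ", ".join(standards))
```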


@@ -9,7 +9,7 @@ theme:
language: en
logo: img/prowler-logo.png
name: material
favicon: img/ProwlerPro-icon.svg
favicon: img/prowler-icon.svg
features:
- navigation.tabs
- navigation.tabs.sticky
@@ -25,31 +25,55 @@ repo_url: https://github.com/prowler-cloud/prowler/
repo_name: prowler-cloud/prowler
nav:
- Getting Started:
- Overview: index.md
- Requirements: getting-started/requirements.md
- Tutorials:
- Miscellaneous: tutorials/misc.md
- Reporting: tutorials/reporting.md
- Compliance: tutorials/compliance.md
- Quick Inventory: tutorials/quick-inventory.md
- Configuration File: tutorials/configuration_file.md
- Logging: tutorials/logging.md
- Allowlist: tutorials/allowlist.md
- Pentesting: tutorials/pentesting.md
- AWS:
- Getting Started:
- Overview: index.md
- Requirements: getting-started/requirements.md
- Tutorials:
- Miscellaneous: tutorials/misc.md
- Reporting: tutorials/reporting.md
- Compliance: tutorials/compliance.md
- Quick Inventory: tutorials/quick-inventory.md
- Slack Integration: tutorials/integrations.md
- Configuration File: tutorials/configuration_file.md
- Logging: tutorials/logging.md
- Allowlist: tutorials/allowlist.md
- Pentesting: tutorials/pentesting.md
- Developer Guide: developer-guide/introduction.md
- AWS:
- Authentication: tutorials/aws/authentication.md
- Assume Role: tutorials/aws/role-assumption.md
- AWS Security Hub: tutorials/aws/securityhub.md
- AWS Organizations: tutorials/aws/organizations.md
- AWS Regions and Partitions: tutorials/aws/regions-and-partitions.md
- Scan Multiple AWS Accounts: tutorials/aws/multiaccount.md
- Send reports to AWS S3: tutorials/aws/s3.md
- AWS CloudShell: tutorials/aws/cloudshell.md
- Azure:
- Checks v2 to v3 Mapping: tutorials/aws/v2_to_v3_checks_mapping.md
- Tag-based Scan: tutorials/aws/tag-based-scan.md
- Resource ARNs based Scan: tutorials/aws/resource-arn-based-scan.md
- Boto3 Configuration: tutorials/aws/boto3-configuration.md
- Azure:
- Authentication: tutorials/azure/authentication.md
- Subscriptions: tutorials/azure/subscriptions.md
- Contact Us: contact.md
- Troubleshooting: troubleshooting.md
- About: about.md
- ProwlerPro: https://prowler.pro
- Google Cloud:
- Authentication: tutorials/gcp/authentication.md
- Developer Guide:
- Introduction: developer-guide/introduction.md
- Audit Info: developer-guide/audit-info.md
- Services: developer-guide/services.md
- Checks: developer-guide/checks.md
- Documentation: developer-guide/documentation.md
- Compliance: developer-guide/security-compliance-framework.md
- Outputs: developer-guide/outputs.md
- Integrations: developer-guide/integrations.md
- Testing:
- Unit Tests: developer-guide/unit-testing.md
- Integration Tests: developer-guide/integration-testing.md
- Security: security.md
- Contact Us: contact.md
- Troubleshooting: troubleshooting.md
- About: about.md
- ProwlerPro: https://prowler.pro
# Customization
extra:
consent:
@@ -73,7 +97,7 @@ extra:
link: https://twitter.com/prowlercloud
# Copyright
copyright: Copyright &copy; 2022 Toni de la Fuente, Maintained by the Prowler Team at Verica, Inc.</a>.
copyright: Copyright &copy; 2022 Toni de la Fuente, Maintained by the Prowler Team at Verica, Inc.</a>
markdown_extensions:
- abbr
@@ -112,4 +136,4 @@ markdown_extensions:
alternate_style: true
- pymdownx.tasklist:
custom_checkbox: true
- pymdownx.tilde
- pymdownx.tilde


@@ -4,7 +4,7 @@ AWSTemplateFormatVersion: '2010-09-09'
# aws cloudformation create-stack \
# --capabilities CAPABILITY_IAM --capabilities CAPABILITY_NAMED_IAM \
# --template-body "file://create_role_to_assume_cfn.yaml" \
# --stack-name "ProwlerExecRole" \
# --stack-name "ProwlerScanRole" \
# --parameters "ParameterKey=AuthorisedARN,ParameterValue=arn:aws:iam::123456789012:root"
#
Description: |
@@ -13,7 +13,7 @@ Description: |
account to assume that role. The role name and the ARN of the trusted user can all be passed
to the CloudFormation stack as parameters. Then you can run Prowler to perform a security
assessment with a command like:
./prowler -A <THIS_ACCOUNT_ID> -R ProwlerExecRole
prowler --role ProwlerScanRole.ARN
Parameters:
AuthorisedARN:
Description: |
@@ -22,12 +22,12 @@ Parameters:
Type: String
ProwlerRoleName:
Description: |
Name of the IAM role that will have these policies attached. Default: ProwlerExecRole
Name of the IAM role that will have these policies attached. Default: ProwlerScanRole
Type: String
Default: 'ProwlerExecRole'
Default: 'ProwlerScanRole'
Resources:
ProwlerExecRole:
ProwlerScanRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
@@ -42,31 +42,49 @@ Resources:
# Bool:
# 'aws:MultiFactorAuthPresent': true
# This is 12h that is maximum allowed, Minimum is 3600 = 1h
# to take advantage of this use -T like in './prowler -A <ACCOUNT_ID_TO_ASSUME> -R ProwlerExecRole -T 43200 -M text,html'
# to take advantage of this use -T like in './prowler --role ProwlerScanRole.ARN -T 43200'
MaxSessionDuration: 43200
ManagedPolicyArns:
- 'arn:aws:iam::aws:policy/SecurityAudit'
- 'arn:aws:iam::aws:policy/job-function/ViewOnlyAccess'
RoleName: !Sub ${ProwlerRoleName}
Policies:
- PolicyName: ProwlerExecRoleAdditionalViewPrivileges
- PolicyName: ProwlerScanRoleAdditionalViewPrivileges
PolicyDocument:
Version : '2012-10-17'
Statement:
- Effect: Allow
Action:
- 'ds:ListAuthorizedApplications'
- 'account:Get*'
- 'appstream:Describe*'
- 'appstream:List*'
- 'codeartifact:List*'
- 'codebuild:BatchGet*'
- 'ds:Get*'
- 'ds:Describe*'
- 'ds:List*'
- 'ec2:GetEbsEncryptionByDefault'
- 'ecr:Describe*'
- 'elasticfilesystem:DescribeBackupPolicy'
- 'glue:GetConnections'
- 'glue:GetSecurityConfiguration'
- 'glue:GetSecurityConfiguration*'
- 'glue:SearchTables'
- 'lambda:GetFunction'
- 'lambda:GetFunction*'
- 'macie2:GetMacieSession'
- 's3:GetAccountPublicAccessBlock'
- 'shield:DescribeProtection'
- 'shield:GetSubscriptionState'
- 'securityhub:BatchImportFindings'
- 'securityhub:GetFindings'
- 'ssm:GetDocument'
- 'support:Describe*'
- 'tag:GetTagKeys'
Resource: '*'
- PolicyName: ProwlerScanRoleAdditionalViewPrivilegesApiGateway
PolicyDocument:
Version : '2012-10-17'
Statement:
- Effect: Allow
Action:
- 'apigateway:GET'
Resource: 'arn:aws:apigateway:*::/restapis/*'
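As a hedged illustration, a boto3 sketch of assuming the role created by this template to obtain temporary credentials; the account ID is a placeholder and `DurationSeconds` mirrors the template's `MaxSessionDuration`:

```python
import boto3

# Hypothetical sketch: assume the role created by the template above to obtain
# temporary credentials. The account ID in the ARN is a placeholder and
# DurationSeconds mirrors the template's MaxSessionDuration (43200 s = 12 h).
sts = boto3.client("sts")
credentials = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ProwlerScanRole",
    RoleSessionName="prowler-scan",
    DurationSeconds=43200,
)["Credentials"]

print("Temporary session expires at:", credentials["Expiration"])
```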


@@ -3,24 +3,51 @@
"Statement": [
{
"Action": [
"ds:ListAuthorizedApplications",
"account:Get*",
"appstream:Describe*",
"appstream:List*",
"backup:List*",
"cloudtrail:GetInsightSelectors",
"codeartifact:List*",
"codebuild:BatchGet*",
"drs:Describe*",
"ds:Get*",
"ds:Describe*",
"ds:List*",
"ec2:GetEbsEncryptionByDefault",
"ecr:Describe*",
"ecr:GetRegistryScanningConfiguration",
"elasticfilesystem:DescribeBackupPolicy",
"glue:GetConnections",
"glue:GetSecurityConfiguration",
"glue:GetSecurityConfiguration*",
"glue:SearchTables",
"lambda:GetFunction",
"lambda:GetFunction*",
"logs:FilterLogEvents",
"macie2:GetMacieSession",
"s3:GetAccountPublicAccessBlock",
"shield:DescribeProtection",
"shield:GetSubscriptionState",
"securityhub:BatchImportFindings",
"securityhub:GetFindings",
"ssm:GetDocument",
"ssm-incidents:List*",
"support:Describe*",
"tag:GetTagKeys"
"tag:GetTagKeys",
"wellarchitected:List*"
],
"Resource": "*",
"Effect": "Allow",
"Sid": "AllowMoreReadForProwler"
},
{
"Effect": "Allow",
"Action": [
"apigateway:GET"
],
"Resource": [
"arn:aws:apigateway:*::/restapis/*",
"arn:aws:apigateway:*::/apis/*"
]
}
]
}
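As a hedged illustration, a boto3 sketch of attaching this additions policy as an inline policy on an existing scan role; the role name, policy name, and file name are assumptions:

```python
import boto3

# Hypothetical sketch: attach the additional read-only permissions shown above
# as an inline policy on an existing Prowler scan role. The role name, policy
# name, and local file name are assumptions, not values mandated by Prowler.
iam = boto3.client("iam")

with open("prowler-additions-policy.json") as policy_file:
    policy_document = policy_file.read()

iam.put_role_policy(
    RoleName="ProwlerScanRole",
    PolicyName="ProwlerAdditionsPolicy",
    PolicyDocument=policy_document,
)
```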

poetry.lock (generated file)

File diff suppressed because it is too large


@@ -1,6 +1,7 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import sys
from prowler.lib.banner import print_banner
@@ -11,32 +12,37 @@ from prowler.lib.check.check import (
exclude_services_to_run,
execute_checks,
list_categories,
list_checks_json,
list_services,
parse_checks_from_folder,
print_categories,
print_checks,
print_compliance_frameworks,
print_compliance_requirements,
print_services,
remove_custom_checks_module,
)
from prowler.lib.check.checks_loader import load_checks_to_execute
from prowler.lib.check.compliance import update_checks_metadata_with_compliance
from prowler.lib.cli.parser import ProwlerArgumentParser
from prowler.lib.logger import logger, set_logging_config
from prowler.lib.outputs.outputs import (
extract_findings_statistics,
send_to_s3_bucket,
)
from prowler.lib.outputs.compliance import display_compliance_table
from prowler.lib.outputs.html import add_html_footer, fill_html_overview_statistics
from prowler.lib.outputs.json import close_json
from prowler.lib.outputs.outputs import extract_findings_statistics
from prowler.lib.outputs.slack import send_slack_message
from prowler.lib.outputs.summary_table import display_summary_table
from prowler.providers.aws.lib.allowlist.allowlist import parse_allowlist_file
from prowler.providers.aws.lib.quick_inventory.quick_inventory import quick_inventory
from prowler.providers.aws.lib.s3.s3 import send_to_s3_bucket
from prowler.providers.aws.lib.security_hub.security_hub import (
resolve_security_hub_previous_findings,
)
from prowler.providers.common.audit_info import set_provider_audit_info
from prowler.providers.common.allowlist import set_provider_allowlist
from prowler.providers.common.audit_info import (
set_provider_audit_info,
set_provider_execution_parameters,
)
from prowler.providers.common.outputs import set_provider_output_options
from prowler.providers.common.quick_inventory import run_provider_quick_inventory
def prowler():
@@ -52,18 +58,19 @@ def prowler():
services = args.services
categories = args.categories
checks_file = args.checks_file
checks_folder = args.checks_folder
severities = args.severity
compliance_framework = args.compliance
if not args.no_banner:
print_banner(args)
# We treat the compliance framework as another output format
if compliance_framework:
args.output_modes.extend(compliance_framework)
# Set Logger configuration
set_logging_config(args.log_file, args.log_level)
if args.no_banner:
print_banner(args)
set_logging_config(args.log_level, args.log_file, args.only_logs)
if args.list_services:
print_services(list_services(provider))
@@ -74,33 +81,26 @@ def prowler():
bulk_checks_metadata = bulk_load_checks_metadata(provider)
if args.list_categories:
print_categories(list_categories(provider, bulk_checks_metadata))
print_categories(list_categories(bulk_checks_metadata))
sys.exit()
bulk_compliance_frameworks = {}
# Load compliance frameworks
logger.debug("Loading compliance frameworks from .json files")
# Load the compliance framework if specified with --compliance
# If some compliance argument is specified we have to load it
if (
args.list_compliance
or args.list_compliance_requirements
or compliance_framework
):
bulk_compliance_frameworks = bulk_load_compliance_frameworks(provider)
# Complete checks metadata with the compliance framework specification
update_checks_metadata_with_compliance(
bulk_compliance_frameworks, bulk_checks_metadata
bulk_compliance_frameworks = bulk_load_compliance_frameworks(provider)
# Complete checks metadata with the compliance framework specification
update_checks_metadata_with_compliance(
bulk_compliance_frameworks, bulk_checks_metadata
)
if args.list_compliance:
print_compliance_frameworks(bulk_compliance_frameworks)
sys.exit()
if args.list_compliance_requirements:
print_compliance_requirements(
bulk_compliance_frameworks, args.list_compliance_requirements
)
if args.list_compliance:
print_compliance_frameworks(bulk_compliance_frameworks)
sys.exit()
if args.list_compliance_requirements:
print_compliance_requirements(
bulk_compliance_frameworks, args.list_compliance_requirements
)
sys.exit()
sys.exit()
# Load checks to execute
checks_to_execute = load_checks_to_execute(
@@ -115,41 +115,53 @@ def prowler():
provider,
)
# Exclude checks if -e/--excluded-checks
if excluded_checks:
checks_to_execute = exclude_checks_to_run(checks_to_execute, excluded_checks)
# Exclude services if -s/--excluded-services
if excluded_services:
checks_to_execute = exclude_services_to_run(
checks_to_execute, excluded_services, provider
)
# Sort final check list
checks_to_execute = sorted(checks_to_execute)
# if --list-checks-json, dump a json file and exit
if args.list_checks_json:
print(list_checks_json(provider, sorted(checks_to_execute)))
sys.exit()
# If -l/--list-checks passed as argument, print checks to execute and quit
if args.list_checks:
print_checks(provider, checks_to_execute, bulk_checks_metadata)
print_checks(provider, sorted(checks_to_execute), bulk_checks_metadata)
sys.exit()
# Set the audit info based on the selected provider
audit_info = set_provider_audit_info(provider, args.__dict__)
# Parse content from Allowlist file and get it, if necessary, from S3
if provider == "aws" and args.allowlist_file:
allowlist_file = parse_allowlist_file(audit_info, args.allowlist_file)
else:
allowlist_file = None
# Import custom checks from folder
if checks_folder:
parse_checks_from_folder(audit_info, checks_folder, provider)
# Setting output options based on the selected provider
# Exclude checks if -e/--excluded-checks
if excluded_checks:
checks_to_execute = exclude_checks_to_run(checks_to_execute, excluded_checks)
# Exclude services if --excluded-services
if excluded_services:
checks_to_execute = exclude_services_to_run(
checks_to_execute, excluded_services, provider
)
# Once the audit_info is set and we have the eventual checks based on the resource identifier,
# it is time to check what Prowler's checks are going to be executed
if audit_info.audit_resources:
checks_from_resources = set_provider_execution_parameters(provider, audit_info)
checks_to_execute = checks_to_execute.intersection(checks_from_resources)
# Sort final check list
checks_to_execute = sorted(checks_to_execute)
# Parse Allowlist
allowlist_file = set_provider_allowlist(provider, audit_info, args)
# Set output options based on the selected provider
audit_output_options = set_provider_output_options(
provider, args, audit_info, allowlist_file, bulk_checks_metadata
)
# Quick Inventory for AWS
if provider == "aws" and args.quick_inventory:
quick_inventory(audit_info, args.output_directory)
# Run the quick inventory for the provider if available
if hasattr(args, "quick_inventory") and args.quick_inventory:
run_provider_quick_inventory(provider, audit_info, args)
sys.exit()
# Execute checks
@@ -166,10 +178,25 @@ def prowler():
# Extract findings stats
stats = extract_findings_statistics(findings)
if args.slack:
if "SLACK_API_TOKEN" in os.environ and "SLACK_CHANNEL_ID" in os.environ:
_ = send_slack_message(
os.environ["SLACK_API_TOKEN"],
os.environ["SLACK_CHANNEL_ID"],
stats,
provider,
audit_info,
)
else:
logger.critical(
"Slack integration needs SLACK_API_TOKEN and SLACK_CHANNEL_ID environment variables (see more in https://docs.prowler.cloud/en/latest/tutorials/integrations/#slack)."
)
sys.exit(1)
if args.output_modes:
for mode in args.output_modes:
# Close json file if exists
if mode == "json" or mode == "json-asff":
if "json" in mode:
close_json(
audit_output_options.output_filename, args.output_directory, mode
)
@@ -199,27 +226,41 @@ def prowler():
)
# Resolve previous fails of Security Hub
if provider == "aws" and args.security_hub:
resolve_security_hub_previous_findings(args.output_directory, audit_info)
if provider == "aws" and args.security_hub and not args.skip_sh_update:
resolve_security_hub_previous_findings(
audit_output_options.output_directory,
audit_output_options.output_filename,
audit_info,
)
# Display summary table
display_summary_table(
findings,
audit_info,
audit_output_options,
provider,
)
if compliance_framework and findings:
# Display compliance table
display_compliance_table(
if not args.only_logs:
display_summary_table(
findings,
bulk_checks_metadata,
compliance_framework,
audit_output_options.output_filename,
audit_output_options.output_directory,
audit_info,
audit_output_options,
provider,
)
if compliance_framework and findings:
for compliance in compliance_framework:
# Display compliance table
display_compliance_table(
findings,
bulk_checks_metadata,
compliance,
audit_output_options.output_filename,
audit_output_options.output_directory,
)
# If custom checks were passed, remove the modules
if checks_folder:
remove_custom_checks_module(checks_folder, provider)
# If there are failed findings exit code 3, except if -z is input
if not args.ignore_exit_code_3 and stats["total_fail"] > 0:
sys.exit(3)
if __name__ == "__main__":
prowler()
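As a hedged illustration, a small CI wrapper that runs Prowler and handles the exit code 3 behaviour implemented above; the exact CLI flags are assumptions based on the argument names in this file:

```python
import subprocess
import sys

# Hypothetical CI wrapper: exit code 3 means Prowler finished but found
# failed findings (suppressible with -z, as handled above). The provider
# argument and -M output flag are assumptions about the CLI surface.
result = subprocess.run(["prowler", "aws", "-M", "csv", "json"])
if result.returncode == 3:
    print("Prowler completed with failed findings", file=sys.stderr)
elif result.returncode != 0:
    sys.exit(result.returncode)
```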


@@ -0,0 +1,214 @@
{
"Framework": "AWS-Audit-Manager-Control-Tower-Guardrails",
"Version": "",
"Provider": "AWS",
"Description": "AWS Control Tower is a management and governance service that you can use to navigate through the setup process and governance requirements that are involved in creating a multi-account AWS environment.",
"Requirements": [
{
"Id": "1.0.1",
"Name": "Disallow launch of EC2 instance types that are not EBS-optimized",
"Description": "Checks whether EBS optimization is enabled for your EC2 instances that can be EBS-optimized",
"Attributes": [
{
"ItemId": "1.0.1",
"Section": "EBS checks",
"Service": "ebs"
}
],
"Checks": []
},
{
"Id": "1.0.2",
"Name": "Disallow EBS volumes that are unattached to an EC2 instance",
"Description": "Checks whether EBS volumes are attached to EC2 instances",
"Attributes": [
{
"ItemId": "1.0.2",
"Section": "EBS checks",
"Service": "ebs"
}
],
"Checks": []
},
{
"Id": "1.0.3",
"Name": "Enable encryption for EBS volumes attached to EC2 instances",
"Description": "Checks whether EBS volumes that are in an attached state are encrypted",
"Attributes": [
{
"ItemId": "1.0.3",
"Section": "EBS checks",
"Service": "ebs"
}
],
"Checks": [
"ec2_ebs_default_encryption"
]
},
{
"Id": "2.0.1",
"Name": "Disallow internet connection through RDP",
"Description": "Checks whether security groups that are in use disallow unrestricted incoming TCP traffic to the specified",
"Attributes": [
{
"ItemId": "2.0.1",
"Section": "Disallow Internet Connection",
"Service": "vpc"
}
],
"Checks": [
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_3389"
]
},
{
"Id": "2.0.2",
"Name": "Disallow internet connection through SSH",
"Description": "Checks whether security groups that are in use disallow unrestricted incoming SSH traffic.",
"Attributes": [
{
"ItemId": "2.0.2",
"Section": "Disallow Internet Connection",
"Service": "vpc"
}
],
"Checks": [
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22"
]
},
{
"Id": "3.0.1",
"Name": "Disallow access to IAM users without MFA",
"Description": "Checks whether the AWS Identity and Access Management users have multi-factor authentication (MFA) enabled.",
"Attributes": [
{
"ItemId": "3.0.1",
"Section": "Multi-Factor Authentication",
"Service": "iam"
}
],
"Checks": [
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "3.0.2",
"Name": "Disallow console access to IAM users without MFA",
"Description": "Checks whether AWS Multi-Factor Authentication (MFA) is enabled for all AWS Identity and Access Management (IAM) users that use a console password.",
"Attributes": [
{
"ItemId": "3.0.2",
"Section": "Multi-Factor Authentication",
"Service": "iam"
}
],
"Checks": [
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "3.0.3",
"Name": "Enable MFA for the root user",
"Description": "Checks whether the root user of your AWS account requires multi-factor authentication for console sign-in.",
"Attributes": [
{
"ItemId": "3.0.3",
"Section": "Multi-Factor Authentication",
"Service": "iam"
}
],
"Checks": [
"iam_root_mfa_enabled"
]
},
{
"Id": "4.0.1",
"Name": "Disallow public access to RDS database instances",
"Description": "Checks whether the Amazon Relational Database Service (RDS) instances are not publicly accessible. The rule is non-compliant if the publiclyAccessible field is true in the instance configuration item.",
"Attributes": [
{
"ItemId": "4.0.1",
"Section": "Disallow Public Access",
"Service": "rds"
}
],
"Checks": [
"rds_instance_no_public_access"
]
},
{
"Id": "4.0.2",
"Name": "Disallow public access to RDS database snapshots",
"Description": "Checks if Amazon Relational Database Service (Amazon RDS) snapshots are public. The rule is non-compliant if any existing and new Amazon RDS snapshots are public.",
"Attributes": [
{
"ItemId": "4.0.2",
"Section": "Disallow Public Access",
"Service": "rds"
}
],
"Checks": [
"rds_snapshots_public_access"
]
},
{
"Id": "4.1.1",
"Name": "Disallow public read access to S3 buckets",
"Description": "Checks that your S3 buckets do not allow public read access.",
"Attributes": [
{
"ItemId": "4.1.1",
"Section": "Disallow Public Access",
"Service": "s3"
}
],
"Checks": [
"rds_instance_no_public_access"
]
},
{
"Id": "4.1.2",
"Name": "Disallow public write access to S3 buckets",
"Description": "Checks that your S3 buckets do not allow public write access.",
"Attributes": [
{
"ItemId": "4.1.2",
"Section": "Disallow Public Access",
"Service": "s3"
}
],
"Checks": [
"s3_bucket_policy_public_write_access"
]
},
{
"Id": "5.0.1",
"Name": "Disallow RDS database instances that are not storage encrypted ",
"Description": "Checks whether storage encryption is enabled for your RDS DB instances.",
"Attributes": [
{
"ItemId": "5.0.1",
"Section": "Disallow Instances",
"Service": "rds"
}
],
"Checks": [
"rds_instance_storage_encrypted"
]
},
{
"Id": "5.1.1",
"Name": "Disallow S3 buckets that are not versioning enabled",
"Description": "Checks whether versioning is enabled for your S3 buckets.",
"Attributes": [
{
"ItemId": "5.1.1",
"Section": "Disallow Instances",
"Service": "s3"
}
],
"Checks": [
"s3_bucket_object_versioning"
]
}
]
}
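A minimal sketch that loads a compliance framework definition like the one above and lists requirements that have no mapped checks; the file name is an assumption:

```python
import json

# Minimal sketch: load a Prowler compliance framework definition and list
# requirements that have no checks mapped yet. The file name is an assumption.
with open("aws_audit_manager_control_tower_guardrails_aws.json") as spec:
    framework = json.load(spec)

print(framework["Framework"], framework["Version"] or "(no version)")
for requirement in framework["Requirements"]:
    if not requirement["Checks"]:
        print("no checks mapped:", requirement["Id"], "-", requirement.get("Name", ""))
```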


@@ -0,0 +1,605 @@
{
"Framework": "AWS-Foundational-Security-Best-Practices",
"Version": "",
"Provider": "AWS",
"Description": "The AWS Foundational Security Best Practices standard is a set of controls that detect when your deployed accounts and resources deviate from security best practices.",
"Requirements": [
{
"Id": "account",
"Name": "Account",
"Description": "This section contains recommendations for configuring AWS Account.",
"Attributes": [
{
"ItemId": "account",
"Section": "Account",
"Service": "account"
}
],
"Checks": [
"account_security_contact_information_is_registered"
]
},
{
"Id": "acm",
"Name": "ACM",
"Description": "This section contains recommendations for configuring ACM resources.",
"Attributes": [
{
"ItemId": "acm",
"Section": "Acm",
"Service": "acm"
}
],
"Checks": [
"account_security_contact_information_is_registered"
]
},
{
"Id": "api-gateway",
"Name": "API Gateway",
"Description": "This section contains recommendations for configuring API Gateway resources.",
"Attributes": [
{
"ItemId": "api-gateway",
"Section": "API Gateway",
"Service": "apigateway"
}
],
"Checks": [
"apigateway_logging_enabled",
"apigateway_client_certificate_enabled",
"apigateway_waf_acl_attached",
"apigatewayv2_authorizers_enabled",
"apigatewayv2_access_logging_enabled"
]
},
{
"Id": "auto-scaling",
"Name": "Benchmark: Auto Scaling",
"Description": "This section contains recommendations for configuring Auto Scaling resources and options.",
"Attributes": [
{
"ItemId": "auto-scaling",
"Section": "Auto Scaling",
"Service": "autoscaling"
}
],
"Checks": []
},
{
"Id": "cloudformation",
"Name": "Benchmark: CloudFormation",
"Description": "This section contains recommendations for configuring CloudFormation resources and options.",
"Attributes": [
{
"ItemId": "cloudformation",
"Section": "CloudFormation",
"Service": "cloudformation"
}
],
"Checks": []
},
{
"Id": "cloudfront",
"Name": "Benchmark: CloudFront",
"Description": "This section contains recommendations for configuring CloudFront resources and options.",
"Attributes": [
{
"ItemId": "cloudfront",
"Section": "CloudFront",
"Service": "cloudfront"
}
],
"Checks": [
"cloudfront_distributions_https_enabled",
"cloudfront_distributions_logging_enabled",
"cloudfront_distributions_using_waf",
"cloudfront_distributions_field_level_encryption_enabled",
"cloudfront_distributions_using_deprecated_ssl_protocols"
]
},
{
"Id": "cloudtrail",
"Name": "Benchmark: CloudTrail",
"Description": "This section contains recommendations for configuring CloudTrail resources and options.",
"Attributes": [
{
"ItemId": "cloudtrail",
"Section": "CloudTrail",
"Service": "cloudtrail"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"cloudtrail_cloudwatch_logging_enabled"
]
},
{
"Id": "codebuild",
"Name": "Benchmark: CodeBuild",
"Description": "This section contains recommendations for configuring CodeBuild resources and options.",
"Attributes": [
{
"ItemId": "codebuild",
"Section": "CodeBuild",
"Service": "codebuild"
}
],
"Checks": []
},
{
"Id": "config",
"Name": "Benchmark: Config",
"Description": "This section contains recommendations for configuring AWS Config.",
"Attributes": [
{
"ItemId": "config",
"Section": "Config",
"Service": "config"
}
],
"Checks": [
"config_recorder_all_regions_enabled"
]
},
{
"Id": "dms",
"Name": "Benchmark: DMS",
"Description": "This section contains recommendations for configuring AWS DMS resources and options.",
"Attributes": [
{
"ItemId": "dms",
"Section": "DMS",
"Service": "dms"
}
],
"Checks": []
},
{
"Id": "dynamodb",
"Name": "Benchmark: DynamoDB",
"Description": "This section contains recommendations for configuring AWS Dynamo DB resources and options.",
"Attributes": [
{
"ItemId": "dynamodb",
"Section": "DynamoDB",
"Service": "dynamodb"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_accelerator_cluster_encryption_enabled"
]
},
{
"Id": "ec2",
"Name": "Benchmark: EC2",
"Description": "This section contains recommendations for configuring EC2 resources and options.",
"Attributes": [
{
"ItemId": "ec2",
"Section": "EC2",
"Service": "ec2"
}
],
"Checks": [
"ec2_ebs_public_snapshot",
"ec2_securitygroup_default_restrict_traffic",
"ec2_ebs_volume_encryption",
"ec2_instance_older_than_specific_days",
"vpc_flow_logs_enabled",
"ec2_ebs_default_encryption",
"ec2_instance_imdsv2_enabled",
"ec2_instance_public_ip",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_not_used"
]
},
{
"Id": "ecr",
"Name": "Benchmark: Elastic Container Registry",
"Description": "This section contains recommendations for configuring AWS ECR resources and options.",
"Attributes": [
{
"ItemId": "ecr",
"Section": "ECR",
"Service": "ecr"
}
],
"Checks": [
"ecr_repositories_scan_images_on_push_enabled",
"ecr_repositories_lifecycle_policy_enabled"
]
},
{
"Id": "ecs",
"Name": "Benchmark: Elastic Container Service",
"Description": "This section contains recommendations for configuring ECS resources and options.",
"Attributes": [
{
"ItemId": "ecs",
"Section": "ECS",
"Service": "ecs"
}
],
"Checks": [
"ecs_task_definitions_no_environment_secrets"
]
},
{
"Id": "efs",
"Name": "Benchmark: EFS",
"Description": "This section contains recommendations for configuring AWS EFS resources and options.",
"Attributes": [
{
"ItemId": "efs",
"Section": "EFS",
"Service": "efs"
}
],
"Checks": [
"efs_encryption_at_rest_enabled",
"efs_have_backup_enabled"
]
},
{
"Id": "eks",
"Name": "Benchmark: EKS",
"Description": "This section contains recommendations for configuring AWS EKS resources and options.",
"Attributes": [
{
"ItemId": "eks",
"Section": "EKS",
"Service": "eks"
}
],
"Checks": []
},
{
"Id": "elastic-beanstalk",
"Name": "Benchmark: Elastic Beanstalk",
"Description": "This section contains recommendations for configuring AWS Elastic Beanstalk resources and options.",
"Attributes": [
{
"ItemId": "elastic-beanstalk",
"Section": "Elastic Beanstalk",
"Service": "elasticbeanstalk"
}
],
"Checks": []
},
{
"Id": "elb",
"Name": "Benchmark: ELB",
"Description": "This section contains recommendations for configuring Elastic Load Balancer resources and options.",
"Attributes": [
{
"ItemId": "elb",
"Section": "ELB",
"Service": "elb"
}
],
"Checks": [
"elbv2_logging_enabled",
"elb_logging_enabled",
"elbv2_deletion_protection",
"elbv2_desync_mitigation_mode"
]
},
{
"Id": "elbv2",
"Name": "Benchmark: ELBv2",
"Description": "This section contains recommendations for configuring Elastic Load Balancer resources and options.",
"Attributes": [
{
"ItemId": "elbv2",
"Section": "ELBv2",
"Service": "elbv2"
}
],
"Checks": []
},
{
"Id": "emr",
"Name": "Benchmark: EMR",
"Description": "This section contains recommendations for configuring EMR resources.",
"Attributes": [
{
"ItemId": "emr",
"Section": "EMR",
"Service": "emr"
}
],
"Checks": [
"emr_cluster_master_nodes_no_public_ip"
]
},
{
"Id": "elasticsearch",
"Name": "Benchmark: Elasticsearch",
"Description": "This section contains recommendations for configuring Elasticsearch resources and options.",
"Attributes": [
{
"ItemId": "elasticsearch",
"Section": "ElasticSearch",
"Service": "elasticsearch"
}
],
"Checks": [
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"opensearch_service_domains_audit_logging_enabled",
"opensearch_service_domains_audit_logging_enabled",
"opensearch_service_domains_https_communications_enforced"
]
},
{
"Id": "guardduty",
"Name": "Benchmark: GuardDuty",
"Description": "This section contains recommendations for configuring AWS GuardDuty resources and options.",
"Attributes": [
{
"ItemId": "guardduty",
"Section": "GuardDuty",
"Service": "guardduty"
}
],
"Checks": [
"guardduty_is_enabled"
]
},
{
"Id": "iam",
"Name": "Benchmark: IAM",
"Description": "This section contains recommendations for configuring AWS IAM resources and options.",
"Attributes": [
{
"ItemId": "iam",
"Section": "IAM",
"Service": "iam"
}
],
"Checks": [
"iam_rotate_access_key_90_days",
"iam_no_root_access_key",
"iam_user_mfa_enabled_console_access",
"iam_root_hardware_mfa_enabled",
"iam_password_policy_minimum_length_14",
"iam_disable_90_days_credentials",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_inline_policy_no_administrative_privileges"
]
},
{
"Id": "kinesis",
"Name": "Benchmark: Kinesis",
"Description": "This section contains recommendations for configuring AWS Kinesis resources and options.",
"Attributes": [
{
"ItemId": "kinesis",
"Section": "Kinesis",
"Service": "kinesis"
}
],
"Checks": []
},
{
"Id": "kms",
"Name": "Benchmark: KMS",
"Description": "This section contains recommendations for configuring AWS KMS resources and options.",
"Attributes": [
{
"ItemId": "kms",
"Section": "KMS",
"Service": "kms"
}
],
"Checks": []
},
{
"Id": "lambda",
"Name": "Benchmark: Lambda",
"Description": "This section contains recommendations for configuring Lambda resources and options.",
"Attributes": [
{
"ItemId": "lambda",
"Section": "Lambda",
"Service": "lambda"
}
],
"Checks": [
"awslambda_function_url_public",
"awslambda_function_using_supported_runtimes"
]
},
{
"Id": "network-firewall",
"Name": "Benchmark: Network Firewall",
"Description": "This section contains recommendations for configuring Network Firewall resources and options.",
"Attributes": [
{
"ItemId": "network-firewall",
"Section": "Network Firewall",
"Service": "network-firewall"
}
],
"Checks": []
},
{
"Id": "opensearch",
"Name": "Benchmark: OpenSearch",
"Description": "This section contains recommendations for configuring OpenSearch resources and options.",
"Attributes": [
{
"ItemId": "opensearch",
"Section": "OpenSearch",
"Service": "opensearch"
}
],
"Checks": [
"opensearch_service_domains_not_publicly_accessible"
]
},
{
"Id": "rds",
"Name": "Benchmark: RDS",
"Description": "This section contains recommendations for configuring AWS RDS resources and options.",
"Attributes": [
{
"ItemId": "rds",
"Section": "RDS",
"Service": "rds"
}
],
"Checks": [
"rds_snapshots_public_access",
"rds_instance_no_public_access",
"rds_instance_storage_encrypted",
"rds_instance_storage_encrypted",
"rds_instance_multi_az",
"rds_instance_enhanced_monitoring_enabled",
"rds_instance_deletion_protection",
"rds_instance_integration_cloudwatch_logs",
"rds_instance_minor_version_upgrade_enabled",
"rds_instance_multi_az"
]
},
{
"Id": "redshift",
"Name": "Benchmark: Redshift",
"Description": "This section contains recommendations for configuring AWS Redshift resources and options.",
"Attributes": [
{
"ItemId": "redshift",
"Section": "Redshift",
"Service": "redshift"
}
],
"Checks": [
"redshift_cluster_public_access",
"redshift_cluster_automated_snapshot",
"redshift_cluster_automated_snapshot",
"redshift_cluster_automatic_upgrades"
]
},
{
"Id": "s3",
"Name": "Benchmark: S3",
"Description": "This section contains recommendations for configuring AWS S3 resources and options.",
"Attributes": [
{
"ItemId": "s3",
"Section": "S3",
"Service": "s3"
}
],
"Checks": [
"s3_account_level_public_access_blocks",
"s3_account_level_public_access_blocks",
"s3_bucket_policy_public_write_access",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"s3_bucket_public_access",
"s3_bucket_server_access_logging_enabled",
"s3_bucket_object_versioning",
"s3_bucket_acl_prohibited"
]
},
{
"Id": "sagemaker",
"Name": "Benchmark: SageMaker",
"Description": "This section contains recommendations for configuring AWS Sagemaker resources and options.",
"Attributes": [
{
"ItemId": "sagemaker",
"Section": "SageMaker",
"Service": "sagemaker"
}
],
"Checks": [
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"sagemaker_notebook_instance_vpc_settings_configured",
"sagemaker_notebook_instance_root_access_disabled"
]
},
{
"Id": "secretsmanager",
"Name": "Benchmark: Secrets Manager",
"Description": "This section contains recommendations for configuring AWS Secrets Manager resources.",
"Attributes": [
{
"ItemId": "secretsmanager",
"Section": "Secrets Manager",
"Service": "secretsmanager"
}
],
"Checks": [
"secretsmanager_automatic_rotation_enabled",
"secretsmanager_automatic_rotation_enabled"
]
},
{
"Id": "sns",
"Name": "Benchmark: SNS",
"Description": "This section contains recommendations for configuring AWS SNS resources and options.",
"Attributes": [
{
"ItemId": "sns",
"Section": "SNS",
"Service": "sns"
}
],
"Checks": [
"sns_topics_kms_encryption_at_rest_enabled"
]
},
{
"Id": "sqs",
"Name": "Benchmark: SQS",
"Description": "This section contains recommendations for configuring AWS SQS resources and options.",
"Attributes": [
{
"ItemId": "sqs",
"Section": "SQS",
"Service": "sqs"
}
],
"Checks": [
"sqs_queues_server_side_encryption_enabled"
]
},
{
"Id": "ssm",
"Name": "Benchmark: SSM",
"Description": "This section contains recommendations for configuring AWS Systems Manager resources and options.",
"Attributes": [
{
"ItemId": "ssm",
"Section": "SSM",
"Service": "ssm"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"ssm_managed_compliant_patching",
"ssm_managed_compliant_patching"
]
},
{
"Id": "waf",
"Name": "Benchmark: WAF",
"Description": "This section contains recommendations for configuring AWS WAF resources and options.",
"Attributes": [
{
"ItemId": "waf",
"Section": "WAF",
"Service": "waf"
}
],
"Checks": []
}
]
}


@@ -0,0 +1,79 @@
{
"Framework": "AWS-Well-Architected-Framework-Reliability-Pillar",
"Version": "",
"Provider": "AWS",
"Description": "Best Practices for the AWS Well-Architected Framework Reliability Pillar encompasses the ability of a workload to perform its intended function correctly and consistently when its expected to. This includes the ability to operate and test the workload through its total lifecycle.",
"Requirements": [
{
"Id": "REL09-BP03",
"Description": "Configure backups to be taken automatically based on a periodic schedule informed by the Recovery Point Objective (RPO), or by changes in the dataset. Critical datasets with low data loss requirements need to be backed up automatically on a frequent basis, whereas less critical data where some loss is acceptable can be backed up less frequently.",
"Attributes": [
{
"Name": "REL09-BP03 Perform data backup automatically",
"WellArchitectedQuestionId": "backing-up-data",
"WellArchitectedPracticeId": "rel_backing_up_data_automated_backups_data",
"Section": "Failure management",
"SubSection": "Backup up data",
"LevelOfRisk": "High",
"AssessmentMethod": "Automated",
"Description": "Configure backups to be taken automatically based on a periodic schedule informed by the Recovery Point Objective (RPO), or by changes in the dataset. Critical datasets with low data loss requirements need to be backed up automatically on a frequent basis, whereas less critical data where some loss is acceptable can be backed up less frequently.",
"ImplementationGuidanceUrl": "https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_backing_up_data_automated_backups_data.html#implementation-guidance"
}
],
"Checks": [
"cloudformation_stacks_termination_protection_enabled",
"rds_instance_backup_enabled",
"rds_instance_deletion_protection",
"dynamodb_tables_pitr_enabled"
]
},
{
"Id": "REL06-BP01",
"Description": "Monitor components and services of AWS workload effectifely, using tools like Amazon CloudWatch and AWS Health Dashboard. Define relevant metrics, set thresholds, and analyze metrics and logs for early detection of issues.",
"Attributes": [
{
"Name": "REL06-BP01 Monitor all components for the workload (Generation)",
"WellArchitectedQuestionId": "monitor-aws-resources",
"WellArchitectedPracticeId": "rel_monitor_aws_resources_monitor_resources",
"Section": "Change management",
"SubSection": "Monitor workload resources",
"LevelOfRisk": "High",
"AssessmentMethod": "Automated",
"Description": "Monitor components and services of AWS workload effectifely, using tools like Amazon CloudWatch and AWS Health Dashboard. Define relevant metrics, set thresholds, and analyze metrics and logs for early detection of issues.",
"ImplementationGuidanceUrl": "https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_monitor_aws_resources_monitor_resources.html#implementation-guidance"
}
],
"Checks": [
"apigateway_logging_enabled",
"apigatewayv2_access_logging_enabled",
"awslambda_function_invoke_api_operations_cloudtrail_logging_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elb_logging_enabled",
"opensearch_service_domains_audit_logging_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"rds_instance_enhanced_monitoring_enabled",
"rds_instance_integration_cloudwatch_logs"
]
},
{
"Id": "REL10-BP01",
"Description": "Distribute workload data and resources across multiple Availability Zones or, where necessary, across AWS Regions. These locations can be as diverse as required.",
"Attributes": [
{
"Name": "REL10-BP01 Deploy the workload to multiple locations",
"WellArchitectedQuestionId": "fault-isolation",
"WellArchitectedPracticeId": "rel_fault_isolation_multiaz_region_system",
"Section": "Failure management",
"SubSection": "Use fault isolation to protect your workload",
"LevelOfRisk": "High",
"AssessmentMethod": "Automated",
"Description": "Distribute workload data and resources across multiple Availability Zones or, where necessary, across AWS Regions. These locations can be as diverse as required.",
"ImplementationGuidanceUrl": "https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/use-fault-isolation-to-protect-your-workload.html#implementation-guidance."
}
],
"Checks": [
"rds_instance_multi_az"
]
}
]
}

File diff suppressed because it is too large


@@ -1,6 +1,8 @@
{
"Framework": "CIS-AWS",
"Framework": "CIS",
"Version": "1.4",
"Provider": "AWS",
"Description": "The CIS Benchmark for CIS Amazon Web Services Foundations Benchmark, v1.4.0, Level 1 and 2 provides prescriptive guidance for configuring security options for a subset of Amazon Web Services. It has an emphasis on foundational, testable, and architecture agnostic settings",
"Requirements": [
{
"Id": "1.1",
@@ -153,7 +155,8 @@
"Id": "1.16",
"Description": "Ensure IAM policies that allow full \"*:*\" administrative privileges are not attached",
"Checks": [
"iam_policy_no_administrative_privileges"
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges"
],
"Attributes": [
{
@@ -258,7 +261,7 @@
"Id": "1.20",
"Description": "Ensure that IAM Access analyzer is enabled for all regions",
"Checks": [
"accessanalyzer_enabled_without_findings"
"accessanalyzer_enabled"
],
"Attributes": [
{
@@ -531,6 +534,7 @@
"Id": "2.1.5",
"Description": "Ensure that S3 Buckets are configured with 'Block public access (bucket settings)'",
"Checks": [
"s3_bucket_level_public_access_block",
"s3_account_level_public_access_blocks"
],
"Attributes": [

View File

@@ -1,6 +1,8 @@
{
"Framework": "CIS-AWS",
"Framework": "CIS",
"Version": "1.5",
"Provider": "AWS",
"Description": "The CIS Amazon Web Services Foundations Benchmark provides prescriptive guidance for configuring security options for a subset of Amazon Web Services with an emphasis on foundational, testable, and architecture agnostic settings.",
"Requirements": [
{
"Id": "1.1",
@@ -153,7 +155,8 @@
"Id": "1.16",
"Description": "Ensure IAM policies that allow full \"*:*\" administrative privileges are not attached",
"Checks": [
"iam_policy_no_administrative_privileges"
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges"
],
"Attributes": [
{
@@ -258,7 +261,7 @@
"Id": "1.20",
"Description": "Ensure that IAM Access analyzer is enabled for all regions",
"Checks": [
"accessanalyzer_enabled_without_findings"
"accessanalyzer_enabled"
],
"Attributes": [
{
@@ -531,6 +534,7 @@
"Id": "2.1.5",
"Description": "Ensure that S3 Buckets are configured with 'Block public access (bucket settings)'",
"Checks": [
"s3_bucket_level_public_access_block",
"s3_account_level_public_access_blocks"
],
"Attributes": [

File diff suppressed because it is too large


@@ -0,0 +1,425 @@
{
"Framework": "CISA",
"Version": "",
"Provider": "AWS",
"Description": "Cybersecurity & Infrastructure Security Agency's (CISA) Cyber Essentials is a guide for leaders of small businesses as well as leaders of small and local government agencies to develop an actionable understanding of where to start implementing organizational cybersecurity practices.",
"Requirements": [
{
"Id": "your-systems-1",
"Name": "Your Systems-1",
"Description": "Learn what is on your network. Maintain inventories of hardware and software assets to know what is in play and at-risk from attack.",
"Attributes": [
{
"ItemId": "your-systems-1",
"Section": "your systems",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"ec2_instance_older_than_specific_days",
"ssm_managed_compliant_patching",
"ec2_elastic_ip_unassgined"
]
},
{
"Id": "your-systems-2",
"Name": "Your Systems-2",
"Description": "Leverage automatic updates for all operating systems and third-party software.",
"Attributes": [
{
"ItemId": "your-systems-2",
"Section": "your systems",
"Service": "aws"
}
],
"Checks": [
"rds_instance_minor_version_upgrade_enabled",
"redshift_cluster_automatic_upgrades",
"ssm_managed_compliant_patching"
]
},
{
"Id": "your-systems-3",
"Name": "Your Systems-3",
"Description": "Implement security configurations for all hardware and software assets.",
"Attributes": [
{
"ItemId": "your-systems-3",
"Section": "your systems",
"Service": "aws"
}
],
"Checks": [
"apigateway_client_certificate_enabled",
"apigateway_logging_enabled",
"apigateway_waf_acl_attached",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"codebuild_project_user_controlled_buildspec",
"dynamodb_accelerator_cluster_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_public_snapshot",
"ec2_ebs_default_encryption",
"ec2_instance_public_ip",
"efs_encryption_at_rest_enabled",
"efs_have_backup_enabled",
"elb_logging_enabled",
"elbv2_deletion_protection",
"elbv2_waf_acl_attached",
"elbv2_ssl_listeners",
"elb_ssl_listeners",
"emr_cluster_master_nodes_no_public_ip",
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"guardduty_is_enabled",
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase",
"iam_no_custom_policy_permissive_role_assumption",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_inline_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key",
"iam_rotate_access_key_90_days",
"iam_user_mfa_enabled_console_access",
"iam_user_mfa_enabled_console_access",
"iam_disable_90_days_credentials",
"kms_cmk_rotation_enabled",
"awslambda_function_not_publicly_accessible",
"awslambda_function_not_publicly_accessible",
"cloudwatch_log_group_kms_encryption_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_enhanced_monitoring_enabled",
"rds_instance_backup_enabled",
"rds_instance_deletion_protection",
"rds_instance_storage_encrypted",
"rds_instance_backup_enabled",
"rds_instance_integration_cloudwatch_logs",
"rds_instance_multi_az",
"rds_instance_no_public_access",
"rds_instance_storage_encrypted",
"rds_snapshots_public_access",
"redshift_cluster_automated_snapshot",
"redshift_cluster_audit_logging",
"redshift_cluster_public_access",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"s3_bucket_server_access_logging_enabled",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_bucket_object_versioning",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_training_jobs_volume_and_output_encryption_enabled",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"sagemaker_notebook_instance_encryption_enabled",
"secretsmanager_automatic_rotation_enabled",
"securityhub_enabled",
"sns_topics_kms_encryption_at_rest_enabled",
"vpc_endpoint_connections_trust_boundaries",
"ec2_securitygroup_default_restrict_traffic",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port"
]
},
{
"Id": "your_-urroundings-1",
"Name": "Your Surroundings-1",
"Description": "Learn who is on your network. Maintain inventories of network connections (user accounts, vendors, business partners, etc.).",
"Attributes": [
{
"ItemId": "your-surroundings-1",
"Section": "your surroundings",
"Service": "aws"
}
],
"Checks": [
"ec2_elastic_ip_unassgined",
"vpc_flow_logs_enabled"
]
},
{
"Id": "your-surroundings-2",
"Name": "Your Surroundings-2",
"Description": "Leverage multi-factor authentication for all users, starting with privileged, administrative and remote access users.",
"Attributes": [
{
"ItemId": "your-surroundings-2",
"Section": "your surroundings",
"Service": "aws"
}
],
"Checks": [
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "your-surroundings-3",
"Name": "Your Surroundings-3",
"Description": "Grant access and admin permissions based on need-to-know and least privilege.",
"Attributes": [
{
"ItemId": "your-surroundings-3",
"Section": "your surroundings",
"Service": "aws"
}
],
"Checks": [
"elbv2_ssl_listeners",
"iam_no_custom_policy_permissive_role_assumption",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_inline_policy_no_administrative_privileges",
"iam_no_root_access_key"
]
},
{
"Id": "your-surroundings-4",
"Name": "Your Surroundings-4",
"Description": "Leverage unique passwords for all user accounts.",
"Attributes": [
{
"ItemId": "your-surroundings-4",
"Section": "your surroundings",
"Service": "aws"
}
],
"Checks": [
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase"
]
},
{
"Id": "your-data-1",
"Name": "Your Data-1",
"Description": "Learn how your data is protected.",
"Attributes": [
{
"ItemId": "your-data-1",
"Section": "your data",
"Service": "aws"
}
],
"Checks": [
"efs_encryption_at_rest_enabled",
"cloudtrail_kms_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_default_encryption",
"opensearch_service_domains_encryption_at_rest_enabled",
"rds_instance_storage_encrypted",
"rds_instance_storage_encrypted",
"redshift_cluster_audit_logging",
"s3_bucket_default_encryption",
"sagemaker_training_jobs_volume_and_output_encryption_enabled",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled"
]
},
{
"Id": "your-data-2",
"Name": "Your Data-2",
"Description": "Learn what is happening on your network, manage network and perimeter components, host and device components, data-at-rest and in-transit, and user behavior activities.",
"Attributes": [
{
"ItemId": "your-data-2",
"Section": "your data",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"apigateway_client_certificate_enabled",
"apigateway_logging_enabled",
"efs_have_backup_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_changes_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"dynamodb_tables_kms_cmk_encryption_enabled",
"ec2_ebs_volume_encryption",
"ec2_instance_public_ip",
"efs_encryption_at_rest_enabled",
"elb_logging_enabled",
"elbv2_waf_acl_attached",
"elbv2_ssl_listeners",
"elb_ssl_listeners",
"emr_cluster_master_nodes_no_public_ip",
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"awslambda_function_not_publicly_accessible",
"awslambda_function_not_publicly_accessible",
"cloudwatch_log_group_kms_encryption_enabled",
"rds_instance_storage_encrypted",
"rds_instance_integration_cloudwatch_logs",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"rds_snapshots_public_access",
"redshift_cluster_audit_logging",
"redshift_cluster_public_access",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy",
"redshift_cluster_public_access",
"s3_bucket_server_access_logging_enabled",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_acl_prohibited",
"sagemaker_training_jobs_volume_and_output_encryption_enabled",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled",
"ec2_securitygroup_default_restrict_traffic",
"vpc_flow_logs_enabled",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_securitygroup_allow_ingress_from_internet_to_any_port"
]
},
{
"Id": "your-data-3",
"Name": "Your Data-3",
"Description": "Domain name system protection.",
"Attributes": [
{
"ItemId": "your-data-3",
"Section": "your data",
"Service": "aws"
}
],
"Checks": [
"elbv2_waf_acl_attached"
]
},
{
"Id": "your-data-4",
"Name": "Your Data-4",
"Description": "Establish regular automated backups and redundancies of key systems.",
"Attributes": [
{
"ItemId": "your-data-4",
"Section": "your data",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"elbv2_deletion_protection",
"rds_instance_backup_enabled",
"rds_instance_deletion_protection",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "your-data-5",
"Name": "Your Data-5",
"Description": "Leverage protections for backups, including physical security, encryption and offline copies.",
"Attributes": [
{
"ItemId": "your-data-5",
"Section": "your data",
"Service": "aws"
}
],
"Checks": []
},
{
"Id": "your-crisis-response-2",
"Name": "Your Crisis Response-2",
"Description": "Lead development of an internal reporting structure to detect, communicate and contain attacks.",
"Attributes": [
{
"ItemId": "your-crisis-response-2",
"Section": "your crisis response",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "booting-up-thing-to-do-first-1",
"Name": "YBooting Up: Things to Do First-1",
"Description": "Lead development of an internal reporting structure to detect, communicate and contain attacks.",
"Attributes": [
{
"ItemId": "booting-up-thing-to-do-first-1",
"Section": "booting up thing to do first",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "booting-up-thing-to-do-first-2",
"Name": "YBooting Up: Things to Do First-2",
"Description": "Require multi-factor authentication (MFA) for accessing your systems whenever possible. MFA should be required of all users, but start with privileged, administrative, and remote access users.",
"Attributes": [
{
"ItemId": "booting-up-thing-to-do-first-2",
"Section": "booting up thing to do first",
"Service": "aws"
}
],
"Checks": [
"iam_user_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"iam_user_hardware_mfa_enabled"
]
},
{
"Id": "booting-up-thing-to-do-first-3",
"Name": "YBooting Up: Things to Do First-3",
"Description": "Enable automatic updates whenever possible. Replace unsupported operating systems, applications and hardware. Test and deploy patches quickly.",
"Attributes": [
{
"ItemId": "booting-up-thing-to-do-first-1",
"Section": "booting up thing to do first",
"Service": "aws"
}
],
"Checks": [
"rds_instance_minor_version_upgrade_enabled",
"redshift_cluster_automatic_upgrades",
"ssm_managed_compliant_patching"
]
}
]
}
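
The compliance files shown in this diff share one layout: a top-level object with Framework, Version, Provider and Description, plus a Requirements array whose entries map an Id, Name, Description and Attributes to a list of Prowler check IDs. Below is a minimal Python sketch of how a file in that layout could be loaded and scanned for check IDs that appear more than once within a single requirement; the path "compliance/cisa_aws.json" and the function name are illustrative assumptions, not names taken from this diff.

# Minimal sketch, assuming a compliance file with the layout shown above.
# The path used below is a placeholder, not a file name taken from this diff.
import json
from collections import Counter

def duplicated_checks(path):
    """Return {requirement Id: [check IDs listed more than once]}."""
    with open(path, encoding="utf-8") as handle:
        framework = json.load(handle)
    repeats = {}
    for requirement in framework.get("Requirements", []):
        counts = Counter(requirement.get("Checks", []))
        doubled = sorted(check for check, count in counts.items() if count > 1)
        if doubled:
            repeats[requirement["Id"]] = doubled
    return repeats

if __name__ == "__main__":
    for requirement_id, checks in duplicated_checks("compliance/cisa_aws.json").items():
        print(requirement_id, ", ".join(checks))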

File diff suppressed because it is too large


@@ -0,0 +1,442 @@
{
"Framework": "FedRAMP-Low-Revision-4",
"Version": "",
"Provider": "AWS",
"Description": "The Federal Risk and Authorization Management Program (FedRAMP) was established in 2011. It provides a cost-effective, risk-based approach for the adoption and use of cloud services by the U.S. federal government. FedRAMP empowers federal agencies to use modern cloud technologies, with an emphasis on the security and protection of federal information.",
"Requirements": [
{
"Id": "ac-2",
"Name": "Account Management (AC-2)",
"Description": "Manage system accounts, group memberships, privileges, workflow, notifications, deactivations, and authorizations.",
"Attributes": [
{
"ItemId": "ac-2",
"Section": "Access Control (AC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_log_file_validation_enabled",
"cloudwatch_changes_to_network_acls_alarm_configured",
"opensearch_service_domains_cloudwatch_logging_enabled",
"guardduty_is_enabled",
"iam_password_policy_minimum_length_14",
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_inline_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key",
"iam_rotate_access_key_90_days",
"iam_user_mfa_enabled_console_access",
"iam_user_hardware_mfa_enabled",
"iam_disable_90_days_credentials",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"securityhub_enabled"
]
},
{
"Id": "ac-3",
"Name": "Account Management (AC-3)",
"Description": "The information system enforces approved authorizations for logical access to information and system resources in accordance with applicable access control policies.",
"Attributes": [
{
"ItemId": "ac-3",
"Section": "Access Control (AC)",
"Service": "aws"
}
],
"Checks": [
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"ec2_instance_imdsv2_enabled",
"emr_cluster_master_nodes_no_public_ip",
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_inline_policy_no_administrative_privileges",
"iam_no_root_access_key",
"iam_disable_90_days_credentials",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured"
]
},
{
"Id": "ac-17",
"Name": "Remote Access (AC-17)",
"Description": "Authorize remote access systems prior to connection. Enforce remote connection requirements to information systems.",
"Attributes": [
{
"ItemId": "ac-17",
"Section": "Access Control (AC)",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"elb_ssl_listeners",
"emr_cluster_master_nodes_no_public_ip",
"guardduty_is_enabled",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_secure_transport_policy",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"securityhub_enabled",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "au-2",
"Name": "Audit Events (AU-2)",
"Description": "The organization: a. Determines that the information system is capable of auditing the following events: [Assignment: organization-defined auditable events]; b. Coordinates the security audit function with other organizational entities requiring audit- related information to enhance mutual support and to help guide the selection of auditable events; c. Provides a rationale for why the auditable events are deemed to be adequate support after- the-fact investigations of security incidents",
"Attributes": [
{
"ItemId": "au-2",
"Section": "Audit and Accountability (AU)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_log_file_validation_enabled",
"elbv2_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "au-9",
"Name": "Protection of Audit Information (AU-9)",
"Description": "The information system protects audit information and audit tools from unauthorized access, modification, and deletion.",
"Attributes": [
{
"ItemId": "au-9",
"Section": "Audit and Accountability (AU)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_kms_encryption_enabled",
"cloudtrail_log_file_validation_enabled",
"cloudwatch_log_group_kms_encryption_enabled",
"s3_bucket_object_versioning"
]
},
{
"Id": "au-11",
"Name": "Audit Record Retention (AU-11)",
"Description": "The organization retains audit records for at least 90 days to provide support for after-the-fact investigations of security incidents and to meet regulatory and organizational information retention requirements.",
"Attributes": [
{
"ItemId": "au-11",
"Section": "Audit and Accountability (AU)",
"Service": "aws"
}
],
"Checks": [
"cloudwatch_log_group_retention_policy_specific_days_enabled"
]
},
{
"Id": "ca-7",
"Name": "Continuous Monitoring (CA-7)",
"Description": "Continuously monitor configuration management processes. Determine security impact, environment and operational risks.",
"Attributes": [
{
"ItemId": "ca-7",
"Section": "Security Assessment And Authorization (CA)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudwatch_changes_to_network_acls_alarm_configured",
"ec2_instance_imdsv2_enabled",
"elbv2_waf_acl_attached",
"guardduty_is_enabled",
"rds_instance_enhanced_monitoring_enabled",
"redshift_cluster_audit_logging",
"securityhub_enabled"
]
},
{
"Id": "cm-2",
"Name": "Baseline Configuration (CM-2)",
"Description": "The organization develops, documents, and maintains under configuration control, a current baseline configuration of the information system.",
"Attributes": [
{
"ItemId": "cm-2",
"Section": "Configuration Management (CM)",
"Service": "aws"
}
],
"Checks": [
"apigateway_waf_acl_attached",
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"ec2_instance_older_than_specific_days",
"elbv2_deletion_protection",
"emr_cluster_master_nodes_no_public_ip",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"ssm_managed_compliant_patching",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "cm-8",
"Name": "Information System Component Inventory (CM-8)",
"Description": "The organization develops and documents an inventory of information system components that accurately reflects the current information system, includes all components within the authorization boundary of the information system, is at the level of granularity deemed necessary for tracking and reporting and reviews and updates the information system component inventory.",
"Attributes": [
{
"ItemId": "cm-8",
"Section": "Configuration Management (CM)",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"guardduty_is_enabled",
"ssm_managed_compliant_patching",
"ssm_managed_compliant_patching"
]
},
{
"Id": "cp-9",
"Name": "Information System Backup (CP-9)",
"Description": "The organization conducts backups of user-level information, system-level information and information system documentation including security-related documentation contained in the information system and protects the confidentiality, integrity, and availability of backup information at storage locations.",
"Attributes": [
{
"ItemId": "cp-9",
"Section": "Contingency Planning (CP)",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "cp-10",
"Name": "Information System Recovery And Reconstitution (CP-10)",
"Description": "The organization provides for the recovery and reconstitution of the information system to a known state after a disruption, compromise, or failure.",
"Attributes": [
{
"ItemId": "cp-10",
"Section": "Contingency Planning (CP)",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"dynamodb_tables_pitr_enabled",
"efs_have_backup_enabled",
"elbv2_deletion_protection",
"rds_instance_backup_enabled",
"rds_instance_multi_az",
"rds_instance_backup_enabled",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "ia-2",
"Name": "Identification and Authentication (Organizational users) (IA-2)",
"Description": "The information system uniquely identifies and authenticates organizational users (or processes acting on behalf of organizational users).",
"Attributes": [
{
"ItemId": "ia-2",
"Section": "Identification and Authentication (IA)",
"Service": "aws"
}
],
"Checks": [
"iam_password_policy_minimum_length_14",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key",
"iam_user_mfa_enabled_console_access",
"iam_user_mfa_enabled_console_access"
]
},
{
"Id": "ir-4",
"Name": "Incident Handling (IR-4)",
"Description": "The organization implements an incident handling capability for security incidents that includes preparation, detection and analysis, containment, eradication, and recovery, coordinates incident handling activities with contingency planning activities and incorporates lessons learned from ongoing incident handling activities into incident response procedures, training, and testing, and implements the resulting changes accordingly.",
"Attributes": [
{
"ItemId": "ir-4",
"Section": "Incident Response (IR)",
"Service": "aws"
}
],
"Checks": [
"cloudwatch_changes_to_network_acls_alarm_configured",
"cloudwatch_changes_to_network_gateways_alarm_configured",
"cloudwatch_changes_to_network_route_tables_alarm_configured",
"cloudwatch_changes_to_vpcs_alarm_configured",
"guardduty_is_enabled",
"guardduty_no_high_severity_findings",
"securityhub_enabled"
]
},
{
"Id": "sa-3",
"Name": "System Development Life Cycle (SA-3)",
"Description": "The organization manages the information system using organization-defined system development life cycle, defines and documents information security roles and responsibilities throughout the system development life cycle, identifies individuals having information security roles and responsibilities and integrates the organizational information security risk management process into system development life cycle activities.",
"Attributes": [
{
"ItemId": "sa-3",
"Section": "System and Services Acquisition (SA)",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm"
]
},
{
"Id": "sc-5",
"Name": "Denial Of Service Protection (SC-5)",
"Description": "The information system protects against or limits the effects of the following types of denial of service attacks: [Assignment: organization-defined types of denial of service attacks or references to sources for such information] by employing [Assignment: organization-defined security safeguards].",
"Attributes": [
{
"ItemId": "sc-5",
"Section": "System and Communications Protection (SC)",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"elbv2_deletion_protection",
"guardduty_is_enabled",
"rds_instance_backup_enabled",
"rds_instance_deletion_protection",
"rds_instance_multi_az",
"redshift_cluster_automated_snapshot",
"s3_bucket_object_versioning"
]
},
{
"Id": "sc-7",
"Name": "Boundary Protection (SC-7)",
"Description": "The information system: a. Monitors and controls communications at the external boundary of the system and at key internal boundaries within the system; b. Implements subnetworks for publicly accessible system components that are [Selection: physically; logically] separated from internal organizational networks; and c. Connects to external networks or information systems only through managed interfaces consisting of boundary protection devices arranged in accordance with an organizational security architecture.",
"Attributes": [
{
"ItemId": "sc-7",
"Section": "System and Communications Protection (SC)",
"Service": "aws"
}
],
"Checks": [
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"elbv2_waf_acl_attached",
"elb_ssl_listeners",
"emr_cluster_master_nodes_no_public_ip",
"opensearch_service_domains_node_to_node_encryption_enabled",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_secure_transport_policy",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "sc-12",
"Name": "Cryptographic Key Establishment And Management (SC-12)",
"Description": "The organization establishes and manages cryptographic keys for required cryptography employed within the information system in accordance with [Assignment: organization-defined requirements for key generation, distribution, storage, access, and destruction].",
"Attributes": [
{
"ItemId": "sc-12",
"Section": "System and Communications Protection (SC)",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"kms_cmk_rotation_enabled"
]
},
{
"Id": "sc-13",
"Name": "Use of Cryptography (SC-13)",
"Description": "The information system implements FIPS-validated or NSA-approved cryptography in accordance with applicable federal laws, Executive Orders, directives, policies, regulations, and standards.",
"Attributes": [
{
"ItemId": "sc-13",
"Section": "System and Communications Protection (SC)",
"Service": "aws"
}
],
"Checks": [
"s3_bucket_default_encryption",
"sagemaker_training_jobs_volume_and_output_encryption_enabled",
"sagemaker_notebook_instance_encryption_enabled",
"sns_topics_kms_encryption_at_rest_enabled"
]
}
]
}
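
Because the FedRAMP file above stores the mapping requirement-first, answering "which controls does a given check contribute to?" means inverting it. A hedged sketch under the same layout assumption; the path "fedramp_low_revision_4_aws.json" and the function name are placeholders, not taken from this diff.

# Hedged sketch: invert Requirements -> Checks into check -> requirement Ids.
import json
from collections import defaultdict

def requirements_per_check(path):
    with open(path, encoding="utf-8") as handle:
        framework = json.load(handle)
    mapping = defaultdict(list)
    for requirement in framework.get("Requirements", []):
        # set() so a check listed twice in one requirement is only counted once
        for check in sorted(set(requirement.get("Checks", []))):
            mapping[check].append(requirement["Id"])
    return mapping

if __name__ == "__main__":
    mapping = requirements_per_check("fedramp_low_revision_4_aws.json")
    print("guardduty_is_enabled ->", mapping.get("guardduty_is_enabled", []))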

File diff suppressed because it is too large


@@ -0,0 +1,908 @@
{
"Framework": "FFIEC",
"Version": "",
"Provider": "AWS",
"Description": "In light of the increasing volume and sophistication of cyber threats, the Federal Financial Institutions Examination Council (FFIEC) developed the Cybersecurity Assessment Tool (Assessment), on behalf of its members, to help institutions identify their risks and determine their cybersecurity maturity.",
"Requirements": [
{
"Id": "d1-g-it-b-1",
"Name": "D1.G.IT.B.1",
"Description": "An inventory of organizational assets (e.g., hardware, software, data, and systems hosted externally) is maintained.",
"Attributes": [
{
"ItemId": "d1-g-it-b-1",
"Section": "Cyber Risk Management and Oversight (Domain 1)",
"SubSection": "Governance (G)",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"ec2_instance_older_than_specific_days",
"ec2_elastic_ip_unassgined"
]
},
{
"Id": "d1-rm-ra-b-2",
"Name": "D1.RM.RA.B.2",
"Description": "The risk assessment identifies Internet- based systems and high-risk transactions that warrant additional authentication controls.",
"Attributes": [
{
"ItemId": "d1-rm-ra-b-2",
"Section": "Cyber Risk Management and Oversight (Domain 1)",
"SubSection": "Risk Management (RM)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled"
]
},
{
"Id": "d1-rm-rm-b-1",
"Name": "D1.RM.Rm.B.1",
"Description": "An information security and business continuity risk management function(s) exists within the institution.",
"Attributes": [
{
"ItemId": "d1-rm-rm-b-1",
"Section": "Cyber Risk Management and Oversight (Domain 1)",
"SubSection": "Risk Management (RM)",
"Service": "aws"
}
],
"Checks": [
"rds_instance_backup_enabled",
"rds_instance_backup_enabled",
"rds_instance_multi_az",
"redshift_cluster_automated_snapshot"
]
},
{
"Id": "d2-is-is-b-1",
"Name": "D2.IS.Is.B.1",
"Description": "Information security threats are gathered and shared with applicable internal employees.",
"Attributes": [
{
"ItemId": "d2-is-is-b-1",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Information Sharing (IS)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_cloudwatch_logging_enabled",
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "d2-ma-ma-b-1",
"Name": "D2.MA.Ma.B.1",
"Description": "Information security threats are gathered and shared with applicable internal employees.",
"Attributes": [
{
"ItemId": "d2-ma-ma-b-1",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Monitoring and Analyzing (MA)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudwatch_log_group_retention_policy_specific_days_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d2-ma-ma-b-2",
"Name": "D2.MA.Ma.B.2",
"Description": "Computer event logs are used for investigations once an event has occurred.",
"Attributes": [
{
"ItemId": "d2-ma-ma-b-2",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Monitoring and Analyzing (MA)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d2-ti-ti-b-1",
"Name": "D2.TI.Ti.B.1",
"Description": "The institution belongs or subscribes to a threat and vulnerability information-sharing source(s) that provides information on threats (e.g., FS-ISAC, US- CERT).",
"Attributes": [
{
"ItemId": "d2-ti-ti-b-1",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Threat Intelligence (TI)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "d2-ti-ti-b-2",
"Name": "D2.TI.Ti.B.2",
"Description": "Threat information is used to monitor threats and vulnerabilities.",
"Attributes": [
{
"ItemId": "d2-ti-ti-b-2",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Threat Intelligence (TI)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled",
"ssm_managed_compliant_patching"
]
},
{
"Id": "d2-ti-ti-b-3",
"Name": "D2.TI.Ti.B.3",
"Description": "Threat information is used to enhance internal risk management and controls.",
"Attributes": [
{
"ItemId": "d2-ti-ti-b-3",
"Section": "Threat Intelligence and Collaboration (Domain 2)",
"SubSection": "Threat Intelligence (TI)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "d3-cc-pm-b-1",
"Name": "D3.CC.PM.B.1",
"Description": "A patch management program is implemented and ensures that software and firmware patches are applied in a timely manner.",
"Attributes": [
{
"ItemId": "d3-cc-pm-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Corrective Controls (CC)",
"Service": "aws"
}
],
"Checks": [
"rds_instance_minor_version_upgrade_enabled",
"redshift_cluster_automatic_upgrades",
"ssm_managed_compliant_patching"
]
},
{
"Id": "d3-cc-pm-b-3",
"Name": "D3.CC.PM.B.3",
"Description": "Patch management reports are reviewed and reflect missing security patches.",
"Attributes": [
{
"ItemId": "d3-cc-pm-b-3",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Corrective Controls (CC)",
"Service": "aws"
}
],
"Checks": [
"rds_instance_minor_version_upgrade_enabled",
"redshift_cluster_automatic_upgrades",
"ssm_managed_compliant_patching"
]
},
{
"Id": "d3-dc-an-b-1",
"Name": "D3.DC.An.B.1",
"Description": "The institution is able to detect anomalous activities through monitoring across the environment.",
"Attributes": [
{
"ItemId": "d3-dc-an-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"guardduty_no_high_severity_findings",
"securityhub_enabled"
]
},
{
"Id": "d3-dc-an-b-2",
"Name": "D3.DC.An.B.2",
"Description": "Customer transactions generating anomalous activity alerts are monitored and reviewed.",
"Attributes": [
{
"ItemId": "d3-dc-an-b-2",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "d3-dc-an-b-3",
"Name": "D3.DC.An.B.3",
"Description": "Logs of physical and/or logical access are reviewed following events.",
"Attributes": [
{
"ItemId": "d3-dc-an-b-3",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"s3_bucket_server_access_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d3-dc-an-b-4",
"Name": "D3.DC.An.B.4",
"Description": "Access to critical systems by third parties is monitored for unauthorized or unusual activity.",
"Attributes": [
{
"ItemId": "d3-dc-an-b-4",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"opensearch_service_domains_cloudwatch_logging_enabled",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d3-dc-an-b-5",
"Name": "D3.DC.An.B.5",
"Description": "Elevated privileges are monitored.",
"Attributes": [
{
"ItemId": "d3-dc-an-b-5",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled"
]
},
{
"Id": "d3-dc-ev-b-1",
"Name": "D3.DC.Ev.B.1",
"Description": "A normal network activity baseline is established.",
"Attributes": [
{
"ItemId": "d3-dc-ev-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"redshift_cluster_audit_logging",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d3-dc-ev-b-2",
"Name": "D3.DC.Ev.B.2",
"Description": "Mechanisms (e.g., antivirus alerts, log event alerts) are in place to alert management to potential attacks.",
"Attributes": [
{
"ItemId": "d3-dc-ev-b-2",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled"
]
},
{
"Id": "d3-dc-ev-b-3",
"Name": "D3.DC.Ev.B.3",
"Description": "Processes are in place to monitor for the presence of unauthorized users, devices, connections, and software.",
"Attributes": [
{
"ItemId": "d3-dc-ev-b-3",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"guardduty_is_enabled",
"securityhub_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d3-dc-th-b-1",
"Name": "D3.DC.Th.B.1",
"Description": "Independent testing (including penetration testing and vulnerability scanning) is conducted according to the risk assessment for external-facing systems and the internal network.",
"Attributes": [
{
"ItemId": "d3-dc-th-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Detective Controls (DC)",
"Service": "aws"
}
],
"Checks": [
"guardduty_is_enabled",
"securityhub_enabled",
"ssm_managed_compliant_patching"
]
},
{
"Id": "d3-pc-am-b-1",
"Name": "D3.PC.Am.B.1",
"Description": "Employee access is granted to systems and confidential data based on job responsibilities and the principles of least privilege.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_profile_attached",
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_inline_policy_no_administrative_privileges",
"iam_no_root_access_key"
]
},
{
"Id": "d3-pc-am-b-10",
"Name": "D3.PC.Am.B.10",
"Description": "Production and non-production environments are segregated to prevent unauthorized access or changes to information assets. (*N/A if no production environment exists at the institution or the institution's third party.)",
"Attributes": [
{
"ItemId": "d3-pc-am-b-10",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "d3-pc-am-b-12",
"Name": "D3.PC.Am.B.12",
"Description": "All passwords are encrypted in storage and in transit.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-12",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_client_certificate_enabled",
"ec2_ebs_volume_encryption",
"ec2_ebs_default_encryption",
"efs_encryption_at_rest_enabled",
"opensearch_service_domains_encryption_at_rest_enabled",
"opensearch_service_domains_node_to_node_encryption_enabled",
"rds_instance_storage_encrypted",
"redshift_cluster_audit_logging",
"s3_bucket_default_encryption",
"s3_bucket_secure_transport_policy"
]
},
{
"Id": "d3-pc-am-b-13",
"Name": "D3.PC.Am.B.13",
"Description": "Confidential data is encrypted when transmitted across public or untrusted networks (e.g., Internet).",
"Attributes": [
{
"ItemId": "d3-pc-am-b-13",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_client_certificate_enabled",
"elbv2_insecure_ssl_ciphers",
"elb_ssl_listeners",
"s3_bucket_secure_transport_policy"
]
},
{
"Id": "d3-pc-am-b-15",
"Name": "D3.PC.Am.B.15",
"Description": "Remote access to critical systems by employees, contractors, and third parties uses encrypted connections and multifactor authentication.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-15",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_client_certificate_enabled",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"s3_bucket_secure_transport_policy"
]
},
{
"Id": "d3-pc-am-b-16",
"Name": "D3.PC.Am.B.16",
"Description": "Administrative, physical, or technical controls are in place to prevent users without administrative responsibilities from installing unauthorized software.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-16",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_inline_policy_no_administrative_privileges"
]
},
{
"Id": "d3-pc-am-b-2",
"Name": "D3.PC.Am.B.2",
"Description": "Employee access to systems and confidential data provides for separation of duties.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-2",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_inline_policy_no_administrative_privileges"
]
},
{
"Id": "d3-pc-am-b-3",
"Name": "D3.PC.Am.B.3",
"Description": "Elevated privileges (e.g., administrator privileges) are limited and tightly controlled (e.g., assigned to individuals, not shared, and require stronger password controls",
"Attributes": [
{
"ItemId": "d3-pc-am-b-3",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_inline_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_no_root_access_key"
]
},
{
"Id": "d3-pc-am-b-6",
"Name": "D3.PC.Am.B.6",
"Description": "Identification and authentication are required and managed for access to systems, applications, and hardware.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-6",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_inline_policy_no_administrative_privileges",
"iam_root_hardware_mfa_enabled",
"iam_root_mfa_enabled",
"iam_rotate_access_key_90_days",
"iam_user_mfa_enabled_console_access",
"iam_user_mfa_enabled_console_access",
"iam_disable_90_days_credentials"
]
},
{
"Id": "d3-pc-am-b-7",
"Name": "D3.PC.Am.B.7",
"Description": "Access controls include password complexity and limits to password attempts and reuse.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-7",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_password_policy_minimum_length_14",
"iam_password_policy_lowercase",
"iam_password_policy_number",
"iam_password_policy_number",
"iam_password_policy_symbol",
"iam_password_policy_uppercase"
]
},
{
"Id": "d3-pc-am-b-8",
"Name": "D3.PC.Am.B.8",
"Description": "All default passwords and unnecessary default accounts are changed before system implementation.",
"Attributes": [
{
"ItemId": "d3-pc-am-b-8",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"iam_no_root_access_key"
]
},
{
"Id": "d3-pc-im-b-1",
"Name": "D3.PC.Im.B.1",
"Description": "Network perimeter defense tools (e.g., border router and firewall) are used.",
"Attributes": [
{
"ItemId": "d3-pc-im-b-1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"acm_certificates_expiration_check",
"apigateway_waf_acl_attached",
"ec2_ebs_public_snapshot",
"ec2_instance_public_ip",
"elbv2_waf_acl_attached",
"emr_cluster_master_nodes_no_public_ip",
"awslambda_function_not_publicly_accessible",
"awslambda_function_url_public",
"rds_instance_no_public_access",
"rds_snapshots_public_access",
"redshift_cluster_public_access",
"s3_bucket_public_access",
"s3_bucket_policy_public_write_access",
"s3_account_level_public_access_blocks",
"s3_bucket_public_access",
"sagemaker_notebook_instance_without_direct_internet_access_configured",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "d3-pc-im-b-2",
"Name": "D3.PC.Im.B.2",
"Description": "Systems that are accessed from the Internet or by external parties are protected by firewalls or other similar devices.",
"Attributes": [
{
"ItemId": "d3-pc-im-b-2",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_waf_acl_attached",
"elbv2_waf_acl_attached",
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "d3-pc-im-b-3",
"Name": "D3.PC.Im.B.3",
"Description": "All ports are monitored.",
"Attributes": [
{
"ItemId": "d3-pc-im-b-3",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"apigateway_logging_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"elbv2_logging_enabled",
"elb_logging_enabled",
"vpc_flow_logs_enabled"
]
},
{
"Id": "d3-pc-im-b-5",
"Name": "D3.PC.Im.B.5",
"Description": "Systems configurations (for servers, desktops, routers, etc.) follow industry standards and are enforced",
"Attributes": [
{
"ItemId": "d3-pc-im-b-5",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"ec2_instance_managed_by_ssm",
"ssm_managed_compliant_patching",
"ssm_managed_compliant_patching"
]
},
{
"Id": "d3-pc-im-b-6",
"Name": "D3.PC.Im.B.6",
"Description": "Ports, functions, protocols and services are prohibited if no longer needed for business purposes.",
"Attributes": [
{
"ItemId": "d3-pc-im-b-6",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "d3-pc-im-b-7",
"Name": "D3.PC.Im.B.7",
"Description": "Access to make changes to systems configurations (including virtual machines and hypervisors) is controlled and monitored.",
"Attributes": [
{
"ItemId": "d3-pc-im-b-7",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"iam_policy_attached_only_to_group_or_roles",
"iam_aws_attached_policy_no_administrative_privileges",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_inline_policy_no_administrative_privileges"
]
},
{
"Id": "d3-pc-se-b-1",
"Name": "D3.PC.Se.B.1",
"Description": "Developers working for the institution follow secure program coding practices, as part of a system development life cycle (SDLC), that meet industry standards.",
"Attributes": [
{
"ItemId": "d3-pc-se-b1",
"Section": "Cybersecurity Controls (Domain 3)",
"SubSection": "Preventative Controls (PC)",
"Service": "aws"
}
],
"Checks": []
},
{
"Id": "d4-c-co-b-2",
"Name": "D4.C.Co.B.2",
"Description": "The institution ensures that third-party connections are authorized.",
"Attributes": [
{
"ItemId": "d4-c-co-b-2",
"Section": "External Dependency Management (Domain 4)",
"SubSection": "Connections (C)",
"Service": "aws"
}
],
"Checks": [
"ec2_securitygroup_default_restrict_traffic",
"ec2_networkacl_allow_ingress_any_port",
"ec2_securitygroup_allow_ingress_from_internet_to_tcp_port_22",
"ec2_networkacl_allow_ingress_any_port"
]
},
{
"Id": "d5-dr-de-b-1",
"Name": "D5.DR.De.B.1",
"Description": "Alert parameters are set for detecting information security incidents that prompt mitigating actions.",
"Attributes": [
{
"ItemId": "d5-dr-de-b-1",
"Section": "Cyber Incident Management and Resilience (Domain 5)",
"SubSection": "Detection, Response, & Mitigation (DR)",
"Service": "aws"
}
],
"Checks": [
"cloudwatch_changes_to_network_acls_alarm_configured",
"cloudwatch_changes_to_network_gateways_alarm_configured",
"cloudwatch_changes_to_network_route_tables_alarm_configured",
"cloudwatch_changes_to_vpcs_alarm_configured",
"guardduty_is_enabled",
"securityhub_enabled"
]
},
{
"Id": "d5-dr-de-b-2",
"Name": "D5.DR.De.B.2",
"Description": "System performance reports contain information that can be used as a risk indicator to detect information security incidents.",
"Attributes": [
{
"ItemId": "d5-dr-de-b-2",
"Section": "Cyber Incident Management and Resilience (Domain 5)",
"SubSection": "Detection, Response, & Mitigation (DR)",
"Service": "aws"
}
],
"Checks": []
},
{
"Id": "d5-dr-de-b-3",
"Name": "D5.DR.De.B.3",
"Description": "Tools and processes are in place to detect, alert, and trigger the incident response program.",
"Attributes": [
{
"ItemId": "d5-dr-de-b-3",
"Section": "Cyber Incident Management and Resilience (Domain 5)",
"SubSection": "Detection, Response, & Mitigation (DR)",
"Service": "aws"
}
],
"Checks": [
"cloudtrail_multi_region_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_multi_region_enabled",
"cloudtrail_cloudwatch_logging_enabled",
"cloudwatch_changes_to_network_acls_alarm_configured",
"cloudwatch_changes_to_network_gateways_alarm_configured",
"cloudwatch_changes_to_network_route_tables_alarm_configured",
"cloudwatch_changes_to_vpcs_alarm_configured",
"elbv2_logging_enabled",
"elb_logging_enabled",
"guardduty_is_enabled",
"rds_instance_integration_cloudwatch_logs",
"redshift_cluster_audit_logging",
"s3_bucket_server_access_logging_enabled",
"securityhub_enabled"
]
},
{
"Id": "d5-er-es-b-4",
"Name": "D5.ER.Es.B.4",
"Description": "Incidents are classified, logged and tracked.",
"Attributes": [
{
"ItemId": "d5-er-es-b-4",
"Section": "Cyber Incident Management and Resilience (Domain 5)",
"SubSection": "Escalation and Reporting (ER)",
"Service": "aws"
}
],
"Checks": [
"guardduty_no_high_severity_findings"
]
},
{
"Id": "d5-ir-pl-b-6",
"Name": "D5.IR.Pl.B.6",
"Description": "The institution plans to use business continuity, disaster recovery, and data backup programs to recover operations following an incident.",
"Attributes": [
{
"ItemId": "d5-ir-pl-b-6",
"Section": "Cyber Incident Management and Resilience (Domain 5)",
"SubSection": "Incident Resilience Planning & Strategy (IR)",
"Service": "aws"
}
],
"Checks": [
"dynamodb_tables_pitr_enabled",
"elbv2_deletion_protection",
"rds_instance_enhanced_monitoring_enabled",
"rds_instance_backup_enabled",
"rds_instance_deletion_protection",
"rds_instance_multi_az",
"rds_instance_backup_enabled",
"s3_bucket_object_versioning"
]
}
]
}
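
The FFIEC file adds a SubSection attribute on top of Section, so mapping coverage can be summarised per domain and assessment factor, including requirements such as D3.PC.Se.B.1 and D5.DR.De.B.2 that currently have no checks. A short sketch under the same assumptions; the path "ffiec_aws.json" and the function name are placeholders.

# Hedged sketch: per Section/SubSection, count requirements with and without
# mapped checks. The file name is a placeholder.
import json
from collections import defaultdict

def coverage_summary(path):
    with open(path, encoding="utf-8") as handle:
        framework = json.load(handle)
    summary = defaultdict(lambda: {"mapped": 0, "empty": 0})
    for requirement in framework.get("Requirements", []):
        bucket = "mapped" if requirement.get("Checks") else "empty"
        for attribute in requirement.get("Attributes", []):
            summary[(attribute.get("Section"), attribute.get("SubSection"))][bucket] += 1
    return summary

if __name__ == "__main__":
    for (section, subsection), counts in sorted(coverage_summary("ffiec_aws.json").items()):
        print(f"{section} / {subsection}: {counts['mapped']} mapped, {counts['empty']} empty")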

Some files were not shown because too many files have changed in this diff