Compare commits

...

338 Commits

Author SHA1 Message Date
github-actions
551d9d7745 chore(release): 3.13.1 2024-02-20 12:37:18 +00:00
Pepe Fagoaga
753f32b4cb fix(inspector2): Report must have status field (#3419) 2024-02-20 12:58:03 +01:00
dependabot[bot]
bdf3236350 build(deps): bump google-api-python-client from 2.117.0 to 2.118.0 (#3417)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-20 10:51:49 +00:00
dependabot[bot]
d8a505b87c build(deps): bump mkdocs-material from 9.5.9 to 9.5.10 (#3416)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-20 09:52:19 +00:00
dependabot[bot]
caf021a7a6 build(deps): bump slack-sdk from 3.26.2 to 3.27.0 (#3415)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-20 09:50:48 +01:00
dependabot[bot]
3776856a6c build(deps-dev): bump pytest from 8.0.0 to 8.0.1 (#3414)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-20 09:28:44 +01:00
dependabot[bot]
c9f87b907c build(deps-dev): bump moto from 5.0.1 to 5.0.2 (#3413)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-20 08:30:33 +01:00
dependabot[bot]
ae378b6d50 build(deps): bump trufflesecurity/trufflehog from 3.67.5 to 3.67.6 (#3412)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-20 08:30:17 +01:00
Pedro Martín
f7afd7d1d6 feat(azure): Add new checks related to PostgreSQL service (#3409) 2024-02-19 11:33:59 +00:00
Rubén De la Torre Vico
c92a99baaf fix(azure): Typo in appinsights service (#3407)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-02-19 11:05:28 +00:00
Pepe Fagoaga
3c82d89aa4 fix(labeler): Work on forks too (#3410) 2024-02-19 11:04:37 +00:00
Nacho Rivera
69aedb8490 chore(regions_update): Changes in regions for AWS services. (#3406)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-02-16 10:45:17 +01:00
Rubén De la Torre Vico
af00c5382b feat(azure): checks related with MySQL service (#3385)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2024-02-16 10:40:41 +01:00
Pepe Fagoaga
8e93493d2b test(aws): Add default Boto3 credentials (#3404) 2024-02-16 09:13:51 +01:00
Pepe Fagoaga
ac439060a3 fix(labeler): Add right path for testing (#3405) 2024-02-16 09:13:25 +01:00
Pepe Fagoaga
d6f28be8f2 chore(pull-request): Add automatic labeler (#3398) 2024-02-15 14:26:41 +01:00
Nacho Rivera
d3946840de chore(regions_update): Changes in regions for AWS services. (#3401)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-02-15 14:25:37 +01:00
Pedro Martín
355f589e5a feat(azure): New Azure checks related to CosmosDB (#3386)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2024-02-13 13:53:36 +01:00
Rubén De la Torre Vico
4740a7b930 feat(azure): check related with App Insights service (#3395) 2024-02-13 13:27:12 +01:00
Hugo966
cc71249e21 fix(storage): update metadata with CIS 2.0 in storage_default_network_access_rule_is_denied (#3387) 2024-02-13 12:05:39 +01:00
dependabot[bot]
ccd9e27823 build(deps): bump google-api-python-client from 2.116.0 to 2.117.0 (#3391)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-13 10:52:21 +01:00
Sergio Garcia
9f16e4dc81 fix(backup): handle if last_attempted_execution_date is None (#3394) 2024-02-13 10:25:49 +01:00
dependabot[bot]
eca7f7be61 build(deps): bump mkdocs-material from 9.5.6 to 9.5.9 (#3392)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-13 10:25:35 +01:00
dependabot[bot]
409675e0c0 build(deps-dev): bump bandit from 1.7.6 to 1.7.7 (#3390)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-13 09:50:44 +01:00
dependabot[bot]
f9c839bfdc build(deps): bump trufflesecurity/trufflehog from 3.67.2 to 3.67.5 (#3393)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-13 09:24:11 +01:00
dependabot[bot]
47e212ee17 build(deps-dev): bump black from 24.1.1 to 24.2.0 (#3389)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-13 09:23:55 +01:00
Nacho Rivera
042976fac3 chore(regions_update): Changes in regions for AWS services. (#3384)
Co-authored-by: sergargar <38561120+sergargar@users.noreply.github.com>
2024-02-09 13:44:26 +01:00
Sergio Garcia
5b45bbb1a5 chore(list): list compliance and categories sorted (#3381) 2024-02-08 16:54:47 +01:00
Sergio Garcia
9bb702076a chore(release): update Prowler Version to 3.13.0 (#3380)
Co-authored-by: github-actions <noreply@github.com>
2024-02-08 15:09:13 +01:00
Sergio Garcia
8ed97810a8 feat(cis): add new CIS AWS v3.0.0 (#3379)
Co-authored-by: pedrooot <pedromarting3@gmail.com>
2024-02-08 13:31:12 +01:00
Sergio Garcia
c5af9605ee fix(alias): allow multiple check aliases (#3378) 2024-02-08 12:21:42 +01:00
Iain Wallace
f5a18dce56 fix(cis): update CIS AWS v2.0 Section 2.1 refs (#3375)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2024-02-08 12:09:49 +01:00
Sergio Garcia
d14d8f5e02 chore(regions_update): Changes in regions for AWS services. (#3377) 2024-02-08 10:42:19 +01:00
Pepe Fagoaga
eadc66f53b fix(allowlist): Handle tags and resources (#3376) 2024-02-08 10:06:02 +01:00
Sergio Garcia
5f946d08cb chore(regions_update): Changes in regions for AWS services. (#3370) 2024-02-07 17:57:29 +01:00
Rubén De la Torre Vico
3f7c37abb9 feat(defender): New Terraform URL for metadata checks (#3374) 2024-02-07 16:02:56 +01:00
Pedro Martín
b60b48b948 feat(Azure): Add 4 new checks related to SQLServer and Vulnerability Assessment (#3372) 2024-02-07 16:01:52 +01:00
Sergio Garcia
68ecf939d9 feat(python): support Python 3.12 (#3371) 2024-02-07 15:16:02 +01:00
Rubén De la Torre Vico
a50d093679 fix(defender): Manage 404 exception for "default" security contacts (#3373) 2024-02-07 13:38:20 +01:00
Rubén De la Torre Vico
740e829e4f feat(azure): Defender check defender_ensure_iot_hub_defender_is_on (#3367) 2024-02-07 12:46:02 +01:00
Pedro Martín
f7051351ec fix(azure): Fix check sqlserver_auditing_retention_90_days (#3365) 2024-02-06 17:17:10 +01:00
dependabot[bot]
a1018ad683 build(deps): bump aiohttp from 3.9.1 to 3.9.2 (#3366)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-06 13:52:05 +01:00
dependabot[bot]
a912189e51 build(deps): bump msgraph-core from 0.2.2 to 1.0.0 (#3309)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2024-02-06 13:35:22 +01:00
Sergio Garcia
7298f64e5c fix(s3): add s3:Get* case to s3_bucket_policy_public_write_access (#3364) 2024-02-06 13:04:55 +01:00
Rubén De la Torre Vico
fcf902eb1f feat(azure): Defender checks related to defender settings (#3347)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2024-02-06 12:23:36 +01:00
Sergio Garcia
89c71a068b chore(pre-commit): remove pytest from pre-commit (#3363) 2024-02-06 11:22:00 +01:00
dependabot[bot]
8946145070 build(deps-dev): bump coverage from 7.4.0 to 7.4.1 (#3357)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-06 10:50:20 +01:00
Sergio Garcia
db15c0de9e fix(rds): verify SGs in rds_instance_no_public_access (#3341)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2024-02-06 10:49:58 +01:00
dependabot[bot]
643a918034 build(deps-dev): bump moto from 5.0.0 to 5.0.1 (#3358)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-06 10:33:51 +01:00
Sergio Garcia
f21dcd8122 chore(inspector): refactor inspector2_findings_exist check into two (#3338) 2024-02-06 10:32:19 +01:00
dependabot[bot]
ac44d4a27b build(deps-dev): bump black from 22.12.0 to 24.1.1 (#3356)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2024-02-06 10:17:01 +01:00
dependabot[bot]
9c898c34f6 build(deps): bump cryptography from 41.0.6 to 42.0.0 (#3362)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-06 10:05:34 +01:00
dependabot[bot]
c0e0ddbc1c build(deps): bump trufflesecurity/trufflehog from 3.66.1 to 3.67.2 (#3361)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-06 10:05:14 +01:00
dependabot[bot]
6c756ea52f build(deps): bump codecov/codecov-action from 3 to 4 (#3360)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-06 10:04:56 +01:00
dependabot[bot]
0a413b6fd2 build(deps): bump peter-evans/create-pull-request from 5 to 6 (#3359)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-06 09:58:55 +01:00
dependabot[bot]
7ac7d9c9a8 build(deps): bump google-api-python-client from 2.113.0 to 2.116.0 (#3355)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-06 09:58:28 +01:00
Toni de la Fuente
7322d0bd30 chore(docs): Update README.md (#3353)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2024-02-05 17:52:46 +01:00
Pedro Martín
469cc749d8 feat(readme): Update readme with new numbers for Prowler checks (#3354) 2024-02-05 17:49:43 +01:00
Toni de la Fuente
e91a694b46 chore(docs): update CODE_OF_CONDUCT.md (#3352) 2024-02-05 17:27:12 +01:00
Pedro Martín
4587a9f651 refactor(azure): Change class names from azure services and fix typing error (#3350) 2024-02-05 15:43:04 +01:00
Rubén De la Torre Vico
8c51094df1 fix(storage) Manage None type manage for key_expiration_period_in_days (#3351) 2024-02-05 15:42:03 +01:00
Rubén De la Torre Vico
c795d76fe9 feat(azure): Defender checks related to security contacts and notifications (#3344) 2024-02-05 13:51:56 +01:00
Pepe Fagoaga
c6e8a0b6d3 fix(organizations): Handle non existent policy (#3319) 2024-02-05 12:37:08 +01:00
dependabot[bot]
b23be4164f build(deps-dev): bump moto from 4.2.13 to 5.0.0 (#3329)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2024-02-05 12:06:16 +01:00
Pedro Martín
de77f3ff13 feat(azure): new check sqlserver_vulnerability_assessment_enabled (#3349) 2024-02-05 11:39:05 +01:00
Pedro Martín
7c0ff1ff6a feat(azure): New Azure SQLServer related check sqlserver_auditing_retention_90_days (#3345) 2024-02-05 10:58:44 +01:00
Sergio Garcia
888cb92987 chore(regions_update): Changes in regions for AWS services. (#3342)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2024-02-05 09:37:02 +01:00
Sergio Garcia
9a038f7bed chore(regions_update): Changes in regions for AWS services. (#3348)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2024-02-05 09:36:48 +01:00
Sergio Garcia
b98f245bf2 chore(regions_update): Changes in regions for AWS services. (#3339)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-02-05 09:20:26 +01:00
Sergio Garcia
e59b5caaf9 chore(regions_update): Changes in regions for AWS services. (#3333)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-02-05 09:20:09 +01:00
Sergio Garcia
5a602d7adb chore(regions_update): Changes in regions for AWS services. (#3325)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-02-05 09:18:49 +01:00
Pedro Martín
14aa7a3f67 feat(azure): SQLServer checks related to TDE encryption (#3343) 2024-02-02 11:35:18 +01:00
Pedro Martín
6e991107e7 feat(azure): New check storage_ensure_soft_delete_is_enabled (#3334) 2024-01-31 13:29:20 +01:00
Rubén De la Torre Vico
622bce9c52 feat(azure): Add check defender_ensure_system_updates_are_applied and defender_auto_provisioning_vulnerabilty_assessments_machines_on (#3327) 2024-01-31 12:29:45 +01:00
Pedro Martín
48587bd034 feat(compliance): account security onboarding compliance framework (#3286)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2024-01-31 10:18:31 +01:00
Rubén De la Torre Vico
19d6352950 fix(GuardDuty): fix class name (#3337) 2024-01-30 14:43:55 +01:00
dependabot[bot]
2c4b5c99ce build(deps): bump mkdocs-material from 9.5.4 to 9.5.6 (#3330)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-30 10:59:15 +01:00
dependabot[bot]
15a194c9b0 build(deps-dev): bump pytest from 7.4.4 to 8.0.0 (#3331)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-30 10:15:07 +01:00
dependabot[bot]
e94e3cead9 build(deps): bump trufflesecurity/trufflehog from 3.63.11 to 3.66.1 (#3332)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-30 10:14:51 +01:00
dependabot[bot]
ee2ed92fb5 build(deps-dev): bump vulture from 2.10 to 2.11 (#3328)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-30 09:46:17 +01:00
Pedro Martín
db4579435a feat(azure): add new check storage_ensure_private_endpoints_in_storage_accounts (#3326)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2024-01-29 13:55:19 +01:00
Pedro Martín
ae1ab1d957 feat(azure): Add new check storage_key_rotation_90_days (#3323) 2024-01-29 12:57:19 +01:00
Rubén De la Torre Vico
a8edd03e65 feat(azure): Add check defender_auto_provisioning_log_analytics_agent_vms_on (#3322)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-29 11:02:49 +01:00
Pepe Fagoaga
8768b4cc31 chore(actions): Add AWS tag to the update regions bot (#3321) 2024-01-29 10:15:16 +01:00
Pedro Martín
cd9c192208 chore(azure): Remove all unnecessary init methods in @dataclass (#3324) 2024-01-26 13:15:42 +01:00
Sergio Garcia
dcd97e7d26 chore(regions_update): Changes in regions for AWS services. (#3320)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-01-26 10:50:14 +01:00
Pedro Martín
8a6ae68b9a feat(azure): Add new check "iam_custom_role_permits_administering_resource_locks" (#3317)
Co-authored-by: Pepe Fagoaga <pepe@prowler.com>
2024-01-25 14:29:29 +01:00
Sergio Garcia
dff3e72e7d chore(regions_update): Changes in regions for AWS services. (#3318)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-01-25 14:14:27 +01:00
Sergio Garcia
f0ac440146 chore(regions_update): Changes in regions for AWS services. (#3316)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-01-24 11:57:11 +01:00
dependabot[bot]
7d7e5f4e1d build(deps): bump azure-mgmt-security from 5.0.0 to 6.0.0 (#3312)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 13:55:28 +01:00
Antoine Ansari
a21dd4a2ed feat(quick-inventory): custom output file in quick inventory (#3306)
Co-authored-by: antoinea <antoinea@padok.fr>
2024-01-23 10:05:45 +01:00
dependabot[bot]
7f4e5bf435 build(deps-dev): bump safety from 2.3.5 to 3.0.1 (#3313)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 09:13:19 +01:00
dependabot[bot]
dad590f070 build(deps): bump pydantic from 1.10.13 to 1.10.14 (#3311)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 08:57:26 +01:00
dependabot[bot]
f22b81fe3b build(deps): bump trufflesecurity/trufflehog from 3.63.9 to 3.63.11 (#3307)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 08:40:55 +01:00
dependabot[bot]
68c1acbc7a build(deps): bump tj-actions/changed-files from 41 to 42 (#3308)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 08:40:37 +01:00
dependabot[bot]
e5412404ca build(deps): bump jsonschema from 4.20.0 to 4.21.1 (#3310)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-23 08:40:13 +01:00
Sergio Garcia
5e733f6217 chore(regions_update): Changes in regions for AWS services. (#3303)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-01-22 09:23:39 +01:00
Pepe Fagoaga
c830e4e399 docs(security-hub): Add integration steps and images (#3304) 2024-01-22 09:13:24 +01:00
Pepe Fagoaga
c3ecd2b3e5 docs(security-hub): improve documentation and clarify steps (#3301) 2024-01-18 13:55:07 +01:00
Sergio Garcia
fd4d2db467 fix(BadRequest): add BadRequest exception to WellArchitected (#3300) 2024-01-18 10:42:27 +01:00
Sergio Garcia
49b76ab050 chore(docs): update documentation (#3297) 2024-01-18 10:40:06 +01:00
Sergio Garcia
c53f931d09 fix(NoSuchEntity): add NoSuchEntity exception to IAM (#3299) 2024-01-18 10:39:09 +01:00
Sergio Garcia
f344dbbc07 chore(regions_update): Changes in regions for AWS services. (#3298)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-01-18 10:35:23 +01:00
Esteban Mendoza
c617c10ffa fix(acm): adding more details on remaining expiration days (#3293)
Co-authored-by: Esteban <mendoza@versprite.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2024-01-17 09:42:19 +01:00
Sergio Garcia
4a15625bf9 chore(compliance): make SocType attribute general (#3287) 2024-01-16 13:41:08 +01:00
dependabot[bot]
c5def6d736 build(deps): bump mkdocs-material from 9.5.3 to 9.5.4 (#3285)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-16 08:07:11 +01:00
dependabot[bot]
b232b675a7 build(deps): bump actions/checkout from 3 to 4 (#3284)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-16 08:06:54 +01:00
dependabot[bot]
6c03683c20 build(deps): bump peter-evans/create-pull-request from 4 to 5 (#3283)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-16 08:06:37 +01:00
dependabot[bot]
2da57db5a8 build(deps): bump docker/login-action from 2 to 3 (#3282)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-16 08:05:59 +01:00
dependabot[bot]
c7b794c1c4 build(deps): bump docker/build-push-action from 2 to 5 (#3281)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-16 08:05:44 +01:00
dependabot[bot]
5154cec7d2 build(deps): bump slack-sdk from 3.26.1 to 3.26.2 (#3280)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-15 11:44:57 +01:00
dependabot[bot]
e4cbb3c90e build(deps): bump actions/setup-python from 2 to 5 (#3277)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-15 11:36:01 +01:00
dependabot[bot]
17f5cbeac2 build(deps): bump docker/setup-buildx-action from 2 to 3 (#3276)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-15 11:30:25 +01:00
dependabot[bot]
90a4924508 build(deps): bump github/codeql-action from 2 to 3 (#3279)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-15 11:29:58 +01:00
dependabot[bot]
d499053016 build(deps): bump aws-actions/configure-aws-credentials from 1 to 4 (#3278)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-15 11:29:39 +01:00
dependabot[bot]
d343a67d6a build(deps): bump trufflesecurity/trufflehog from 3.4.4 to 3.63.9 (#3275)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-15 11:29:30 +01:00
Pepe Fagoaga
8435ab48b0 chore(dependabot): Run for GHA (#3274) 2024-01-15 11:19:44 +01:00
Sergio Garcia
27edf0f55a chore(regions_update): Changes in regions for AWS services. (#3273)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-01-15 10:53:55 +01:00
Sergio Garcia
3d00554332 chore(README): update syntax of supported Python versions (#3271) 2024-01-12 12:59:56 +01:00
Toni de la Fuente
2631709abf docs(README): Update Kubernetes development status and Python supported versions (#3270) 2024-01-12 12:17:06 +01:00
Sergio Garcia
4b0102b309 chore(release): update Prowler Version to 3.12.1 (#3269)
Co-authored-by: github-actions <noreply@github.com>
2024-01-12 11:52:02 +01:00
Nacho Rivera
b9a24e0338 fix(fms): handle list compliance status error (#3259) 2024-01-12 11:00:07 +01:00
Sergio Garcia
f127d4a8b1 chore(regions_update): Changes in regions for AWS services. (#3268)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-01-12 10:15:16 +01:00
Pepe Fagoaga
73780682a1 fix(allowlist): Handle empty exceptions (#3266) 2024-01-12 09:54:03 +01:00
dependabot[bot]
9a1c034a51 build(deps): bump jinja2 from 3.1.2 to 3.1.3 (#3267)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-12 08:31:37 +01:00
Sergio Garcia
94179f27ec chore(readme): remove deprecated library name (#3251) 2024-01-11 17:55:44 +01:00
Pepe Fagoaga
6797b5a93d fix(apigatewayv2_api_access_logging_enabled): Finding ID should be unique (#3263) 2024-01-11 15:15:48 +01:00
Nacho Rivera
874a131ec9 chore(precommit): set trufflehog as command (#3262) 2024-01-11 11:47:19 +01:00
Nacho Rivera
641727ee0e fix(rds): handle api call error response (#3258) 2024-01-11 09:50:44 +01:00
dependabot[bot]
f50075257c build(deps-dev): bump gitpython from 3.1.37 to 3.1.41 (#3257) 2024-01-11 09:50:16 +01:00
Sergio Garcia
4d1de8f75c chore(regions_update): Changes in regions for AWS services. (#3256)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-01-10 10:20:50 +01:00
Pepe Fagoaga
b76d0153eb chore(s3): Update log not to duplicate it (#3255) 2024-01-10 10:00:02 +01:00
Sergio Garcia
f82789b99f chore(regions_update): Changes in regions for AWS services. (#3249)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-01-09 10:31:05 +01:00
dependabot[bot]
89c789ce10 build(deps-dev): bump flake8 from 6.1.0 to 7.0.0 (#3246)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-09 10:02:54 +01:00
Pepe Fagoaga
6dba54b028 docs: Add Codecov badge (#3248) 2024-01-09 09:54:30 +01:00
dependabot[bot]
d852cb4ed6 build(deps): bump google-api-python-client from 2.111.0 to 2.113.0 (#3245)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-09 09:44:47 +01:00
dependabot[bot]
4c666fa1fe build(deps-dev): bump moto from 4.2.12 to 4.2.13 (#3244)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-09 09:01:42 +01:00
Sergio Garcia
98adc1872d chore(release): update Prowler Version to 3.12.0 (#3242)
Co-authored-by: github-actions <noreply@github.com>
2024-01-08 15:05:17 +01:00
Sergio Garcia
1df84ef6e4 chore(role arguments): enhance role arguments validation (#3240) 2024-01-08 14:41:52 +01:00
Sergio Garcia
80b88a9365 chore(exception): handle error in describing regions (#3241) 2024-01-08 14:16:27 +01:00
Fennerr
558b7a54c7 feat(aws): Added AWS role session name parameter (#3234)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2024-01-08 12:49:13 +01:00
Sergio Garcia
9522d0c733 fix(organizations_scp_check_deny_regions): enhance check logic (#3239) 2024-01-08 12:20:39 +01:00
dependabot[bot]
396d6e5c0e build(deps-dev): bump coverage from 7.3.4 to 7.4.0 (#3233)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-03 18:21:12 +01:00
Sergio Garcia
a69d7471b3 chore(regions_update): Changes in regions for AWS services. (#3236)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2024-01-03 13:34:14 +01:00
dependabot[bot]
eb56e1417c build(deps-dev): bump pytest from 7.4.3 to 7.4.4 (#3232)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-03 13:33:48 +01:00
dependabot[bot]
3d032a8efe build(deps): bump tj-actions/changed-files from 39 to 41 in /.github/workflows (#3235)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-03 13:30:21 +01:00
Sergio Garcia
d712470047 chore(regions_update): Changes in regions for AWS services. (#3231) 2023-12-29 10:56:24 +01:00
Pepe Fagoaga
423f96b95f fix(fms): Handle PolicyComplianceStatusList key error (#3230)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-12-28 18:25:21 +01:00
Sergio Garcia
d1bd097079 chore(regions_update): Changes in regions for AWS services. (#3228)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-28 10:24:10 +01:00
Evgenii
ceabe8ecba chore: changed concatenation of strings to f-strings to improve readability (#3227) 2023-12-28 08:51:00 +01:00
Pepe Fagoaga
0fff0568fa fix(allowlist): Analyse single and multi account allowlist if present (#3210)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-12-27 11:02:31 +01:00
dependabot[bot]
10e822238e build(deps): bump google-api-python-client from 2.110.0 to 2.111.0 (#3224)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-26 10:26:13 +01:00
dependabot[bot]
1cf1c827f1 build(deps-dev): bump freezegun from 1.3.1 to 1.4.0 (#3222)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-26 09:33:12 +01:00
dependabot[bot]
5bada440fa build(deps-dev): bump coverage from 7.3.3 to 7.3.4 (#3223)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-26 09:09:29 +01:00
Sergio Garcia
04bb95e044 chore(ENS): add missing ENS mappings (#3218) 2023-12-26 09:08:54 +01:00
dependabot[bot]
819140bc59 build(deps): bump shodan from 1.30.1 to 1.31.0 (#3221)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-26 08:54:01 +01:00
Sergio Garcia
d490bcc955 chore(regions_update): Changes in regions for AWS services. (#3219)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-26 08:49:41 +01:00
dependabot[bot]
cb94960178 build(deps): bump mkdocs-material from 9.5.2 to 9.5.3 (#3220)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-26 08:39:31 +01:00
Sergio Garcia
7361c10cb9 fix(s3): handle NoSuchBucketPolicy error (#3217) 2023-12-22 10:57:55 +01:00
Sergio Garcia
b47408e94e fix(trustedadvisor): solve trustedadvisor check metadata (#3216) 2023-12-22 10:56:21 +01:00
Sergio Garcia
806a3590aa chore(regions_update): Changes in regions for AWS services. (#3215)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-22 10:49:04 +01:00
Sergio Garcia
e953fe021d chore(regions_update): Changes in regions for AWS services. (#3214) 2023-12-21 11:34:33 +01:00
Sergio Garcia
e570d94a6e chore(regions_update): Changes in regions for AWS services. (#3213)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-20 14:08:52 +01:00
Nacho Rivera
78505cb0a8 chore(sqs_...not_publicly_accessible): less restrictive condition test (#3211)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-19 16:53:19 +01:00
dependabot[bot]
f8d77d9a30 build(deps): bump google-auth-httplib2 from 0.1.1 to 0.2.0 (#3207)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 13:05:30 +01:00
Sergio Garcia
1a4887f028 chore(regions_update): Changes in regions for AWS services. (#3209)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-19 12:39:19 +01:00
dependabot[bot]
71042b5919 build(deps): bump mkdocs-material from 9.4.14 to 9.5.2 (#3206)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 12:39:10 +01:00
dependabot[bot]
435976800a build(deps-dev): bump moto from 4.2.11 to 4.2.12 (#3205)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 10:14:04 +01:00
dependabot[bot]
18f4c7205b build(deps-dev): bump coverage from 7.3.2 to 7.3.3 (#3204)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 08:55:14 +01:00
dependabot[bot]
06eeefb8bf build(deps-dev): bump pylint from 3.0.2 to 3.0.3 (#3203)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 08:30:45 +01:00
Sergio Garcia
1737d7cf42 fix(gcp): fix UnknownApiNameOrVersion error (#3202) 2023-12-18 14:32:33 +01:00
dependabot[bot]
cd03fa6d46 build(deps): bump jsonschema from 4.18.0 to 4.20.0 (#3057)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-18 13:00:43 +01:00
Sergio Garcia
a10a73962e chore(regions_update): Changes in regions for AWS services. (#3200) 2023-12-18 07:21:18 +01:00
Pepe Fagoaga
99d6fee7a0 fix(iam): Handle NoSuchEntity in list_group_policies (#3197) 2023-12-15 14:04:59 +01:00
Nacho Rivera
c8831f0f50 chore(s3 bucket input validation): validates input bucket (#3198) 2023-12-15 13:37:41 +01:00
Pepe Fagoaga
fdeb523581 feat(securityhub): Send only FAILs but storing all in the output files (#3195) 2023-12-15 13:31:55 +01:00
Sergio Garcia
9a868464ee chore(regions_update): Changes in regions for AWS services. (#3196)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-15 10:15:54 +01:00
Alexandros Gidarakos
051ec75e01 docs(cloudshell): Update AWS CloudShell installation steps (#3192)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-14 08:35:23 +01:00
Alexandros Gidarakos
fc3909491a docs(cloudshell): Add missing steps to workaround (#3191) 2023-12-14 08:18:24 +01:00
Pepe Fagoaga
2437fe270c docs(cloudshell): Add workaround to clone from github (#3190) 2023-12-13 17:19:30 +01:00
Nacho Rivera
c937b193d0 fix(apigw_restapi_auth check): add method auth testing (#3183) 2023-12-13 16:20:09 +01:00
Fennerr
8b5c995486 fix(lambda): memory leakage with lambda function code (#3167)
Co-authored-by: Justin Moorcroft <justin.moorcroft@mwrcybersec.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-13 15:15:13 +01:00
Sergio Garcia
4410f2a582 chore(regions_update): Changes in regions for AWS services. (#3189)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-13 10:32:10 +01:00
Fennerr
bbb816868e docs(aws): Added debug information to inspect retries in API calls (#3186)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-12 14:07:33 +01:00
Fennerr
2441cca810 fix(threading): Improved threading for the AWS Service (#3175)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-12 12:50:26 +01:00
Sergio Garcia
3c3dfb380b fix(gcp): improve logging messages (#3185) 2023-12-12 12:38:50 +01:00
Nacho Rivera
0f165f0bf0 chore(actions): add prowler 4.0 branch to actions (#3184) 2023-12-12 11:40:01 +01:00
Sergio Garcia
7fcff548eb chore(regions_update): Changes in regions for AWS services. (#3182)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-12-12 10:28:01 +01:00
dependabot[bot]
8fa7b9ba00 build(deps-dev): bump docker from 6.1.3 to 7.0.0 (#3180)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 10:27:49 +01:00
dependabot[bot]
b101e15985 build(deps-dev): bump bandit from 1.7.5 to 1.7.6 (#3179)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 09:53:03 +01:00
dependabot[bot]
b4e412a37f build(deps-dev): bump pylint from 3.0.2 to 3.0.3 (#3181)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 09:33:27 +01:00
dependabot[bot]
ac0e2bbdb2 build(deps): bump google-api-python-client from 2.109.0 to 2.110.0 (#3178)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-12 08:07:30 +01:00
Sergio Garcia
ba16330e20 feat(cognito): add Amazon Cognito service (#3060) 2023-12-11 14:35:00 +01:00
Pepe Fagoaga
c9cb9774c6 fix(aws_regions): Get enabled regions (#3095) 2023-12-11 14:09:39 +01:00
Pepe Fagoaga
7b5b14dbd0 refactor(cloudwatch): simplify logic (#3172) 2023-12-11 11:23:24 +01:00
Fennerr
bd13973cf5 docs(parallel-execution): Combining the output files (#3096) 2023-12-11 11:11:53 +01:00
Fennerr
a7f8656e89 chore(elb): Improve status in elbv2_insecure_ssl_ciphers (#3169) 2023-12-11 11:04:37 +01:00
Sergio Garcia
1be52fab06 chore(ens): do not apply recomendation type to score (#3058) 2023-12-11 10:53:26 +01:00
Pepe Fagoaga
c9baff1a7f fix(generate_regional_clients): Global is not needed anymore (#3162) 2023-12-11 10:50:15 +01:00
Pepe Fagoaga
d1bc68086d fix(access-analyzer): Handle ValidationException (#3165) 2023-12-11 09:40:12 +01:00
Pepe Fagoaga
44a4c0670b fix(cloudtrail): Handle UnsupportedOperationException (#3166) 2023-12-11 09:38:23 +01:00
Pepe Fagoaga
4785056740 fix(elasticache): Handle CacheClusterNotFound (#3174) 2023-12-11 09:37:01 +01:00
Pepe Fagoaga
694aa448a4 fix(s3): Handle NoSuchBucket in the service (#3173) 2023-12-11 09:36:26 +01:00
Sergio Garcia
ee215b1ced chore(regions_update): Changes in regions for AWS services. (#3168) 2023-12-11 08:04:48 +01:00
Nacho Rivera
018e87884c test(audit_info): missing workspace test (#3164) 2023-12-05 16:05:39 +01:00
Nacho Rivera
a81cbbc325 test(audit_info): refactor iam (#3163) 2023-12-05 15:59:53 +01:00
Pepe Fagoaga
3962c9d816 test(audit_info): refactor acm, account and access analyzer (#3097) 2023-12-05 15:09:14 +01:00
Pepe Fagoaga
e187875da5 test(audit_info): refactor guardduty (#3160) 2023-12-05 15:00:46 +01:00
Pepe Fagoaga
f0d1a799a2 test(audit_info): refactor cloudtrail (#3111) 2023-12-05 14:59:42 +01:00
Pepe Fagoaga
5452d535d7 test(audit_info): refactor ec2 (#3132) 2023-12-05 14:58:58 +01:00
Pepe Fagoaga
7a776532a8 test(aws_account_id): refactor (#3161) 2023-12-05 14:58:42 +01:00
Nacho Rivera
e704d57957 test(audit_info): refactor inspector2 (#3159) 2023-12-05 14:19:40 +01:00
Pepe Fagoaga
c9a6eb5a1a test(audit_info): refactor globalaccelerator (#3154) 2023-12-05 14:13:02 +01:00
Pepe Fagoaga
c071812160 test(audit_info): refactor glue (#3158) 2023-12-05 14:12:44 +01:00
Pepe Fagoaga
3f95ad9ada test(audit_info): refactor glacier (#3153) 2023-12-05 14:09:04 +01:00
Nacho Rivera
250f59c9f5 test(audit_info): refactor kms (#3157) 2023-12-05 14:05:56 +01:00
Nacho Rivera
c17bbea2c7 test(audit_info): refactor macie (#3156) 2023-12-05 13:59:08 +01:00
Nacho Rivera
0262f8757a test(audit_info): refactor neptune (#3155) 2023-12-05 13:48:32 +01:00
Nacho Rivera
dbc2c481dc test(audit_info): refactor networkfirewall (#3152) 2023-12-05 13:20:52 +01:00
Pepe Fagoaga
e432c39eec test(audit_info): refactor fms (#3151) 2023-12-05 13:18:28 +01:00
Pepe Fagoaga
7383ae4f9c test(audit_info): refactor elbv2 (#3148) 2023-12-05 13:18:06 +01:00
Pepe Fagoaga
d217e33678 test(audit_info): refactor emr (#3149) 2023-12-05 13:17:42 +01:00
Nacho Rivera
d1daceff91 test(audit_info): refactor opensearch (#3150) 2023-12-05 13:17:28 +01:00
Nacho Rivera
dbbd556830 test(audit_info): refactor organizations (#3147) 2023-12-05 12:59:22 +01:00
Nacho Rivera
d483f1d90f test(audit_info): refactor rds (#3146) 2023-12-05 12:51:22 +01:00
Nacho Rivera
80684a998f test(audit_info): refactor redshift (#3144) 2023-12-05 12:42:08 +01:00
Pepe Fagoaga
0c4f0fde48 test(audit_info): refactor elb (#3145) 2023-12-05 12:41:37 +01:00
Pepe Fagoaga
071115cd52 test(audit_info): refactor elasticache (#3142) 2023-12-05 12:41:11 +01:00
Nacho Rivera
9136a755fe test(audit_info): refactor resourceexplorer2 (#3143) 2023-12-05 12:28:38 +01:00
Nacho Rivera
6ff864fc04 test(audit_info): refactor route53 (#3141) 2023-12-05 12:28:12 +01:00
Nacho Rivera
828a6f4696 test(audit_info): refactor s3 (#3140) 2023-12-05 12:13:21 +01:00
Pepe Fagoaga
417aa550a6 test(audit_info): refactor eks (#3139) 2023-12-05 12:07:41 +01:00
Pepe Fagoaga
78ffc2e238 test(audit_info): refactor efs (#3138) 2023-12-05 12:07:21 +01:00
Pepe Fagoaga
c9f22db1b5 test(audit_info): refactor ecs (#3137) 2023-12-05 12:07:01 +01:00
Pepe Fagoaga
41da560b64 test(audit_info): refactor ecr (#3136) 2023-12-05 12:06:42 +01:00
Nacho Rivera
b49e0b95f7 test(audit_info): refactor shield (#3131) 2023-12-05 11:40:42 +01:00
Nacho Rivera
50ef2729e6 test(audit_info): refactor sagemaker (#3135) 2023-12-05 11:40:19 +01:00
Nacho Rivera
6a901bb7de test(audit_info): refactor secretsmanager (#3134) 2023-12-05 11:33:54 +01:00
Nacho Rivera
f0da63c850 test(audit_info): refactor shub (#3133) 2023-12-05 11:33:34 +01:00
Nacho Rivera
b861c1dd3c test(audit_info): refactor sns (#3128) 2023-12-05 11:05:27 +01:00
Nacho Rivera
45faa2e9e8 test(audit_info): refactor sqs (#3130) 2023-12-05 11:05:05 +01:00
Pepe Fagoaga
b2e1eed684 test(audit_info): refactor dynamodb (#3129) 2023-12-05 10:59:26 +01:00
Pepe Fagoaga
4018221da6 test(audit_info): refactor drs (#3127) 2023-12-05 10:59:09 +01:00
Pepe Fagoaga
28ec3886f9 test(audit_info): refactor documentdb (#3126) 2023-12-05 10:58:48 +01:00
Pepe Fagoaga
ed323f4602 test(audit_info): refactor dlm (#3124) 2023-12-05 10:58:31 +01:00
Pepe Fagoaga
f72d360384 test(audit_info): refactor directoryservice (#3123) 2023-12-05 10:58:09 +01:00
Nacho Rivera
682bba452b test(audit_info): refactor ssm (#3125) 2023-12-05 10:45:15 +01:00
Nacho Rivera
e2ce5ae2af test(audit_info): refactor ssmincidents (#3122) 2023-12-05 10:38:09 +01:00
Nacho Rivera
039a0da69e tests(audit_info): refactor trustedadvisor (#3120) 2023-12-05 10:30:54 +01:00
Pepe Fagoaga
c9ad12b87e test(audit_info): refactor config (#3121) 2023-12-05 10:30:13 +01:00
Pepe Fagoaga
094be2e2e6 test(audit_info): refactor codeartifact (#3117) 2023-12-05 10:17:08 +01:00
Pepe Fagoaga
1b3029d833 test(audit_info): refactor codebuild (#3118) 2023-12-05 10:17:02 +01:00
Nacho Rivera
d00d5e863b tests(audit_info): refactor vpc (#3119) 2023-12-05 10:16:51 +01:00
Pepe Fagoaga
3d19e89710 test(audit_info): refactor cloudwatch (#3116) 2023-12-05 10:04:45 +01:00
Pepe Fagoaga
247cd6fc44 test(audit_info): refactor cloudfront (#3110) 2023-12-05 10:04:07 +01:00
Pepe Fagoaga
ba244c887f test(audit_info): refactor cloudformation (#3105) 2023-12-05 10:03:50 +01:00
Pepe Fagoaga
f77d92492a test(audit_info): refactor backup (#3104) 2023-12-05 10:03:32 +01:00
Pepe Fagoaga
1b85af95c0 test(audit_info): refactor athena (#3101) 2023-12-05 10:03:11 +01:00
Pepe Fagoaga
9236f5d058 test(audit_info): refactor autoscaling (#3102) 2023-12-05 10:02:54 +01:00
dependabot[bot]
39ba8cd230 build(deps-dev): bump freezegun from 1.2.2 to 1.3.1 (#3109)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 09:51:57 +01:00
Nacho Rivera
e67328945f test(audit_info): refactor waf (#3115) 2023-12-05 09:51:37 +01:00
Nacho Rivera
bcee2b0b6d test(audit_info): refactor wafv2 (#3114) 2023-12-05 09:51:20 +01:00
Nacho Rivera
be9a1b2f9a test(audit_info): refactor wellarchitected (#3113) 2023-12-05 09:40:31 +01:00
dependabot[bot]
4f9c2aadc2 build(deps-dev): bump moto from 4.2.10 to 4.2.11 (#3108)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 09:34:13 +01:00
Pepe Fagoaga
25d419ac7f test(audit_info): refactor appstream (#3100) 2023-12-05 09:33:53 +01:00
Pepe Fagoaga
57cfb508f1 test(audit_info): refactor apigateway (#3098) 2023-12-05 09:33:20 +01:00
Pepe Fagoaga
c88445f90d test(audit_info): refactor apigatewayv2 (#3099) 2023-12-05 09:32:31 +01:00
Nacho Rivera
9b6d6c3a42 test(audit_info): refactor workspaces (#3112) 2023-12-05 09:32:13 +01:00
Pepe Fagoaga
d26c1405ce test(audit_info): refactor awslambda (#3103) 2023-12-05 09:18:23 +01:00
dependabot[bot]
4bb35ab92d build(deps): bump slack-sdk from 3.26.0 to 3.26.1 (#3107)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 08:39:26 +01:00
dependabot[bot]
cdd983aa04 build(deps): bump google-api-python-client from 2.108.0 to 2.109.0 (#3106)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-05 08:12:57 +01:00
Nacho Rivera
e83ce86eb3 fix(docs): typo in reporting/csv (#3094) 2023-12-04 10:20:57 +01:00
Nacho Rivera
bcc590a3ee chore(actions): not launch linters for mkdocs.yml (#3093) 2023-12-04 09:57:18 +01:00
Fennerr
5fdffb93d1 docs(parallel-execution): How to execute it in parallel (#3091)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-04 09:48:46 +01:00
Nacho Rivera
db20b2c04f fix(docs): csv fields (#3092)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-04 09:46:20 +01:00
Nacho Rivera
4e037c0f43 fix(send_to_s3_bucket): don't kill exec when fail (#3088) 2023-12-01 13:25:59 +01:00
Nacho Rivera
fdcc2ac5cb revert(clean local dirs): delete clean local dirs output feature (#3087) 2023-12-01 12:26:59 +01:00
William
9099bd79f8 fix(vpc_different_regions): Handle if there are no VPC (#3081)
Co-authored-by: William Brady <will@crofton.cloud>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-12-01 11:44:23 +01:00
Pepe Fagoaga
a01683d8f6 refactor(severities): Define it in one place (#3086) 2023-12-01 11:39:35 +01:00
Pepe Fagoaga
6d2b2a9a93 refactor(load_checks_to_execute): Refactor function and add tests (#3066) 2023-11-30 17:41:14 +01:00
Sergio Garcia
de4166bf0d chore(regions_update): Changes in regions for AWS services. (#3079) 2023-11-29 11:21:06 +01:00
dependabot[bot]
1cbef30788 build(deps): bump cryptography from 41.0.4 to 41.0.6 (#3078)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-29 08:17:34 +01:00
Nacho Rivera
89c6e27489 fix(trustedadvisor): handle missing checks dict key (#3075) 2023-11-28 10:37:24 +01:00
Sergio Garcia
f74ffc530d chore(regions_update): Changes in regions for AWS services. (#3074)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-28 10:22:29 +01:00
dependabot[bot]
441d4d6a38 build(deps-dev): bump moto from 4.2.9 to 4.2.10 (#3073)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-28 09:57:56 +01:00
dependabot[bot]
3c6b9d63a6 build(deps): bump slack-sdk from 3.24.0 to 3.26.0 (#3072) 2023-11-28 09:21:46 +01:00
dependabot[bot]
254d8616b7 build(deps-dev): bump pytest-xdist from 3.4.0 to 3.5.0 (#3071) 2023-11-28 09:06:23 +01:00
dependabot[bot]
d3bc6fda74 build(deps): bump mkdocs-material from 9.4.10 to 9.4.14 (#3070)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-28 08:46:49 +01:00
Nacho Rivera
e4a5d9376f fix(clean local output dirs): change function description (#3068)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-11-27 14:55:34 +01:00
Nacho Rivera
523605e3e7 fix(set_azure_audit_info): assign correct logging when no auth (#3063) 2023-11-27 11:00:22 +01:00
Nacho Rivera
ed33fac337 fix(gcp provider): move generate_client for consistency (#3064) 2023-11-27 10:31:40 +01:00
Sergio Garcia
bf0e62aca5 chore(regions_update): Changes in regions for AWS services. (#3065)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-27 10:30:12 +01:00
Nacho Rivera
60c0b79b10 fix(outputs): initialize_file_descriptor is called dynamically (#3050) 2023-11-21 16:05:26 +01:00
Sergio Garcia
f9d2e7aa93 chore(regions_update): Changes in regions for AWS services. (#3059)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-21 11:07:08 +01:00
dependabot[bot]
0646748e24 build(deps): bump google-api-python-client from 2.107.0 to 2.108.0 (#3056)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 09:31:25 +01:00
dependabot[bot]
f6408e9df7 build(deps-dev): bump moto from 4.2.8 to 4.2.9 (#3055)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 08:14:00 +01:00
dependabot[bot]
5769bc815c build(deps): bump mkdocs-material from 9.4.8 to 9.4.10 (#3054)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 07:51:27 +01:00
dependabot[bot]
5a3e3e9b1f build(deps): bump slack-sdk from 3.23.0 to 3.24.0 (#3053) 2023-11-21 07:31:15 +01:00
Pepe Fagoaga
26cbafa204 fix(deps): Add missing jsonschema (#3052) 2023-11-20 18:41:39 +01:00
Sergio Garcia
d14541d1de fix(json-ocsf): add profile only for AWS provider (#3051) 2023-11-20 17:00:36 +01:00
Sergio Garcia
3955ebd56c chore(python): update python version constraint <3.12 (#3047) 2023-11-20 14:49:09 +01:00
Ignacio Dominguez
e212645cf0 fix(codeartifact): solve dependency confusion check (#2999)
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-11-20 14:48:46 +01:00
Sergio Garcia
db9c1c24d3 chore(moto): install all moto dependencies (#3048) 2023-11-20 13:44:53 +01:00
Vajrala Venkateswarlu
0a305c281f feat(custom_checks_metadata): Add checks metadata overide for severity (#3038)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-11-20 10:44:47 +01:00
Sergio Garcia
43c96a7875 chore(regions_update): Changes in regions for AWS services. (#3045)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-20 10:15:32 +01:00
Sergio Garcia
3a93aba7d7 chore(release): update Prowler Version to 3.11.3 (#3044)
Co-authored-by: github-actions <noreply@github.com>
2023-11-16 17:07:14 +01:00
Sergio Garcia
3d563356e5 fix(json): check if profile is None (#3043) 2023-11-16 13:52:07 +01:00
Johnny Lu
9205ef30f8 fix(securityhub): findings not being imported or archived in non-aws partitions (#3040)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-11-16 11:27:28 +01:00
Sergio Garcia
19c2dccc6d chore(regions_update): Changes in regions for AWS services. (#3042)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-16 11:09:41 +01:00
Sergio Garcia
8f819048ed chore(release): update Prowler Version to 3.11.2 (#3037)
Co-authored-by: github-actions <noreply@github.com>
2023-11-15 09:07:57 +01:00
Sergio Garcia
3a3bb44f11 fix(GuardDuty): only execute checks if GuardDuty enabled (#3028) 2023-11-14 14:14:05 +01:00
Nacho Rivera
f8e713a544 feat(azure regions): support non default azure region (#3013)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2023-11-14 13:17:48 +01:00
Pepe Fagoaga
573f1eba56 fix(securityhub): Use enabled_regions instead of audited_regions (#3029) 2023-11-14 12:57:54 +01:00
simone ragonesi
a36be258d8 chore: modify latest version msg (#3036)
Signed-off-by: r3drun3 <simone.ragonesi@sighup.io>
2023-11-14 12:11:55 +01:00
Sergio Garcia
690ec057c3 fix(ec2_securitygroup_not_used): check if security group is associated (#3026) 2023-11-14 12:03:01 +01:00
dependabot[bot]
2681feb1f6 build(deps): bump azure-storage-blob from 12.18.3 to 12.19.0 (#3034)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 11:47:42 +01:00
Sergio Garcia
e662adb8c5 chore(regions_update): Changes in regions for AWS services. (#3035)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-14 11:47:24 +01:00
Sergio Garcia
c94bd96c93 chore(args): make compatible severity and services arguments (#3024) 2023-11-14 11:26:53 +01:00
dependabot[bot]
6d85433194 build(deps): bump alive-progress from 3.1.4 to 3.1.5 (#3033)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 09:41:32 +01:00
dependabot[bot]
7a6092a779 build(deps): bump google-api-python-client from 2.106.0 to 2.107.0 (#3032)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 09:16:00 +01:00
dependabot[bot]
4c84529aed build(deps-dev): bump pytest-xdist from 3.3.1 to 3.4.0 (#3031)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 08:48:02 +01:00
Sergio Garcia
512d3e018f chore(accessanalyzer): include service in allowlist_non_default_regions (#3025) 2023-11-14 08:00:17 +01:00
dependabot[bot]
c6aff985c9 build(deps-dev): bump moto from 4.2.7 to 4.2.8 (#3030)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 07:54:34 +01:00
Sergio Garcia
7fadf31a2b chore(release): update Prowler Version to 3.11.1 (#3021)
Co-authored-by: github-actions <noreply@github.com>
2023-11-10 12:53:07 +01:00
Sergio Garcia
e7d098ed1e chore(regions_update): Changes in regions for AWS services. (#3020)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-10 11:34:44 +01:00
Sergio Garcia
21fba27355 fix(iam): do not list tags for inline policies (#3014) 2023-11-10 09:51:19 +01:00
John Mastron
74e37307f7 fix(SQS): fix invalid SQS ARNs (#3016)
Co-authored-by: John Mastron <jmastron@jpl.nasa.gov>
2023-11-10 09:33:18 +01:00
Sergio Garcia
d9d7c009a5 fix(rds): check if engines exist in region (#3012) 2023-11-10 09:20:36 +01:00
Pepe Fagoaga
2220cf9733 refactor(allowlist): Simplify and handle corner cases (#3019) 2023-11-10 09:11:52 +01:00
Pepe Fagoaga
3325b72b86 fix(iam-sqs): Handle exceptions for non-existent resources (#3010) 2023-11-08 14:06:45 +01:00
Sergio Garcia
9182d56246 chore(regions_update): Changes in regions for AWS services. (#3011)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-08 10:42:23 +01:00
Nacho Rivera
299ece19a8 fix(clean local output dirs): clean dirs when output to s3 (#2997) 2023-11-08 10:05:24 +01:00
Sergio Garcia
0a0732d7c0 docs(gcp): update GCP permissions (#3008) 2023-11-07 14:06:22 +01:00
Sergio Garcia
28011d97a9 chore(regions_update): Changes in regions for AWS services. (#3007)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-07 11:04:45 +01:00
Sergio Garcia
e71b0d1b6a chore(regions_update): Changes in regions for AWS services. (#3001)
Co-authored-by: sergargar <sergargar@users.noreply.github.com>
2023-11-07 11:04:36 +01:00
John Mastron
ec01b62a82 fix(aws): check all conditions in IAM policy parser (#3006)
Co-authored-by: John Mastron <jmastron@jpl.nasa.gov>
Co-authored-by: Sergio Garcia <sergargar1@gmail.com>
2023-11-07 10:40:34 +01:00
dependabot[bot]
12b45c6896 build(deps): bump google-api-python-client from 2.105.0 to 2.106.0 (#3005)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-07 09:45:51 +01:00
dependabot[bot]
51c60dd4ee build(deps): bump mkdocs-material from 9.4.7 to 9.4.8 (#3004)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-07 09:02:02 +01:00
713 changed files with 31152 additions and 20016 deletions


@@ -13,3 +13,8 @@ updates:
     labels:
       - "dependencies"
       - "pip"
+  - package-ecosystem: "github-actions"
+    directory: "/"
+    schedule:
+      interval: "weekly"
+    target-branch: master
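This hunk extends the existing Dependabot config so the repository's GitHub Actions are version-bumped weekly alongside its pip dependencies, which is what produces the stream of "build(deps): bump actions/..." commits in the log above (added in #3274). For context, the full file after the change would read roughly as follows; a sketch only, since the "version: 2" header and the fields of the pre-existing pip entry are assumed from the hunk context, and only the github-actions block comes from the diff:

version: 2
updates:
  - package-ecosystem: "pip"              # pre-existing entry (fields assumed)
    directory: "/"
    schedule:
      interval: "weekly"
    labels:
      - "dependencies"
      - "pip"
  - package-ecosystem: "github-actions"   # added in #3274
    directory: "/"
    schedule:
      interval: "weekly"
    target-branch: master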

.github/labeler.yml (new file, 27 lines)

@@ -0,0 +1,27 @@
+documentation:
+  - changed-files:
+      - any-glob-to-any-file: "docs/**"
+provider/aws:
+  - changed-files:
+      - any-glob-to-any-file: "prowler/providers/aws/**"
+      - any-glob-to-any-file: "tests/providers/aws/**"
+provider/azure:
+  - changed-files:
+      - any-glob-to-any-file: "prowler/providers/azure/**"
+      - any-glob-to-any-file: "tests/providers/azure/**"
+provider/gcp:
+  - changed-files:
+      - any-glob-to-any-file: "prowler/providers/gcp/**"
+      - any-glob-to-any-file: "tests/providers/gcp/**"
+provider/kubernetes:
+  - changed-files:
+      - any-glob-to-any-file: "prowler/providers/kubernetes/**"
+      - any-glob-to-any-file: "tests/providers/kubernetes/**"
+github_actions:
+  - changed-files:
+      - any-glob-to-any-file: ".github/workflows/*"
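Under the actions/labeler v5 schema used here, each top-level key is a label name, and any-glob-to-any-file applies that label when any file changed in the pull request matches any of the listed globs. Annotated, the first entry reads as follows (the comments are added for illustration; the config itself is unchanged from the diff above):

documentation:                           # label to apply to the PR
  - changed-files:                       # condition: inspect the PR's changed files
      - any-glob-to-any-file: "docs/**"  # true if any changed file lives under docs/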


@@ -32,11 +32,11 @@ jobs:
       POETRY_VIRTUALENVS_CREATE: "false"
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
       - name: Setup python (release)
         if: github.event_name == 'release'
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v5
         with:
           python-version: ${{ env.PYTHON_VERSION }}
@@ -52,13 +52,13 @@ jobs:
           poetry version ${{ github.event.release.tag_name }}
       - name: Login to DockerHub
-        uses: docker/login-action@v2
+        uses: docker/login-action@v3
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
       - name: Login to Public ECR
-        uses: docker/login-action@v2
+        uses: docker/login-action@v3
         with:
           registry: public.ecr.aws
           username: ${{ secrets.PUBLIC_ECR_AWS_ACCESS_KEY_ID }}
@@ -67,11 +67,11 @@
           AWS_REGION: ${{ env.AWS_REGION }}
       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v2
+        uses: docker/setup-buildx-action@v3
       - name: Build and push container image (latest)
         if: github.event_name == 'push'
-        uses: docker/build-push-action@v2
+        uses: docker/build-push-action@v5
         with:
           push: true
           tags: |
@@ -83,7 +83,7 @@
       - name: Build and push container image (release)
         if: github.event_name == 'release'
-        uses: docker/build-push-action@v2
+        uses: docker/build-push-action@v5
         with:
           # Use local context to get changes
           # https://github.com/docker/build-push-action#path-context


@@ -13,10 +13,10 @@ name: "CodeQL"
 on:
   push:
-    branches: [ "master", prowler-2, prowler-3.0-dev ]
+    branches: [ "master", "prowler-4.0-dev" ]
   pull_request:
     # The branches below must be a subset of the branches above
-    branches: [ "master" ]
+    branches: [ "master", "prowler-4.0-dev" ]
   schedule:
     - cron: '00 12 * * *'
@@ -37,11 +37,11 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-        uses: github/codeql-action/init@v2
+        uses: github/codeql-action/init@v3
         with:
           languages: ${{ matrix.language }}
           # If you wish to specify custom queries, you can do so here or in a config file.
@@ -52,6 +52,6 @@ jobs:
           # queries: security-extended,security-and-quality
       - name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v2
+        uses: github/codeql-action/analyze@v3
         with:
           category: "/language:${{matrix.language}}"

View File

@@ -7,11 +7,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: TruffleHog OSS
uses: trufflesecurity/trufflehog@v3.4.4
uses: trufflesecurity/trufflehog@v3.67.6
with:
path: ./
base: ${{ github.event.repository.default_branch }}

16
.github/workflows/labeler.yml vendored Normal file
View File

@@ -0,0 +1,16 @@
name: "Pull Request Labeler"
on:
pull_request_target:
branches:
- "master"
- "prowler-4.0-dev"
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v5

View File

@@ -4,21 +4,23 @@ on:
push:
branches:
- "master"
- "prowler-4.0-dev"
pull_request:
branches:
- "master"
- "prowler-4.0-dev"
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11"]
python-version: ["3.9", "3.10", "3.11", "3.12"]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Test if changes are in not ignored paths
id: are-non-ignored-files-changed
uses: tj-actions/changed-files@v39
uses: tj-actions/changed-files@v42
with:
files: ./**
files_ignore: |
@@ -26,6 +28,7 @@ jobs:
README.md
docs/**
permissions/**
mkdocs.yml
- name: Install poetry
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
run: |
@@ -33,7 +36,7 @@ jobs:
pipx install poetry
- name: Set up Python ${{ matrix.python-version }}
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: "poetry"
@@ -85,6 +88,6 @@ jobs:
poetry run pytest -n auto --cov=./prowler --cov-report=xml tests
- name: Upload coverage reports to Codecov
if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v4
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -16,7 +16,7 @@ jobs:
name: Release Prowler to PyPI
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
ref: ${{ env.GITHUB_BRANCH }}
- name: Install dependencies
@@ -24,7 +24,7 @@ jobs:
pipx install poetry
pipx inject poetry poetry-bumpversion
- name: setup python
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: 3.9
cache: 'poetry'
@@ -44,7 +44,7 @@ jobs:
poetry publish
# Create pull request with new version
- name: Create Pull Request
uses: peter-evans/create-pull-request@v4
uses: peter-evans/create-pull-request@v6
with:
token: ${{ secrets.PROWLER_ACCESS_TOKEN }}
commit-message: "chore(release): update Prowler Version to ${{ env.RELEASE_TAG }}."

View File

@@ -23,12 +23,12 @@ jobs:
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
ref: ${{ env.GITHUB_BRANCH }}
- name: setup python
uses: actions/setup-python@v2
uses: actions/setup-python@v5
with:
python-version: 3.9 #install the python needed
@@ -38,7 +38,7 @@ jobs:
pip install boto3
- name: Configure AWS Credentials -- DEV
uses: aws-actions/configure-aws-credentials@v1
uses: aws-actions/configure-aws-credentials@v4
with:
aws-region: ${{ env.AWS_REGION_DEV }}
role-to-assume: ${{ secrets.DEV_IAM_ROLE_ARN }}
@@ -50,12 +50,12 @@ jobs:
# Create pull request
- name: Create Pull Request
uses: peter-evans/create-pull-request@v4
uses: peter-evans/create-pull-request@v6
with:
token: ${{ secrets.PROWLER_ACCESS_TOKEN }}
commit-message: "feat(regions_update): Update regions for AWS services."
branch: "aws-services-regions-updated-${{ github.sha }}"
labels: "status/waiting-for-revision, severity/low"
labels: "status/waiting-for-revision, severity/low, provider/aws"
title: "chore(regions_update): Changes in regions for AWS services."
body: |
### Description

View File

@@ -1,7 +1,7 @@
repos:
## GENERAL
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
rev: v4.5.0
hooks:
- id: check-merge-conflict
- id: check-yaml
@@ -15,7 +15,7 @@ repos:
## TOML
- repo: https://github.com/macisamuele/language-formatters-pre-commit-hooks
rev: v2.10.0
rev: v2.12.0
hooks:
- id: pretty-format-toml
args: [--autofix]
@@ -28,7 +28,7 @@ repos:
- id: shellcheck
## PYTHON
- repo: https://github.com/myint/autoflake
rev: v2.2.0
rev: v2.2.1
hooks:
- id: autoflake
args:
@@ -39,25 +39,25 @@ repos:
]
- repo: https://github.com/timothycrosley/isort
rev: 5.12.0
rev: 5.13.2
hooks:
- id: isort
args: ["--profile", "black"]
- repo: https://github.com/psf/black
rev: 22.12.0
rev: 24.1.1
hooks:
- id: black
- repo: https://github.com/pycqa/flake8
rev: 6.1.0
rev: 7.0.0
hooks:
- id: flake8
exclude: contrib
args: ["--ignore=E266,W503,E203,E501,W605"]
- repo: https://github.com/python-poetry/poetry
rev: 1.6.0 # add version here
rev: 1.7.0
hooks:
- id: poetry-check
- id: poetry-lock
@@ -80,18 +80,12 @@ repos:
- id: trufflehog
name: TruffleHog
description: Detect secrets in your data.
# entry: bash -c 'trufflehog git file://. --only-verified --fail'
entry: bash -c 'trufflehog --no-update git file://. --only-verified --fail'
# For running trufflehog in docker, use the following entry instead:
entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
# entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
language: system
stages: ["commit", "push"]
- id: pytest-check
name: pytest-check
entry: bash -c 'pytest tests -n auto'
language: system
files: '.*\.py'
- id: bandit
name: bandit
description: "Bandit is a tool for finding common security issues in Python code"

View File

@@ -55,7 +55,7 @@ further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at community@prowler.cloud. All
reported by contacting the project team at [support.prowler.com](https://customer.support.prowler.com/servicedesk/customer/portals). All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.

View File

@@ -1,24 +1,25 @@
<p align="center">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/62c1ce73bbcdd6b9e5ba03dfcae26dfd165defd9/docs/img/prowler-pro-dark.png?raw=True#gh-dark-mode-only" width="150" height="36">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/62c1ce73bbcdd6b9e5ba03dfcae26dfd165defd9/docs/img/prowler-pro-light.png?raw=True#gh-light-mode-only" width="15%" height="15%">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-black.png?raw=True#gh-light-mode-only" width="350" height="115">
<img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-white.png?raw=True#gh-dark-mode-only" width="350" height="115">
</p>
<p align="center">
<b><i>See all the things you and your team can do with ProwlerPro at <a href="https://prowler.pro">prowler.pro</a></i></b>
<b><i>Prowler SaaS</i></b> and <b><i>Prowler Open Source</i></b> are as dynamic and adaptable as the environment they're meant to protect. Trusted by the leaders in security.
</p>
<p align="center">
<b>Learn more at <a href="https://prowler.com">prowler.com</i></b>
</p>
<hr>
<p align="center">
<img src="https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png" />
</p>
<p align="center">
<a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog"><img alt="Slack Shield" src="https://img.shields.io/badge/slack-prowler-brightgreen.svg?logo=slack"></a>
<a href="https://pypi.org/project/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/v/prowler.svg"></a>
<a href="https://pypi.python.org/pypi/prowler/"><img alt="Python Version" src="https://img.shields.io/pypi/pyversions/prowler.svg"></a>
<a href="https://pypistats.org/packages/prowler"><img alt="PyPI Prowler Downloads" src="https://img.shields.io/pypi/dw/prowler.svg?label=prowler%20downloads"></a>
<a href="https://pypistats.org/packages/prowler-cloud"><img alt="PyPI Prowler-Cloud Downloads" src="https://img.shields.io/pypi/dw/prowler-cloud.svg?label=prowler-cloud%20downloads"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/cloud/build/toniblyx/prowler"></a>
<a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/image-size/toniblyx/prowler"></a>
<a href="https://gallery.ecr.aws/prowler-cloud/prowler"><img width="120" height=19" alt="AWS ECR Gallery" src="https://user-images.githubusercontent.com/3985464/151531396-b6535a68-c907-44eb-95a1-a09508178616.png"></a>
<a href="https://codecov.io/gh/prowler-cloud/prowler"><img src="https://codecov.io/gh/prowler-cloud/prowler/graph/badge.svg?token=OflBGsdpDl"/></a>
</p>
<p align="center">
<a href="https://github.com/prowler-cloud/prowler"><img alt="Repo size" src="https://img.shields.io/github/repo-size/prowler-cloud/prowler"></a>
@@ -30,6 +31,7 @@
<a href="https://twitter.com/ToniBlyx"><img alt="Twitter" src="https://img.shields.io/twitter/follow/toniblyx?style=social"></a>
<a href="https://twitter.com/prowlercloud"><img alt="Twitter" src="https://img.shields.io/twitter/follow/prowlercloud?style=social"></a>
</p>
<hr>
# Description
@@ -39,10 +41,10 @@ It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, Fe
| Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.cloud/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.cloud/en/latest/tutorials/misc/#categories) |
|---|---|---|---|---|
| AWS | 301 | 61 -> `prowler aws --list-services` | 25 -> `prowler aws --list-compliance` | 5 -> `prowler aws --list-categories` |
| AWS | 302 | 61 -> `prowler aws --list-services` | 27 -> `prowler aws --list-compliance` | 6 -> `prowler aws --list-categories` |
| GCP | 73 | 11 -> `prowler gcp --list-services` | 1 -> `prowler gcp --list-compliance` | 2 -> `prowler gcp --list-categories`|
| Azure | 23 | 4 -> `prowler azure --list-services` | CIS soon | 1 -> `prowler azure --list-categories` |
| Kubernetes | Planned | - | - | - |
| Azure | 37 | 4 -> `prowler azure --list-services` | CIS soon | 1 -> `prowler azure --list-categories` |
| Kubernetes | Work In Progress | - | CIS soon | - |
# 📖 Documentation
@@ -54,7 +56,7 @@ For Prowler v2 Documentation, please go to https://github.com/prowler-cloud/prow
# ⚙️ Install
## Pip package
Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-cloud/), thus can be installed using pip with Python >= 3.9:
Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-cloud/), thus can be installed using pip with Python >= 3.9, < 3.13:
```console
pip install prowler
@@ -77,7 +79,7 @@ The container images are available here:
## From Github
Python >= 3.9 is required with pip and poetry:
Python >= 3.9, < 3.13 is required with pip and poetry:
```
git clone https://github.com/prowler-cloud/prowler
@@ -178,11 +180,7 @@ Prowler will follow the same credentials search as [Google authentication librar
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
Those credentials must be associated to a user or service account with proper permissions to do all checks. To make sure, add the following roles to the member associated with the credentials:
- Viewer
- Security Reviewer
- Stackdriver Account Viewer
Those credentials must be associated to a user or service account with proper permissions to do all checks. To make sure, add the `Viewer` role to the member associated with the credentials.
> By default, `prowler` will scan all accessible GCP Projects, use flag `--project-ids` to specify the projects to be scanned.

View File

@@ -14,7 +14,7 @@ As an **AWS Partner**, we have passed the [AWS Foundation Technical Review (F
If you would like to report a vulnerability or have a security concern regarding Prowler Open Source or ProwlerPro service, please submit the information by contacting help@prowler.pro.
The information you share with Verica as part of this process is kept confidential within Verica and the Prowler team. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.
The information you share with ProwlerPro as part of this process is kept confidential within ProwlerPro. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.
We will review the submitted report, and assign it a tracking number. We will then respond to you, acknowledging receipt of the report, and outline the next steps in the process.

View File

@@ -523,7 +523,7 @@ from unittest import mock
from uuid import uuid4
# Azure Constants
AZURE_SUSCRIPTION = str(uuid4())
AZURE_SUBSCRIPTION = str(uuid4())
@@ -542,7 +542,7 @@ class Test_defender_ensure_defender_for_arm_is_on:
# Create the custom Defender object to be tested
defender_client.pricings = {
AZURE_SUSCRIPTION: {
AZURE_SUBSCRIPTION: {
"Arm": Defender_Pricing(
resource_id=resource_id,
pricing_tier="Not Standard",
@@ -580,9 +580,9 @@ class Test_defender_ensure_defender_for_arm_is_on:
assert result[0].status == "FAIL"
assert (
result[0].status_extended
== f"Defender plan Defender for ARM from subscription {AZURE_SUSCRIPTION} is set to OFF (pricing tier not standard)"
== f"Defender plan Defender for ARM from subscription {AZURE_SUBSCRIPTION} is set to OFF (pricing tier not standard)"
)
assert result[0].subscription == AZURE_SUSCRIPTION
assert result[0].subscription == AZURE_SUBSCRIPTION
assert result[0].resource_name == "Defender plan ARM"
assert result[0].resource_id == resource_id
```

View File

@@ -97,10 +97,6 @@ Prowler will follow the same credentials search as [Google authentication librar
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
Those credentials must be associated to a user or service account with proper permissions to do all checks. To make sure, add the following roles to the member associated with the credentials:
- Viewer
- Security Reviewer
- Stackdriver Account Viewer
Those credentials must be associated to a user or service account with proper permissions to do all checks. To make sure, add the `Viewer` role to the member associated with the credentials.
> By default, `prowler` will scan all accessible GCP Projects, use flag `--project-ids` to specify the projects to be scanned.

Two binary image files added (9.2 KiB and 11 KiB); contents not shown.

View File

@@ -136,26 +136,16 @@ Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-clo
=== "AWS CloudShell"
Prowler can be easily executed in AWS CloudShell but it has some prerequisites to be able to do so. AWS CloudShell is a container running with `Amazon Linux release 2 (Karoo)` that comes with Python 3.7; since Prowler requires Python >= 3.9, we need to first install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9 as it's already included in AL2023. Prowler can thus be easily installed following the Generic method of installation via pip. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
_Requirements_:
* First install all dependencies and then Python; in this case we need to compile it because there is no package available at the time this document is written:
```
sudo yum -y install gcc openssl-devel bzip2-devel libffi-devel
wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
tar zxf Python-3.9.16.tgz
cd Python-3.9.16/
./configure --enable-optimizations
sudo make altinstall
python3.9 --version
cd
```
* Open AWS CloudShell `bash`.
_Commands_:
* Once Python 3.9 is available we can install Prowler from pip:
```
pip3.9 install prowler
pip install prowler
prowler -v
```

View File

@@ -15,7 +15,7 @@ As an **AWS Partner**, we have passed the [AWS Foundation Technical Review (F
If you would like to report a vulnerability or have a security concern regarding Prowler Open Source or ProwlerPro service, please submit the information by contacting help@prowler.pro.
The information you share with Verica as part of this process is kept confidential within Verica and the Prowler team. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.
The information you share with ProwlerPro as part of this process is kept confidential within ProwlerPro. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.
We will review the submitted report, and assign it a tracking number. We will then respond to you, acknowledging receipt of the report, and outline the next steps in the process.

View File

@@ -32,3 +32,14 @@ Prowler's AWS Provider uses the Boto3 [Standard](https://boto3.amazonaws.com/v1/
- Retry attempts on nondescriptive, transient error codes. Specifically, these HTTP status codes: 500, 502, 503, 504.
- Any retry attempt will include an exponential backoff by a base factor of 2 for a maximum backoff time of 20 seconds.
## Notes for validating retry attempts
If you are making changes to Prowler and want to validate whether requests are being retried or given up on, you can take the following approach:
* Run prowler with `--log-level DEBUG` and `--log-file debuglogs.txt`
* Search for retry attempts using `grep -i 'Retry needed' debuglogs.txt`
This is based on the [AWS documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html#checking-retry-attempts-in-your-client-logs), which states that if a retry is performed, you will see a message starting with "Retry needed".
You can determine the total number of calls made using `grep -i 'Sending http request' debuglogs.txt | wc -l`
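Putting those steps together, a minimal sketch of the whole validation workflow (the log file name is just an example):
```shell
# Run a scan with debug logging written to a file
prowler aws --log-level DEBUG --log-file debuglogs.txt
# Count the retry attempts performed by the retrier
grep -i 'Retry needed' debuglogs.txt | wc -l
# Count the total number of HTTP calls made
grep -i 'Sending http request' debuglogs.txt | wc -l
```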

View File

@@ -1,26 +1,26 @@
# AWS CloudShell
Prowler can be easily executed in AWS CloudShell but it has some prerequisites to be able to do so. AWS CloudShell is a container running with `Amazon Linux release 2 (Karoo)` that comes with Python 3.7; since Prowler requires Python >= 3.9, we need to first install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
- First install all dependencies and then Python; in this case we need to compile it because there is no package available at the time this document is written:
```
sudo yum -y install gcc openssl-devel bzip2-devel libffi-devel
wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
tar zxf Python-3.9.16.tgz
cd Python-3.9.16/
./configure --enable-optimizations
sudo make altinstall
python3.9 --version
cd
```
- Once Python 3.9 is available we can install Prowler from pip:
```
pip3.9 install prowler
```
- Now enjoy Prowler:
```
## Installation
After the migration of AWS CloudShell from Amazon Linux 2 to Amazon Linux 2023 [[1]](https://aws.amazon.com/about-aws/whats-new/2023/12/aws-cloudshell-migrated-al2023/) [[2]](https://docs.aws.amazon.com/cloudshell/latest/userguide/cloudshell-AL2023-migration.html), there is no longer a need to manually compile Python 3.9 as it's already included in AL2023. Prowler can thus be easily installed following the Generic method of installation via pip. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
```shell
pip install prowler
prowler -v
prowler
```
- To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`
## Download Files
To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`
## Clone Prowler from Github
The limited storage that AWS CloudShell provides for the user's home directory causes issues when installing the poetry dependencies to run Prowler from GitHub. Here is a workaround:
```shell
git clone https://github.com/prowler-cloud/prowler.git
cd prowler
pip install poetry
mkdir /tmp/pypoetry
poetry config cache-dir /tmp/pypoetry
poetry shell
poetry install
python prowler.py -v
```

Eight binary image files added (341 KiB, 291 KiB, 306 KiB, 346 KiB, 293 KiB, 252 KiB, 603 KiB, 273 KiB); contents not shown.

View File

@@ -23,6 +23,15 @@ prowler aws -R arn:aws:iam::<account_id>:role/<role_name>
prowler aws -T/--session-duration <seconds> -I/--external-id <external_id> -R arn:aws:iam::<account_id>:role/<role_name>
```
## Custom Role Session Name
Prowler can use your custom Role Session name with:
```console
prowler aws --role-session-name <role_session_name>
```
> It defaults to `ProwlerAssessmentSession`
## STS Endpoint Region
If you are using Prowler in AWS regions that are not enabled by default you need to use the argument `--sts-endpoint-region` to point the AWS STS API calls `assume-role` and `get-caller-identity` to the non-default region, e.g.: `prowler aws --sts-endpoint-region eu-south-2`.
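For instance, a minimal sketch combining the non-default STS endpoint with a role assumption (the placeholders follow the examples above):
```console
prowler aws --sts-endpoint-region eu-south-2 -R arn:aws:iam::<account_id>:role/<role_name>
```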

View File

@@ -1,48 +1,116 @@
# AWS Security Hub Integration
Prowler supports natively and as **official integration** sending findings to [AWS Security Hub](https://aws.amazon.com/security-hub). This integration allows Prowler to import its findings to AWS Security Hub.
Prowler natively supports, as an **official integration**, sending findings to [AWS Security Hub](https://aws.amazon.com/security-hub). This integration allows **Prowler** to import its findings to AWS Security Hub.
With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Firewall Manager, as well as from AWS Partner solutions and from Prowler for free.
Before sending findings to Prowler, you will need to perform next steps:
Before sending findings, you will need to enable AWS Security Hub and the **Prowler** integration.
1. Since Security Hub is a region-based service, enable it in the region or regions you require. Use the AWS Management Console or the AWS CLI with this command if you have enough permissions:
- `aws securityhub enable-security-hub --region <region>`.
2. Enable Prowler as a partner integration. Use the AWS Management Console or the AWS CLI with this command if you have enough permissions:
- `aws securityhub enable-import-findings-for-product --region <region> --product-arn arn:aws:securityhub:<region>::product/prowler/prowler` (change region also inside the ARN).
- Using the AWS Management Console:
![Screenshot 2020-10-29 at 10 26 02 PM](https://user-images.githubusercontent.com/3985464/97634660-5ade3400-1a36-11eb-9a92-4a45cc98c158.png)
3. Allow Prowler to import its findings to AWS Security Hub by adding the policy below to the role or user running Prowler:
- [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json)
## Enable AWS Security Hub
To enable the integration you have to perform the following steps in _at least_ one AWS region of a given AWS account, enabling **AWS Security Hub** and **Prowler** as a partner integration.
Since **AWS Security Hub** is a region-based service, you will need to enable it in the region or regions you require. You can configure it using the AWS Management Console or the AWS CLI.
> Take into account that enabling this integration will incur costs in AWS Security Hub; please refer to its pricing [here](https://aws.amazon.com/security-hub/pricing/) for more information.
### Using the AWS Management Console
#### Enable AWS Security Hub
If you currently have AWS Security Hub enabled, you can skip to the [next section](#enable-prowler-integration).
1. Open the **AWS Security Hub** console at https://console.aws.amazon.com/securityhub/.
2. When you open the Security Hub console for the first time make sure that you are in the region you want to enable, then choose **Go to Security Hub**.
![](./img/enable.png)
3. On the next page, the Security standards section lists the security standards that Security Hub supports. Select the check box for a standard to enable it, and clear the check box to disable it.
4. Choose **Enable Security Hub**.
![](./img/enable-2.png)
#### Enable Prowler Integration
If you currently have the Prowler integration enabled in AWS Security Hub, you can skip to the [next section](#send-findings) and start sending findings.
Once **AWS Security Hub** is enabled you will need to enable **Prowler** as a partner integration to allow **Prowler** to send findings to your **AWS Security Hub**.
1. Open the **AWS Security Hub** console at https://console.aws.amazon.com/securityhub/.
2. Select the **Integrations** tab in the right-side menu bar.
![](./img/enable-partner-integration.png)
3. Search for _Prowler_ in the text search box and the **Prowler** integration will appear.
4. Once there, click on **Accept Findings** to allow **AWS Security Hub** to receive findings from **Prowler**.
![](./img/enable-partner-integration-2.png)
5. A new modal will appear to confirm that you are enabling the **Prowler** integration.
![](./img/enable-partner-integration-3.png)
6. Right after clicking on **Accept Findings**, you will see that the integration is enabled in **AWS Security Hub**.
![](./img/enable-partner-integration-4.png)
### Using the AWS CLI
To enable **AWS Security Hub** and the **Prowler** integration you have to run the following commands using the AWS CLI:
```shell
aws securityhub enable-security-hub --region <region>
```
> For this command to work you will need the `securityhub:EnableSecurityHub` permission.
> You will need to set the AWS region where you want to enable AWS Security Hub.
Once **AWS Security Hub** is enabled you will need to enable **Prowler** as a partner integration to allow **Prowler** to send findings to your AWS Security Hub. You have to run the following command using the AWS CLI:
```shell
aws securityhub enable-import-findings-for-product --region <region> --product-arn arn:aws:securityhub:<region>::product/prowler/prowler
```
> You will need to set the AWS region where you want to enable the integration and also the AWS region within the ARN.
> For this command to work you will need the `securityhub:EnableImportFindingsForProduct` permission.
## Send Findings
Once it is enabled, it is as simple as running the command below (for all regions):
```sh
prowler aws -S
prowler aws --security-hub
```
or for only one filtered region like eu-west-1:
```sh
prowler -S -f eu-west-1
prowler --security-hub --region eu-west-1
```
> **Note 1**: It is recommended to send only fails to Security Hub and that is possible adding `-q` to the command.
> **Note 1**: It is recommended to send only fails to Security Hub, which is possible by adding `-q/--quiet` to the command. Instead of the `-q/--quiet` argument, you can use the `--send-sh-only-fails` argument to save all the findings in the Prowler outputs but send only FAIL findings to AWS Security Hub.
> **Note 2**: Since Prowler performs checks in all regions by default, you may need to filter by region when running the Security Hub integration, as shown in the example above. Remember to enable Security Hub in the region or regions you need by calling `aws securityhub enable-security-hub --region <region>` and run Prowler with the option `-f <region>` (if no region is given it will try to push findings to the hubs of all regions). Prowler will send findings to the Security Hub of the region where the scanned resource is located.
> **Note 2**: Since Prowler performs checks in all regions by default, you may need to filter by region when running the Security Hub integration, as shown in the example above. Remember to enable Security Hub in the region or regions you need by calling `aws securityhub enable-security-hub --region <region>` and run Prowler with the option `-f/--region <region>` (if no region is given it will try to push findings to the hubs of all regions). Prowler will send findings to the Security Hub of the region where the scanned resource is located.
> **Note 3**: To have updated findings in Security Hub you have to run Prowler periodically, e.g. once a day or every few hours.
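As an illustration only, a cron entry that runs the integration once a day (the schedule, region and flags are assumptions to adapt):
```sh
# Run Prowler daily at 06:00 and send only failed findings to Security Hub
0 6 * * * prowler aws --security-hub --quiet -f eu-west-1
```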
Once you run findings for first time you will be able to see Prowler findings in Findings section:
### See your Prowler findings in AWS Security Hub
![Screenshot 2020-10-29 at 10 29 05 PM](https://user-images.githubusercontent.com/3985464/97634676-66c9f600-1a36-11eb-9341-70feb06f6331.png)
Once **AWS Security Hub** is configured, your next scan will deliver the **Prowler** findings to the configured AWS regions. To review those findings in **AWS Security Hub**:
1. Open the **AWS Security Hub** console at https://console.aws.amazon.com/securityhub/.
2. Select the **Findings** tab in the right-side menu bar.
![](./img/findings.png)
3. Use the search box filters and use the **Product Name** filter with the value _Prowler_ to see the findings sent from **Prowler**.
4. Then, you can click on the check **Title** to see the details and the history of a finding.
![](./img/finding-details.png)
As you can see in the related requirements section of the detailed finding view, **Prowler** also sends compliance information related to every finding.
## Send findings to Security Hub assuming an IAM Role
When you are auditing a multi-account AWS environment, you can send findings to a Security Hub of another account by assuming an IAM role from that account using the `-R` flag in the Prowler command:
```sh
prowler -S -R arn:aws:iam::123456789012:role/ProwlerExecRole
prowler --security-hub --role arn:aws:iam::123456789012:role/ProwlerExecutionRole
```
> Remember that the assumed role needs permissions to send findings to Security Hub. For more information about the required permissions, please refer to the following IAM policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json)
@@ -50,12 +118,17 @@ prowler -S -R arn:aws:iam::123456789012:role/ProwlerExecRole
## Send only failed findings to Security Hub
When using Security Hub it is recommended to send only the failed findings generated. To follow that recommendation you could add the `-q` flag to the Prowler command:
When using the **AWS Security Hub** integration you can send only the `FAIL` findings generated by **Prowler**, so the **AWS Security Hub** usage costs will eventually be lower. To follow that recommendation you can add the `-q/--quiet` flag to the Prowler command:
```sh
prowler -S -q
prowler --security-hub --quiet
```
Instead of the `-q/--quiet` argument, you can use the `--send-sh-only-fails` argument to save all the findings in the Prowler outputs but send only FAIL findings to AWS Security Hub:
```sh
prowler --security-hub --send-sh-only-fails
```
## Skip sending updates of findings to Security Hub
@@ -63,5 +136,5 @@ By default, Prowler archives all its findings in Security Hub that have not appe
You can skip this logic by using the option `--skip-sh-update` so Prowler will not archive older findings:
```sh
prowler -S --skip-sh-update
prowler --security-hub --skip-sh-update
```

View File

@@ -0,0 +1,16 @@
# Use non default Azure regions
Microsoft provides clouds for compliance with regional laws, which are available for your use.
By default, Prowler uses the `AzureCloud` cloud, which is the commercial one (you can list all the available clouds with `az cloud list --output table`).
At the time of writing this documentation the available Azure Clouds from different regions are the following:
- AzureCloud
- AzureChinaCloud
- AzureUSGovernment
- AzureGermanCloud
If you want to change the default one you must include the flag `--azure-region`, e.g.:
```console
prowler azure --az-cli-auth --azure-region AzureChinaCloud
```

View File

@@ -7,30 +7,35 @@ In order to see which compliance frameworks are covered by Prowler, you can use op
prowler <provider> --list-compliance
```
Currently, the available frameworks are:
- `cis_1.4_aws`
- `cis_1.5_aws`
- `ens_rd2022_aws`
- `aws_account_security_onboarding_aws`
- `aws_audit_manager_control_tower_guardrails_aws`
- `aws_foundational_security_best_practices_aws`
- `aws_well_architected_framework_reliability_pillar_aws`
- `aws_well_architected_framework_security_pillar_aws`
- `cis_1.4_aws`
- `cis_1.5_aws`
- `cis_2.0_aws`
- `cis_2.0_gcp`
- `cis_3.0_aws`
- `cisa_aws`
- `ens_rd2022_aws`
- `fedramp_low_revision_4_aws`
- `fedramp_moderate_revision_4_aws`
- `ffiec_aws`
- `gdpr_aws`
- `gxp_eu_annex_11_aws`
- `gxp_21_cfr_part_11_aws`
- `gxp_eu_annex_11_aws`
- `hipaa_aws`
- `iso27001_2013_aws`
- `mitre_attack_aws`
- `nist_800_171_revision_2_aws`
- `nist_800_53_revision_4_aws`
- `nist_800_53_revision_5_aws`
- `nist_800_171_revision_2_aws`
- `nist_csf_1.1_aws`
- `pci_3.2.1_aws`
- `rbi_cyber_security_framework_aws`
- `soc2_aws`
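For example, to run Prowler against one of these frameworks you can pass its name to the `--compliance` option (a sketch using one framework from the list above):
```sh
prowler aws --compliance cis_2.0_aws
```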
## List Requirements of Compliance Frameworks
For each compliance framework, you can use option `--list-compliance-requirements` to list its requirements:
```sh

View File

@@ -0,0 +1,43 @@
# Custom Checks Metadata
In certain organizations, the severity of specific checks might differ from the default values defined in the check's metadata. For instance, while `s3_bucket_level_public_access_block` could be deemed `critical` for some organizations, others might assign a different severity level.
The custom metadata option offers a means to override the default metadata set by Prowler.
You can utilize `--custom-checks-metadata-file` followed by the path to your custom checks metadata YAML file.
## Available Fields
The supported check's metadata fields that can be overridden are the following:
- Severity
## File Syntax
This feature is available for all the providers supported in Prowler since the metadata format is common between all the providers. The following is the YAML format for the custom checks metadata file:
```yaml title="custom_checks_metadata.yaml"
CustomChecksMetadata:
aws:
Checks:
s3_bucket_level_public_access_block:
Severity: high
s3_bucket_no_mfa_delete:
Severity: high
azure:
Checks:
storage_infrastructure_encryption_is_enabled:
Severity: medium
gcp:
Checks:
compute_instance_public_ip:
Severity: critical
```
## Usage
Executing the following command will assess all checks and generate a report while overriding the metadata for those checks:
```sh
prowler <provider> --custom-checks-metadata-file <path/to/custom/metadata>
```
This customization feature enables organizations to tailor the severity of specific checks based on their unique requirements, providing greater flexibility in security assessment and reporting.
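For instance, a sketch using the YAML file above (the path is an assumption):
```sh
prowler aws --custom-checks-metadata-file ./custom_checks_metadata.yaml
```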

View File

@@ -22,8 +22,4 @@ Prowler will follow the same credentials search as [Google authentication librar
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
Those credentials must be associated to a user or service account with proper permissions to do all checks. To make sure, add the following roles to the member associated with the credentials:
- Viewer
- Security Reviewer
- Stackdriver Account Viewer
Those credentials must be associated to a user or service account with proper permissions to do all checks. To make sure, add the `Viewer` role to the member associated with the credentials.
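A minimal sketch of granting that role with the gcloud CLI, assuming a hypothetical project ID and service account email:
```console
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:prowler-sa@my-project-id.iam.gserviceaccount.com" \
  --role="roles/viewer"
```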

View File

@@ -47,7 +47,7 @@ It is a best practice to encrypt both metadata and connection passwords in AWS G
#### Inspector
Amazon Inspector is a vulnerability discovery service that automates continuous scanning for security vulnerabilities within your Amazon EC2, Amazon ECR, and AWS Lambda environments. Prowler recommends enabling it and resolving all of Inspector's findings. When ignoring unused services, Prowler will only notify you if there are any Lambda functions, EC2 instances or ECR repositories in the region where Amazon Inspector should be enabled.
- `inspector2_findings_exist`
- `inspector2_is_enabled`
#### Macie
Amazon Macie is a security service that uses machine learning to automatically discover, classify and protect sensitive data in S3 buckets. Prowler will only create a finding when Macie is not enabled if there are S3 buckets in your account.

View File

@@ -0,0 +1,187 @@
# Parallel Execution
The strategy used here will be to execute Prowler once per service. You can modify this approach as per your requirements.
This can help for really large accounts, but please be aware of AWS API rate limits:
1. **Service-Specific Limits**: Each AWS service has its own rate limits. For instance, Amazon EC2 might have different rate limits for launching instances versus making API calls to describe instances.
2. **API Rate Limits**: Most of the rate limits in AWS are applied at the API level. Each API call to an AWS service counts towards the rate limit for that service.
3. **Throttling Responses**: When you exceed the rate limit for a service, AWS responds with a throttling error. In AWS SDKs, these are typically represented as `ThrottlingException` or `RateLimitExceeded` errors.
For information on Prowler's retrier configuration please refer to this [page](https://docs.prowler.cloud/en/latest/tutorials/aws/boto3-configuration/).
> Note: You might need to increase the `--aws-retries-max-attempts` parameter from the default value of 3. The retrier follows an exponential backoff strategy.
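For example, a sketch raising the retry ceiling for a single-service run (the value `10` is an arbitrary assumption):
```bash
prowler aws -s s3 --aws-retries-max-attempts 10 --only-logs
```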
## Linux
Generate a list of services that Prowler supports, and populate this info into a file:
```bash
prowler aws --list-services | awk -F"- " '{print $2}' | sed '/^$/d' > services
```
If there are services you would like to skip scanning, remove them from this file.
Then create a new Bash script file `parallel-prowler.sh` and add the following contents. Update the `profile` variable to the AWS CLI profile you want to run Prowler with.
```bash
#!/bin/bash
# Change these variables as needed
profile="your_profile"
account_id=$(aws sts get-caller-identity --profile "${profile}" --query 'Account' --output text)
echo "Executing in account: ${account_id}"
# Maximum number of concurrent processes
MAX_PROCESSES=5
# Loop through the services
while read service; do
echo "$(date '+%Y-%m-%d %H:%M:%S'): Starting job for service: ${service}"
# Run the command in the background
(prowler -p "$profile" -s "$service" -F "${account_id}-${service}" --ignore-unused-services --only-logs; echo "$(date '+%Y-%m-%d %H:%M:%S') - ${service} has completed") &
# Check if we have reached the maximum number of processes
while [ $(jobs -r | wc -l) -ge ${MAX_PROCESSES} ]; do
# Wait for a second before checking again
sleep 1
done
done < ./services
# Wait for all background processes to finish
wait
echo "All jobs completed"
```
Output will be stored in the `output/` folder that is in the same directory from which you executed the script.
## Windows
Generate a list of services that Prowler supports, and populate this info into a file:
```powershell
prowler aws --list-services | ForEach-Object {
# Capture lines that are likely service names
if ($_ -match '^\- \w+$') {
$_.Trim().Substring(2)
}
} | Where-Object {
# Filter out empty or null lines
$_ -ne $null -and $_ -ne ''
} | Set-Content -Path "services"
```
If there are services you would like to skip scanning, remove them from this file.
Then create a new PowerShell script file `parallel-prowler.ps1` and add the following contents. Update the `$profile` variable to the AWS CLI profile you want to run Prowler with.
Change any parameters you would like when calling Prowler in the `Start-Job -ScriptBlock` section. Note that you need to keep the `--only-logs` parameter; otherwise an encoding issue occurs when trying to render the progress bar and Prowler won't execute successfully.
```powershell
$profile = "your_profile"
$account_id = Invoke-Expression -Command "aws sts get-caller-identity --profile $profile --query 'Account' --output text"
Write-Host "Executing Prowler in $account_id"
# Maximum number of concurrent jobs
$MAX_PROCESSES = 5
# Read services from a file
$services = Get-Content -Path "services"
# Array to keep track of started jobs
$jobs = @()
foreach ($service in $services) {
# Start the command as a job
$job = Start-Job -ScriptBlock {
prowler -p ${using:profile} -s ${using:service} -F "${using:account_id}-${using:service}" --ignore-unused-services --only-logs
$endTimestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
Write-Output "${endTimestamp} - $using:service has completed"
}
$jobs += $job
Write-Host "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') - Starting job for service: $service"
# Check if we have reached the maximum number of jobs
while (($jobs | Where-Object { $_.State -eq 'Running' }).Count -ge $MAX_PROCESSES) {
Start-Sleep -Seconds 1
# Check for any completed jobs and receive their output
$completedJobs = $jobs | Where-Object { $_.State -eq 'Completed' }
foreach ($completedJob in $completedJobs) {
Receive-Job -Job $completedJob -Keep | ForEach-Object { Write-Host $_ }
$jobs = $jobs | Where-Object { $_.Id -ne $completedJob.Id }
Remove-Job -Job $completedJob
}
}
}
# Check for any remaining completed jobs
$remainingCompletedJobs = $jobs | Where-Object { $_.State -eq 'Completed' }
foreach ($remainingJob in $remainingCompletedJobs) {
Receive-Job -Job $remainingJob -Keep | ForEach-Object { Write-Host $_ }
Remove-Job -Job $remainingJob
}
Write-Host "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') - All jobs completed"
```
Output will be stored in `C:\Users\YOUR-USER\Documents\output\`
## Combining the output files
Guidance is provided for the CSV file format. From the output directory, execute either the following Bash or PowerShell script. The script will collect the output from the CSV files, only include the header from the first file, and then output the result as CombinedCSV.csv in the current working directory.
There is no logic implemented in terms of which CSV files it will combine. If you have additional CSV files from other actions, such as running a quick inventory, you will need to move those out of the current (or any nested) directory, or move the output you want to combine into its own folder and run the script from there.
```bash
#!/bin/bash
# Initialize a variable to indicate the first file
firstFile=true
# Find all CSV files and loop through them
find . -name "*.csv" -print0 | while IFS= read -r -d '' file; do
if [ "$firstFile" = true ]; then
# For the first file, keep the header
cat "$file" > CombinedCSV.csv
firstFile=false
else
# For subsequent files, skip the header
tail -n +2 "$file" >> CombinedCSV.csv
fi
done
```
```powershell
# Get all CSV files from current directory and its subdirectories
$csvFiles = Get-ChildItem -Recurse -Filter "*.csv"
# Initialize a variable to track if it's the first file
$firstFile = $true
# Loop through each CSV file
foreach ($file in $csvFiles) {
if ($firstFile) {
# For the first file, keep the header and change the flag
$combinedCsv = Import-Csv -Path $file.FullName
$firstFile = $false
} else {
# For subsequent files, append the rows directly (Import-Csv already strips the header row)
$tempCsv = Import-Csv -Path $file.FullName
$combinedCsv += $tempCsv
}
}
# Export the combined data to a new CSV file
$combinedCsv | Export-Csv -Path "CombinedCSV.csv" -NoTypeInformation
```
## TODO: Additional Improvements
Some services need to instantiate another service to perform a check. For instance, `cloudwatch` will instantiate Prowler's `iam` service to perform the `cloudwatch_cross_account_sharing_disabled` check. When the `iam` service is instantiated, it will run its `__init__` function and pull all the information required for that service. This provides an opportunity to improve the above script by grouping dependent services together so that the `iam` service (or any other cross-service reference) isn't repeatedly instantiated; a sketch of this idea follows the list below. A complete mapping between these services still needs to be further investigated, but these are the cross-references that have been noted:
* inspector2 needs lambda and ec2
* cloudwatch needs iam
* dlm needs ec2
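A minimal sketch of that grouping idea, assuming the `-s/--service` argument accepts several services in one invocation:
```bash
# Hypothetical grouping: run cloudwatch together with iam so the iam service
# is only instantiated once for this group
prowler -p "$profile" -s cloudwatch iam -F "${account_id}-cloudwatch-iam" --ignore-unused-services --only-logs
```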

View File

@@ -43,46 +43,71 @@ Hereunder is the structure for each of the supported report formats by Prowler:
![HTML Output](../img/output-html.png)
### CSV
The following are the columns present in the CSV format:
The CSV format has a set of common columns for all the providers, followed by provider-specific columns.
The common columns are the following:
- ASSESSMENT_START_TIME
- FINDING_UNIQUE_ID
- PROVIDER
- CHECK_ID
- CHECK_TITLE
- CHECK_TYPE
- STATUS
- STATUS_EXTENDED
- SERVICE_NAME
- SUBSERVICE_NAME
- SEVERITY
- RESOURCE_TYPE
- RESOURCE_DETAILS
- RESOURCE_TAGS
- DESCRIPTION
- RISK
- RELATED_URL
- REMEDIATION_RECOMMENDATION_TEXT
- REMEDIATION_RECOMMENDATION_URL
- REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC
- REMEDIATION_RECOMMENDATION_CODE_TERRAFORM
- REMEDIATION_RECOMMENDATION_CODE_CLI
- REMEDIATION_RECOMMENDATION_CODE_OTHER
- COMPLIANCE
- CATEGORIES
- DEPENDS_ON
- RELATED_TO
- NOTES
The provider-specific columns are the following:
#### AWS
- PROFILE
- ACCOUNT_ID
- ACCOUNT_NAME
- ACCOUNT_EMAIL
- ACCOUNT_ARN
- ACCOUNT_ORG
- ACCOUNT_TAGS
- REGION
- CHECK_ID
- CHECK_TITLE
- CHECK_TYPE
- STATUS
- STATUS_EXTENDED
- SERVICE_NAME
- SUBSERVICE_NAME
- SEVERITY
- RESOURCE_ID
- RESOURCE_ARN
- RESOURCE_TYPE
- RESOURCE_DETAILS
- RESOURCE_TAGS
- DESCRIPTION
- COMPLIANCE
- RISK
- RELATED_URL
- REMEDIATION_RECOMMENDATION_TEXT
- REMEDIATION_RECOMMENDATION_URL
- REMEDIATION_RECOMMENDATION_CODE_NATIVEIAC
- REMEDIATION_RECOMMENDATION_CODE_TERRAFORM
- REMEDIATION_RECOMMENDATION_CODE_CLI
- REMEDIATION_RECOMMENDATION_CODE_OTHER
- CATEGORIES
- DEPENDS_ON
- RELATED_TO
- NOTES
- ACCOUNT_NAME
- ACCOUNT_EMAIL
- ACCOUNT_ARN
- ACCOUNT_ORG
- ACCOUNT_TAGS
- REGION
- RESOURCE_ID
- RESOURCE_ARN
#### AZURE
- TENANT_DOMAIN
- SUBSCRIPTION
- RESOURCE_ID
- RESOURCE_NAME
#### GCP
- PROJECT_ID
- LOCATION
- RESOURCE_ID
- RESOURCE_NAME
> Since Prowler v3 the CSV column delimiter is the semicolon (`;`)
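For example, a sketch extracting a single column with awk, assuming the semicolon delimiter (the column number and file name are placeholders that depend on the provider layout above):
```sh
awk -F';' 'NR > 1 { print $4 }' prowler-output-123456789012-20221220191331.csv
```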
### JSON

View File

@@ -38,8 +38,10 @@ nav:
- Logging: tutorials/logging.md
- Allowlist: tutorials/allowlist.md
- Check Aliases: tutorials/check-aliases.md
- Custom Metadata: tutorials/custom-checks-metadata.md
- Ignore Unused Services: tutorials/ignore-unused-services.md
- Pentesting: tutorials/pentesting.md
- Parallel Execution: tutorials/parallel-execution.md
- Developer Guide: developer-guide/introduction.md
- AWS:
- Authentication: tutorials/aws/authentication.md
@@ -56,6 +58,7 @@ nav:
- Boto3 Configuration: tutorials/aws/boto3-configuration.md
- Azure:
- Authentication: tutorials/azure/authentication.md
- Non default clouds: tutorials/azure/use-non-default-cloud.md
- Subscriptions: tutorials/azure/subscriptions.md
- Google Cloud:
- Authentication: tutorials/gcp/authentication.md
@@ -99,7 +102,7 @@ extra:
link: https://twitter.com/prowlercloud
# Copyright
copyright: Copyright &copy; 2022 Toni de la Fuente, Maintained by the Prowler Team at Verica, Inc.</a>
copyright: Copyright &copy; 2024 Toni de la Fuente, Maintained by the Prowler Team at ProwlerPro, Inc.</a>
markdown_extensions:
- abbr

3557
poetry.lock generated

File diff suppressed because it is too large

View File

@@ -26,6 +26,10 @@ from prowler.lib.check.check import (
)
from prowler.lib.check.checks_loader import load_checks_to_execute
from prowler.lib.check.compliance import update_checks_metadata_with_compliance
from prowler.lib.check.custom_checks_metadata import (
parse_custom_checks_metadata_file,
update_checks_metadata,
)
from prowler.lib.cli.parser import ProwlerArgumentParser
from prowler.lib.logger import logger, set_logging_config
from prowler.lib.outputs.compliance import display_compliance_table
@@ -67,6 +71,7 @@ def prowler():
checks_folder = args.checks_folder
severities = args.severity
compliance_framework = args.compliance
custom_checks_metadata_file = args.custom_checks_metadata_file
if not args.no_banner:
print_banner(args)
@@ -96,9 +101,19 @@ def prowler():
bulk_compliance_frameworks = bulk_load_compliance_frameworks(provider)
# Complete checks metadata with the compliance framework specification
update_checks_metadata_with_compliance(
bulk_checks_metadata = update_checks_metadata_with_compliance(
bulk_compliance_frameworks, bulk_checks_metadata
)
# Update checks metadata if the --custom-checks-metadata-file is present
custom_checks_metadata = None
if custom_checks_metadata_file:
custom_checks_metadata = parse_custom_checks_metadata_file(
provider, custom_checks_metadata_file
)
bulk_checks_metadata = update_checks_metadata(
bulk_checks_metadata, custom_checks_metadata
)
if args.list_compliance:
print_compliance_frameworks(bulk_compliance_frameworks)
sys.exit()
@@ -174,7 +189,11 @@ def prowler():
findings = []
if len(checks_to_execute):
findings = execute_checks(
checks_to_execute, provider, audit_info, audit_output_options
checks_to_execute,
provider,
audit_info,
audit_output_options,
custom_checks_metadata,
)
else:
logger.error(
@@ -246,7 +265,10 @@ def prowler():
for region in security_hub_regions:
# Save the regions where AWS Security Hub is enabled
if verify_security_hub_integration_enabled_per_region(
region, audit_info.audit_session
audit_info.audited_partition,
region,
audit_info.audit_session,
audit_info.audited_account,
):
aws_security_enabled_regions.append(region)

File diff suppressed because it is too large

View File

@@ -468,27 +468,6 @@
},
{
"Id": "2.1.1",
"Description": "Ensure all S3 buckets employ encryption-at-rest",
"Checks": [
"s3_bucket_default_encryption"
],
"Attributes": [
{
"Section": "2.1. Simple Storage Service (S3)",
"Profile": "Level 2",
"AssessmentStatus": "Automated",
"Description": "Amazon S3 provides a variety of no, or low, cost encryption options to protect data at rest.",
"RationaleStatement": "Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken.",
"ImpactStatement": "Amazon S3 buckets with default bucket encryption using SSE-KMS cannot be used as destination buckets for Amazon S3 server access logging. Only SSE-S3 default encryption is supported for server access log destination buckets.",
"RemediationProcedure": "**From Console:** 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select a Bucket. 3. Click on 'Properties'. 4. Click edit on `Default Encryption`. 5. Select either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`. 6. Click `Save` 7. Repeat for all the buckets in your AWS account lacking encryption. **From Command Line:** Run either ``` aws s3api put-bucket-encryption --bucket <bucket name> --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}' ``` or ``` aws s3api put-bucket-encryption --bucket <bucket name> --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"aws:kms\",\"KMSMasterKeyID\": \"aws/s3\"}}]}' ``` **Note:** the KMSMasterKeyID can be set to the master key of your choosing; aws/s3 is an AWS preconfigured default.",
"AuditProcedure": "**From Console:** 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select a Bucket. 3. Click on 'Properties'. 4. Verify that `Default Encryption` is enabled, and displays either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`. 5. Repeat for all the buckets in your AWS account. **From Command Line:** 1. Run command to list buckets ``` aws s3 ls ``` 2. For each bucket, run ``` aws s3api get-bucket-encryption --bucket <bucket name> ``` 3. Verify that either ``` \"SSEAlgorithm\": \"AES256\" ``` or ``` \"SSEAlgorithm\": \"aws:kms\"``` is displayed.",
"AdditionalInformation": "S3 bucket encryption only applies to objects as they are placed in the bucket. Enabling S3 bucket encryption does **not** encrypt objects previously stored within the bucket.",
"References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/default-bucket-encryption.html:https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html#bucket-encryption-related-resources"
}
]
},
{
"Id": "2.1.2",
"Description": "Ensure S3 Bucket Policy is set to deny HTTP requests",
"Checks": [
"s3_bucket_secure_transport_policy"
@@ -509,7 +488,7 @@
]
},
{
"Id": "2.1.3",
"Id": "2.1.2",
"Description": "Ensure MFA Delete is enabled on S3 buckets",
"Checks": [
"s3_bucket_no_mfa_delete"
@@ -530,7 +509,7 @@
]
},
{
"Id": "2.1.4",
"Id": "2.1.3",
"Description": "Ensure all data in Amazon S3 has been discovered, classified and secured when required.",
"Checks": [
"macie_is_enabled"
@@ -551,7 +530,7 @@
]
},
{
"Id": "2.1.5",
"Id": "2.1.4",
"Description": "Ensure that S3 Buckets are configured with 'Block public access (bucket settings)'",
"Checks": [
"s3_bucket_level_public_access_block",

File diff suppressed because it is too large

View File

@@ -211,6 +211,31 @@
"iam_avoid_root_usage"
]
},
{
"Id": "op.acc.4.aws.iam.8",
"Description": "Proceso de gestión de derechos de acceso",
"Attributes": [
{
"IdGrupoControl": "op.acc.4",
"Marco": "operacional",
"Categoria": "control de acceso",
"DescripcionControl": "Se restringirá todo acceso a las acciones especificadas para el usuario root de una cuenta.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"organizations_account_part_of_organizations",
"organizations_scp_check_deny_regions"
]
},
{
"Id": "op.acc.4.aws.iam.9",
"Description": "Proceso de gestión de derechos de acceso",
@@ -789,7 +814,8 @@
}
],
"Checks": [
"inspector2_findings_exist"
"inspector2_is_enabled",
"inspector2_active_findings_exist"
]
},
{
@@ -1121,6 +1147,30 @@
"cloudtrail_insights_exist"
]
},
{
"Id": "op.exp.8.r1.aws.ct.3",
"Description": "Revisión de los registros",
"Attributes": [
{
"IdGrupoControl": "op.exp.8.r1",
"Marco": "operacional",
"Categoria": "explotación",
"DescripcionControl": "Registrar los eventos de lectura y escritura de datos.",
"Nivel": "alto",
"Tipo": "refuerzo",
"Dimensiones": [
"trazabilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_changes_enabled",
"cloudtrail_s3_dataevents_write_enabled",
"cloudtrail_s3_dataevents_read_enabled",
"cloudtrail_insights_exist"
]
},
{
"Id": "op.exp.8.r1.aws.ct.4",
"Description": "Revisión de los registros",
@@ -1233,6 +1283,33 @@
"iam_role_cross_service_confused_deputy_prevention"
]
},
{
"Id": "op.exp.8.r4.aws.ct.1",
"Description": "Control de acceso",
"Attributes": [
{
"IdGrupoControl": "op.exp.8.r4",
"Marco": "operacional",
"Categoria": "explotación",
"DescripcionControl": "Asignar correctamente las políticas AWS IAM para el acceso y borrado de los registros y sus copias de seguridad haciendo uso del principio de mínimo privilegio.",
"Nivel": "alto",
"Tipo": "refuerzo",
"Dimensiones": [
"trazabilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"iam_policy_allows_privilege_escalation",
"iam_customer_attached_policy_no_administrative_privileges",
"iam_customer_unattached_policy_no_administrative_privilege",
"iam_no_custom_policy_permissive_role_assumption",
"iam_policy_attached_only_to_group_or_roles",
"iam_role_cross_service_confused_deputy_prevention",
"iam_policy_no_full_access_to_cloudtrail"
]
},
{
"Id": "op.exp.8.r4.aws.ct.2",
"Description": "Control de acceso",
@@ -1859,7 +1936,8 @@
}
],
"Checks": [
"inspector2_findings_exist"
"inspector2_is_enabled",
"inspector2_active_findings_exist"
]
},
{
@@ -1934,7 +2012,8 @@
}
],
"Checks": [
"inspector2_findings_exist"
"inspector2_is_enabled",
"inspector2_active_findings_exist"
]
},
{
@@ -2110,7 +2189,7 @@
}
],
"Checks": [
"networkfirewall_in_all_vpc"
"fms_policy_compliant"
]
},
{
@@ -2251,6 +2330,31 @@
"cloudfront_distributions_https_enabled"
]
},
{
"Id": "mp.com.4.aws.ws.1",
"Description": "Separación de flujos de información en la red",
"Attributes": [
{
"IdGrupoControl": "mp.com.4",
"Marco": "medidas de protección",
"Categoria": "segregación de redes",
"DescripcionControl": "Se deberán abrir solo los puertos necesarios para el uso del servicio AWS WorkSpaces.",
"Nivel": "alto",
"Tipo": "requisito",
"Dimensiones": [
"confidencialidad",
"integridad",
"trazabilidad",
"autenticidad",
"disponibilidad"
],
"ModoEjecucion": "automático"
}
],
"Checks": [
"workspaces_vpc_2private_1public_subnets_nat"
]
},
{
"Id": "mp.com.4.aws.vpc.1",
"Description": "Separación de flujos de información en la red",
@@ -2323,7 +2427,8 @@
}
],
"Checks": [
"vpc_subnet_separate_private_public"
"vpc_subnet_separate_private_public",
"vpc_different_regions"
]
},
{
@@ -2370,7 +2475,8 @@
}
],
"Checks": [
"vpc_subnet_different_az"
"vpc_subnet_different_az",
"vpc_different_regions"
]
},
{

View File

@@ -29,7 +29,8 @@
"securityhub_enabled",
"elbv2_waf_acl_attached",
"guardduty_is_enabled",
"inspector2_findings_exist",
"inspector2_is_enabled",
"inspector2_active_findings_exist",
"awslambda_function_not_publicly_accessible",
"ec2_instance_public_ip"
],
@@ -576,7 +577,8 @@
"config_recorder_all_regions_enabled",
"securityhub_enabled",
"guardduty_is_enabled",
"inspector2_findings_exist"
"inspector2_is_enabled",
"inspector2_active_findings_exist"
],
"Attributes": [
{
@@ -737,7 +739,8 @@
"iam_user_hardware_mfa_enabled",
"iam_user_mfa_enabled_console_access",
"securityhub_enabled",
"inspector2_findings_exist"
"inspector2_is_enabled",
"inspector2_active_findings_exist"
],
"Attributes": [
{
@@ -1892,7 +1895,8 @@
"networkfirewall_in_all_vpc",
"elbv2_waf_acl_attached",
"guardduty_is_enabled",
"inspector2_findings_exist",
"inspector2_is_enabled",
"inspector2_active_findings_exist",
"ec2_networkacl_allow_ingress_any_port",
"ec2_networkacl_allow_ingress_tcp_port_22",
"ec2_networkacl_allow_ingress_tcp_port_3389",

View File

@@ -13,7 +13,7 @@
"ItemId": "cc_1_1",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -27,7 +27,7 @@
"ItemId": "cc_1_2",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -41,7 +41,7 @@
"ItemId": "cc_1_3",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -62,7 +62,7 @@
"ItemId": "cc_1_4",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -76,7 +76,7 @@
"ItemId": "cc_1_5",
"Section": "CC1.0 - Common Criteria Related to Control Environment",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -90,7 +90,7 @@
"ItemId": "cc_2_1",
"Section": "CC2.0 - Common Criteria Related to Communication and Information",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -109,7 +109,7 @@
"ItemId": "cc_2_2",
"Section": "CC2.0 - Common Criteria Related to Communication and Information",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -123,7 +123,7 @@
"ItemId": "cc_2_3",
"Section": "CC2.0 - Common Criteria Related to Communication and Information",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -137,7 +137,7 @@
"ItemId": "cc_3_1",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -155,7 +155,7 @@
"ItemId": "cc_3_2",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -175,7 +175,7 @@
"ItemId": "cc_3_3",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -189,7 +189,7 @@
"ItemId": "cc_3_4",
"Section": "CC3.0 - Common Criteria Related to Risk Assessment",
"Service": "config",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -205,7 +205,7 @@
"ItemId": "cc_4_1",
"Section": "CC4.0 - Monitoring Activities",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -219,7 +219,7 @@
"ItemId": "cc_4_2",
"Section": "CC4.0 - Monitoring Activities",
"Service": "guardduty",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -236,7 +236,7 @@
"ItemId": "cc_5_1",
"Section": "CC5.0 - Control Activities",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -250,7 +250,7 @@
"ItemId": "cc_5_2",
"Section": "CC5.0 - Control Activities",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -264,7 +264,7 @@
"ItemId": "cc_5_3",
"Section": "CC5.0 - Control Activities",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -278,7 +278,7 @@
"ItemId": "cc_6_1",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "s3",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -294,7 +294,7 @@
"ItemId": "cc_6_2",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "rds",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -310,7 +310,7 @@
"ItemId": "cc_6_3",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "iam",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -328,7 +328,7 @@
"ItemId": "cc_6_4",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -342,7 +342,7 @@
"ItemId": "cc_6_5",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -356,7 +356,7 @@
"ItemId": "cc_6_6",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "ec2",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -372,7 +372,7 @@
"ItemId": "cc_6_7",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "acm",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -388,7 +388,7 @@
"ItemId": "cc_6_8",
"Section": "CC6.0 - Logical and Physical Access",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -405,7 +405,7 @@
"ItemId": "cc_7_1",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -424,7 +424,7 @@
"ItemId": "cc_7_2",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -460,7 +460,7 @@
"ItemId": "cc_7_3",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -492,7 +492,7 @@
"ItemId": "cc_7_4",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -523,7 +523,7 @@
"ItemId": "cc_7_5",
"Section": "CC7.0 - System Operations",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -537,7 +537,7 @@
"ItemId": "cc_8_1",
"Section": "CC8.0 - Change Management",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -553,7 +553,7 @@
"ItemId": "cc_9_1",
"Section": "CC9.0 - Risk Mitigation",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -567,7 +567,7 @@
"ItemId": "cc_9_2",
"Section": "CC9.0 - Risk Mitigation",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -581,7 +581,7 @@
"ItemId": "cc_a_1_1",
"Section": "CCA1.0 - Additional Criterial for Availability",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -595,7 +595,7 @@
"ItemId": "cc_a_1_2",
"Section": "CCA1.0 - Additional Criterial for Availability",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -626,7 +626,7 @@
"ItemId": "cc_a_1_3",
"Section": "CCA1.0 - Additional Criterial for Availability",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -640,7 +640,7 @@
"ItemId": "cc_c_1_1",
"Section": "CCC1.0 - Additional Criterial for Confidentiality",
"Service": "aws",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -656,7 +656,7 @@
"ItemId": "cc_c_1_2",
"Section": "CCC1.0 - Additional Criterial for Confidentiality",
"Service": "s3",
"Soc_Type": "automated"
"Type": "automated"
}
],
"Checks": [
@@ -672,7 +672,7 @@
"ItemId": "p_1_1",
"Section": "P1.0 - Privacy Criteria Related to Notice and Communication of Objectives Related to Privacy",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -686,7 +686,7 @@
"ItemId": "p_2_1",
"Section": "P2.0 - Privacy Criteria Related to Choice and Consent",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -700,7 +700,7 @@
"ItemId": "p_3_1",
"Section": "P3.0 - Privacy Criteria Related to Collection",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -714,7 +714,7 @@
"ItemId": "p_3_2",
"Section": "P3.0 - Privacy Criteria Related to Collection",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -728,7 +728,7 @@
"ItemId": "p_4_1",
"Section": "P4.0 - Privacy Criteria Related to Use, Retention, and Disposal",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -742,7 +742,7 @@
"ItemId": "p_4_2",
"Section": "P4.0 - Privacy Criteria Related to Use, Retention, and Disposal",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -756,7 +756,7 @@
"ItemId": "p_4_3",
"Section": "P4.0 - Privacy Criteria Related to Use, Retention, and Disposal",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -770,7 +770,7 @@
"ItemId": "p_5_1",
"Section": "P5.0 - Privacy Criteria Related to Access",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -784,7 +784,7 @@
"ItemId": "p_5_2",
"Section": "P5.0 - Privacy Criteria Related to Access",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -798,7 +798,7 @@
"ItemId": "p_6_1",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -812,7 +812,7 @@
"ItemId": "p_6_2",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -826,7 +826,7 @@
"ItemId": "p_6_3",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -840,7 +840,7 @@
"ItemId": "p_6_4",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -854,7 +854,7 @@
"ItemId": "p_6_5",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -868,7 +868,7 @@
"ItemId": "p_6_6",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -882,7 +882,7 @@
"ItemId": "p_6_7",
"Section": "P6.0 - Privacy Criteria Related to Disclosure and Notification",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -896,7 +896,7 @@
"ItemId": "p_7_1",
"Section": "P7.0 - Privacy Criteria Related to Quality",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []
@@ -910,7 +910,7 @@
"ItemId": "p_8_1",
"Section": "P8.0 - Privacy Criteria Related to Monitoring and Enforcement",
"Service": "aws",
"Soc_Type": "manual"
"Type": "manual"
}
],
"Checks": []

View File

@@ -11,7 +11,7 @@ from prowler.lib.logger import logger
timestamp = datetime.today()
timestamp_utc = datetime.now(timezone.utc).replace(tzinfo=timezone.utc)
prowler_version = "3.11.0"
prowler_version = "3.13.1"
html_logo_url = "https://github.com/prowler-cloud/prowler/"
html_logo_img = "https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png"
square_logo_img = "https://user-images.githubusercontent.com/38561120/235905862-9ece5bd7-9aa3-4e48-807a-3a9035eb8bfb.png"
@@ -22,6 +22,9 @@ gcp_logo = "https://user-images.githubusercontent.com/38561120/235928332-eb4accd
orange_color = "\033[38;5;208m"
banner_color = "\033[1;92m"
# Severities
valid_severities = ["critical", "high", "medium", "low", "informational"]
# Compliance
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
@@ -70,7 +73,9 @@ def check_current_version():
if latest_version != prowler_version:
return f"{prowler_version_string} (latest is {latest_version}, upgrade for the latest features)"
else:
return f"{prowler_version_string} (it is the latest version, yay!)"
return (
f"{prowler_version_string} (You are running the latest version, yay!)"
)
except requests.RequestException:
return f"{prowler_version_string}"
except Exception:

View File

@@ -2,7 +2,7 @@
aws:
# AWS Global Configuration
# aws.allowlist_non_default_regions --> Set to True to allowlist failed findings in non-default regions for GuardDuty, SecurityHub, DRS and Config
# aws.allowlist_non_default_regions --> Set to True to allowlist failed findings in non-default regions for AccessAnalyzer, GuardDuty, SecurityHub, DRS and Config
allowlist_non_default_regions: False
# If you want to allowlist/mute failed findings only in specific regions, create a file with the following syntax and run it with `prowler aws -w allowlist.yaml`:
# Allowlist:
@@ -69,8 +69,8 @@ aws:
# AWS Organizations
# organizations_scp_check_deny_regions
# organizations_enabled_regions: [
# 'eu-central-1',
# 'eu-west-1',
# "eu-central-1",
# "eu-west-1",
# "us-east-1"
# ]
organizations_enabled_regions: []

View File

@@ -0,0 +1,15 @@
CustomChecksMetadata:
aws:
Checks:
s3_bucket_level_public_access_block:
Severity: high
s3_bucket_no_mfa_delete:
Severity: high
azure:
Checks:
storage_infrastructure_encryption_is_enabled:
Severity: medium
gcp:
Checks:
compute_instance_public_ip:
Severity: critical
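Such a file can be validated before use. A minimal sketch, assuming PyYAML and jsonschema are installed; the schema here is a simplified version of the one Prowler defines in custom_checks_metadata.py (shown further below):

```
import yaml
from jsonschema import validate

schema = {
    "type": "object",
    "properties": {
        "Checks": {
            "type": "object",
            "patternProperties": {
                ".*": {
                    "type": "object",
                    "properties": {"Severity": {"type": "string"}},
                    "required": ["Severity"],
                }
            },
        }
    },
    "required": ["Checks"],
}

with open("custom_checks_metadata_example.yaml") as f:
    metadata = yaml.safe_load(f)["CustomChecksMetadata"]["aws"]
validate(metadata, schema=schema)  # raises ValidationError on bad input
```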

View File

@@ -4,7 +4,7 @@ from prowler.config.config import banner_color, orange_color, prowler_version, t
def print_banner(args):
banner = f"""{banner_color} _
banner = rf"""{banner_color} _
_ __ _ __ _____ _| | ___ _ __
| '_ \| '__/ _ \ \ /\ / / |/ _ \ '__|
| |_) | | | (_) \ V V /| | __/ |

View File

@@ -16,6 +16,7 @@ from colorama import Fore, Style
import prowler
from prowler.config.config import orange_color
from prowler.lib.check.compliance_models import load_compliance_framework
from prowler.lib.check.custom_checks_metadata import update_check_metadata
from prowler.lib.check.models import Check, load_check_metadata
from prowler.lib.logger import logger
from prowler.lib.outputs.outputs import report
@@ -66,9 +67,9 @@ def bulk_load_compliance_frameworks(provider: str) -> dict:
# cis_v1.4_aws.json --> cis_v1.4_aws
compliance_framework_name = filename.split(".json")[0]
# Store the compliance info
bulk_compliance_frameworks[
compliance_framework_name
] = load_compliance_framework(file_path)
bulk_compliance_frameworks[compliance_framework_name] = (
load_compliance_framework(file_path)
)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
@@ -106,14 +107,20 @@ def exclude_services_to_run(
# Load checks from checklist.json
def parse_checks_from_file(input_file: str, provider: str) -> set:
checks_to_execute = set()
with open_file(input_file) as f:
json_file = parse_json_file(f)
"""parse_checks_from_file returns a set of checks read from the given file"""
try:
checks_to_execute = set()
with open_file(input_file) as f:
json_file = parse_json_file(f)
for check_name in json_file[provider]:
checks_to_execute.add(check_name)
for check_name in json_file[provider]:
checks_to_execute.add(check_name)
return checks_to_execute
return checks_to_execute
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
# Load checks from custom folder
@@ -210,7 +217,7 @@ def print_categories(categories: set):
singular_string = f"\nThere is {Fore.YELLOW}{categories_num}{Style.RESET_ALL} available category.\n"
message = plural_string if categories_num > 1 else singular_string
for category in categories:
for category in sorted(categories):
print(f"- {category}")
print(message)
@@ -239,7 +246,7 @@ def print_compliance_frameworks(
singular_string = f"\nThere is {Fore.YELLOW}{frameworks_num}{Style.RESET_ALL} available Compliance Framework.\n"
message = plural_string if frameworks_num > 1 else singular_string
for framework in bulk_compliance_frameworks.keys():
for framework in sorted(bulk_compliance_frameworks.keys()):
print(f"- {framework}")
print(message)
@@ -309,7 +316,7 @@ def print_checks(
def parse_checks_from_compliance_framework(
compliance_frameworks: list, bulk_compliance_frameworks: dict
) -> list:
"""Parse checks from compliance frameworks specification"""
"""parse_checks_from_compliance_framework returns a set of checks from the given compliance_frameworks"""
checks_to_execute = set()
try:
for framework in compliance_frameworks:
@@ -416,6 +423,7 @@ def execute_checks(
provider: str,
audit_info: Any,
audit_output_options: Provider_Output_Options,
custom_checks_metadata: Any,
) -> list:
# List to store all the check's findings
all_findings = []
@@ -461,6 +469,7 @@ def execute_checks(
audit_info,
services_executed,
checks_executed,
custom_checks_metadata,
)
all_findings.extend(check_findings)
@@ -506,6 +515,7 @@ def execute_checks(
audit_info,
services_executed,
checks_executed,
custom_checks_metadata,
)
all_findings.extend(check_findings)
@@ -531,6 +541,7 @@ def execute(
audit_info: Any,
services_executed: set,
checks_executed: set,
custom_checks_metadata: Any,
):
# Import check module
check_module_path = (
@@ -541,6 +552,10 @@ def execute(
check_to_execute = getattr(lib, check_name)
c = check_to_execute()
# Update check metadata to reflect that in the outputs
if custom_checks_metadata and custom_checks_metadata["Checks"].get(c.CheckID):
c = update_check_metadata(c, custom_checks_metadata["Checks"][c.CheckID])
# Run check
check_findings = run_check(c, audit_output_options)
@@ -598,22 +613,32 @@ def update_audit_metadata(
)
def recover_checks_from_service(service_list: list, provider: str) -> list:
checks = set()
service_list = [
"awslambda" if service == "lambda" else service for service in service_list
]
for service in service_list:
modules = recover_checks_from_provider(provider, service)
if not modules:
logger.error(f"Service '{service}' does not have checks.")
def recover_checks_from_service(service_list: list, provider: str) -> set:
"""
Recover all checks from the selected provider and service
else:
for check_module in modules:
# Recover check name and module name from import path
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check_module[0].split(".")[-1]
# If the service is present in the group list passed as parameters
# if service_name in group_list: checks_from_arn.add(check_name)
checks.add(check_name)
return checks
Returns a set of checks from the given services
"""
try:
checks = set()
service_list = [
"awslambda" if service == "lambda" else service for service in service_list
]
for service in service_list:
service_checks = recover_checks_from_provider(provider, service)
if not service_checks:
logger.error(f"Service '{service}' does not have checks.")
else:
for check in service_checks:
# Recover check name and module name from import path
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check[0].split(".")[-1]
# If the service is present in the group list passed as parameters
# if service_name in group_list: checks_from_arn.add(check_name)
checks.add(check_name)
return checks
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)

View File

@@ -1,5 +1,6 @@
from colorama import Fore, Style
from prowler.config.config import valid_severities
from prowler.lib.check.check import (
parse_checks_from_compliance_framework,
parse_checks_from_file,
@@ -10,7 +11,6 @@ from prowler.lib.logger import logger
# Generate the list of checks to execute
# PENDING Test for this function
def load_checks_to_execute(
bulk_checks_metadata: dict,
bulk_compliance_frameworks: dict,
@@ -22,80 +22,110 @@ def load_checks_to_execute(
categories: set,
provider: str,
) -> set:
"""Generate the list of checks to execute based on the cloud provider and input arguments specified"""
checks_to_execute = set()
"""Generate the list of checks to execute based on the cloud provider and the input arguments given"""
try:
# Local subsets
checks_to_execute = set()
check_aliases = {}
check_severities = {key: [] for key in valid_severities}
check_categories = {}
# Handle if there are checks passed using -c/--checks
if check_list:
for check_name in check_list:
checks_to_execute.add(check_name)
# First, loop over the bulk_checks_metadata to extract the needed subsets
for check, metadata in bulk_checks_metadata.items():
# Aliases
for alias in metadata.CheckAliases:
if alias not in check_aliases:
check_aliases[alias] = []
check_aliases[alias].append(check)
# Handle if there are some severities passed using --severity
elif severities:
for check in bulk_checks_metadata:
# Check check's severity
if bulk_checks_metadata[check].Severity in severities:
checks_to_execute.add(check)
# Severities
if metadata.Severity:
check_severities[metadata.Severity].append(check)
# Handle if there are checks passed using -C/--checks-file
elif checks_file:
try:
# Categories
for category in metadata.Categories:
if category not in check_categories:
check_categories[category] = []
check_categories[category].append(check)
# Handle if there are checks passed using -c/--checks
if check_list:
for check_name in check_list:
checks_to_execute.add(check_name)
# Handle if there are some severities passed using --severity
elif severities:
for severity in severities:
checks_to_execute.update(check_severities[severity])
if service_list:
checks_to_execute = (
recover_checks_from_service(service_list, provider)
& checks_to_execute
)
# Handle if there are checks passed using -C/--checks-file
elif checks_file:
checks_to_execute = parse_checks_from_file(checks_file, provider)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
# Handle if there are services passed using -s/--services
elif service_list:
checks_to_execute = recover_checks_from_service(service_list, provider)
# Handle if there are services passed using -s/--services
elif service_list:
checks_to_execute = recover_checks_from_service(service_list, provider)
# Handle if there are compliance frameworks passed using --compliance
elif compliance_frameworks:
try:
# Handle if there are compliance frameworks passed using --compliance
elif compliance_frameworks:
checks_to_execute = parse_checks_from_compliance_framework(
compliance_frameworks, bulk_compliance_frameworks
)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
# Handle if there are categories passed using --categories
elif categories:
for cat in categories:
for check in bulk_checks_metadata:
# Check check's categories
if cat in bulk_checks_metadata[check].Categories:
checks_to_execute.add(check)
# Handle if there are categories passed using --categories
elif categories:
for category in categories:
checks_to_execute.update(check_categories[category])
# If there are no checks passed as argument
else:
try:
# If there are no checks passed as argument
else:
# Get all check modules to run with the specific provider
checks = recover_checks_from_provider(provider)
except Exception as e:
logger.error(f"{e.__class__.__name__}[{e.__traceback__.tb_lineno}] -- {e}")
else:
for check_info in checks:
# Recover check name from import path (last part)
# Format: "providers.{provider}.services.{service}.{check_name}.{check_name}"
check_name = check_info[0]
checks_to_execute.add(check_name)
# Get Check Aliases mapping
check_aliases = {}
for check, metadata in bulk_checks_metadata.items():
for alias in metadata.CheckAliases:
check_aliases[alias] = check
# Check Aliases
checks_to_execute = update_checks_to_execute_with_aliases(
checks_to_execute, check_aliases
)
return checks_to_execute
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
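The refactor above trades repeated scans of `bulk_checks_metadata` for precomputed indexes. A standalone sketch of the severity index, with made-up check names standing in for real metadata:

```
valid_severities = ["critical", "high", "medium", "low", "informational"]
metadata = {
    "s3_bucket_no_mfa_delete": "high",
    "compute_instance_public_ip": "critical",
    "iam_avoid_root_usage": "medium",
}

# Build the index once...
check_severities = {severity: [] for severity in valid_severities}
for check, severity in metadata.items():
    check_severities[severity].append(check)

# ...then answer --severity queries with simple lookups.
selected = set()
for severity in ("critical", "high"):
    selected.update(check_severities[severity])
print(sorted(selected))
# ['compute_instance_public_ip', 's3_bucket_no_mfa_delete']
```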
def update_checks_to_execute_with_aliases(
checks_to_execute: set, check_aliases: dict
) -> set:
"""update_checks_to_execute_with_aliases returns the checks_to_execute updated using the check aliases."""
# Verify if any input check is an alias of another check
for input_check in checks_to_execute:
if (
input_check in check_aliases
and check_aliases[input_check] not in checks_to_execute
):
# Remove input check name and add the real one
checks_to_execute.remove(input_check)
checks_to_execute.add(check_aliases[input_check])
print(
f"\nUsing alias {Fore.YELLOW}{input_check}{Style.RESET_ALL} for check {Fore.YELLOW}{check_aliases[input_check]}{Style.RESET_ALL}...\n"
)
return checks_to_execute
try:
new_checks_to_execute = checks_to_execute.copy()
for input_check in checks_to_execute:
if input_check in check_aliases:
# Remove input check name and add the real one
new_checks_to_execute.remove(input_check)
for alias in check_aliases[input_check]:
if alias not in new_checks_to_execute:
new_checks_to_execute.add(alias)
print(
f"\nUsing alias {Fore.YELLOW}{input_check}{Style.RESET_ALL} for check {Fore.YELLOW}{alias}{Style.RESET_ALL}..."
)
return new_checks_to_execute
except Exception as error:
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
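The new version also changes the alias mapping from alias -> check to alias -> list of checks. A small sketch of the resolution loop, using hypothetical names:

```
checks_to_execute = {"lambda_public", "s3_bucket_no_mfa_delete"}
check_aliases = {"lambda_public": ["awslambda_function_not_publicly_accessible"]}

new_checks_to_execute = checks_to_execute.copy()
for input_check in checks_to_execute:
    if input_check in check_aliases:
        # Swap the alias for every real check name it maps to.
        new_checks_to_execute.remove(input_check)
        new_checks_to_execute.update(check_aliases[input_check])
print(sorted(new_checks_to_execute))
# ['awslambda_function_not_publicly_accessible', 's3_bucket_no_mfa_delete']
```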

View File

@@ -52,12 +52,12 @@ class ENS_Requirement_Attribute(BaseModel):
class Generic_Compliance_Requirement_Attribute(BaseModel):
"""Generic Compliance Requirement Attribute"""
ItemId: str
ItemId: Optional[str]
Section: Optional[str]
SubSection: Optional[str]
SubGroup: Optional[str]
Service: str
Soc_Type: Optional[str]
Service: Optional[str]
Type: Optional[str]
class CIS_Requirement_Attribute_Profile(str):

View File

@@ -0,0 +1,77 @@
import sys
import yaml
from jsonschema import validate
from prowler.config.config import valid_severities
from prowler.lib.logger import logger
custom_checks_metadata_schema = {
"type": "object",
"properties": {
"Checks": {
"type": "object",
"patternProperties": {
".*": {
"type": "object",
"properties": {
"Severity": {
"type": "string",
"enum": valid_severities,
}
},
"required": ["Severity"],
"additionalProperties": False,
}
},
"additionalProperties": False,
}
},
"required": ["Checks"],
"additionalProperties": False,
}
def parse_custom_checks_metadata_file(provider: str, custom_checks_metadata_file):
"""parse_custom_checks_metadata_file returns the custom_checks_metadata object if it is valid, otherwise aborts the execution returning the ValidationError."""
try:
with open(custom_checks_metadata_file) as f:
custom_checks_metadata = yaml.safe_load(f)["CustomChecksMetadata"][provider]
validate(custom_checks_metadata, schema=custom_checks_metadata_schema)
return custom_checks_metadata
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
)
sys.exit(1)
def update_checks_metadata(bulk_checks_metadata, custom_checks_metadata):
"""update_checks_metadata returns the bulk_checks_metadata with the check's metadata updated based on the custom_checks_metadata provided."""
try:
# Update checks metadata from CustomChecksMetadata file
for check, custom_metadata in custom_checks_metadata["Checks"].items():
check_metadata = bulk_checks_metadata.get(check)
if check_metadata:
bulk_checks_metadata[check] = update_check_metadata(
check_metadata, custom_metadata
)
return bulk_checks_metadata
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
)
sys.exit(1)
def update_check_metadata(check_metadata, custom_metadata):
"""update_check_metadata updates the check_metadata fields present in the custom_metadata and returns the updated version of the check_metadata. If some field is not present or valid the check_metadata is returned with the original fields."""
try:
if custom_metadata:
for attribute in custom_metadata:
try:
setattr(check_metadata, attribute, custom_metadata[attribute])
except ValueError:
pass
finally:
return check_metadata
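In effect, `update_check_metadata` just overwrites the supported attributes. A minimal sketch with a stand-in class; Prowler's real metadata is a pydantic model, so invalid values raise and the original field is kept:

```
class FakeCheckMetadata:
    def __init__(self, severity):
        self.Severity = severity

check_metadata = FakeCheckMetadata(severity="medium")
custom_metadata = {"Severity": "critical"}

for attribute in custom_metadata:
    try:
        setattr(check_metadata, attribute, custom_metadata[attribute])
    except ValueError:
        pass  # keep the original value if the new one is invalid

print(check_metadata.Severity)  # critical
```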

View File

@@ -7,6 +7,7 @@ from prowler.config.config import (
check_current_version,
default_config_file_path,
default_output_directory,
valid_severities,
)
from prowler.providers.common.arguments import (
init_providers_parser,
@@ -49,6 +50,7 @@ Detailed documentation at https://docs.prowler.cloud
self.__init_exclude_checks_parser__()
self.__init_list_checks_parser__()
self.__init_config_parser__()
self.__init_custom_checks_metadata_parser__()
# Init Providers Arguments
init_providers_parser(self)
@@ -220,11 +222,11 @@ Detailed documentation at https://docs.prowler.cloud
group.add_argument(
"-s", "--services", nargs="+", help="List of services to be executed."
)
group.add_argument(
common_checks_parser.add_argument(
"--severity",
nargs="+",
help="List of severities to be executed [informational, low, medium, high, critical]",
choices=["informational", "low", "medium", "high", "critical"],
help=f"List of severities to be executed {valid_severities}",
choices=valid_severities,
)
group.add_argument(
"--compliance",
@@ -286,3 +288,15 @@ Detailed documentation at https://docs.prowler.cloud
default=default_config_file_path,
help="Set configuration file path",
)
def __init_custom_checks_metadata_parser__(self):
# CustomChecksMetadata
custom_checks_metadata_subparser = (
self.common_providers_parser.add_argument_group("Custom Checks Metadata")
)
custom_checks_metadata_subparser.add_argument(
"--custom-checks-metadata-file",
nargs="?",
default=None,
help="Path for the custom checks metadata YAML file. See example prowler/config/custom_checks_metadata_example.yaml for reference and format. See more in https://docs.prowler.cloud/en/latest/tutorials/custom-checks-metadata/",
)

View File

@@ -330,7 +330,7 @@ def fill_compliance(output_options, finding, audit_info, file_descriptors):
Requirements_Attributes_SubSection=attribute.SubSection,
Requirements_Attributes_SubGroup=attribute.SubGroup,
Requirements_Attributes_Service=attribute.Service,
Requirements_Attributes_Soc_Type=attribute.Soc_Type,
Requirements_Attributes_Type=attribute.Type,
Status=finding.status,
StatusExtended=finding.status_extended,
ResourceId=finding.resource_id,
@@ -401,7 +401,8 @@ def display_compliance_table(
"Bajo": 0,
}
if finding.status == "FAIL":
fail_count += 1
if attribute.Tipo != "recomendacion":
fail_count += 1
marcos[marco_categoria][
"Estado"
] = f"{Fore.RED}NO CUMPLE{Style.RESET_ALL}"
@@ -443,8 +444,8 @@ def display_compliance_table(
)
overview_table = [
[
f"{Fore.RED}{round(fail_count/(fail_count+pass_count)*100, 2)}% ({fail_count}) NO CUMPLE{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count/(fail_count+pass_count)*100, 2)}% ({pass_count}) CUMPLE{Style.RESET_ALL}",
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) NO CUMPLE{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) CUMPLE{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
@@ -538,8 +539,8 @@ def display_compliance_table(
)
overview_table = [
[
f"{Fore.RED}{round(fail_count/(fail_count+pass_count)*100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count/(fail_count+pass_count)*100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))
@@ -609,8 +610,8 @@ def display_compliance_table(
)
overview_table = [
[
f"{Fore.RED}{round(fail_count/(fail_count+pass_count)*100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count/(fail_count+pass_count)*100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
f"{Fore.RED}{round(fail_count / (fail_count + pass_count) * 100, 2)}% ({fail_count}) FAIL{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / (fail_count + pass_count) * 100, 2)}% ({pass_count}) PASS{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))

View File

@@ -12,8 +12,6 @@ from prowler.config.config import (
from prowler.lib.logger import logger
from prowler.lib.outputs.html import add_html_header
from prowler.lib.outputs.models import (
Aws_Check_Output_CSV,
Azure_Check_Output_CSV,
Check_Output_CSV_AWS_CIS,
Check_Output_CSV_AWS_ISO27001_2013,
Check_Output_CSV_AWS_Well_Architected,
@@ -21,19 +19,18 @@ from prowler.lib.outputs.models import (
Check_Output_CSV_GCP_CIS,
Check_Output_CSV_Generic_Compliance,
Check_Output_MITRE_ATTACK,
Gcp_Check_Output_CSV,
generate_csv_fields,
)
from prowler.lib.utils.utils import file_exists, open_file
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.azure.lib.audit_info.models import Azure_Audit_Info
from prowler.providers.common.outputs import get_provider_output_model
from prowler.providers.gcp.lib.audit_info.models import GCP_Audit_Info
def initialize_file_descriptor(
filename: str,
output_mode: str,
audit_info: AWS_Audit_Info,
audit_info: Any,
format: Any = None,
) -> TextIOWrapper:
"""Open/Create the output file. If needed include headers or the required format"""
@@ -75,27 +72,15 @@ def fill_file_descriptors(output_modes, output_directory, output_filename, audit
for output_mode in output_modes:
if output_mode == "csv":
filename = f"{output_directory}/{output_filename}{csv_file_suffix}"
if isinstance(audit_info, AWS_Audit_Info):
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Aws_Check_Output_CSV,
)
if isinstance(audit_info, Azure_Audit_Info):
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Azure_Check_Output_CSV,
)
if isinstance(audit_info, GCP_Audit_Info):
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
Gcp_Check_Output_CSV,
)
output_model = get_provider_output_model(
audit_info.__class__.__name__
)
file_descriptor = initialize_file_descriptor(
filename,
output_mode,
audit_info,
output_model,
)
file_descriptors.update({output_mode: file_descriptor})
elif output_mode == "json":

View File

@@ -338,8 +338,9 @@ def add_html_footer(output_filename, output_directory):
def get_aws_html_assessment_summary(audit_info):
try:
if isinstance(audit_info, AWS_Audit_Info):
if not audit_info.profile:
audit_info.profile = "ENV"
profile = (
audit_info.profile if audit_info.profile is not None else "default"
)
if isinstance(audit_info.audited_regions, list):
audited_regions = " ".join(audit_info.audited_regions)
elif not audit_info.audited_regions:
@@ -361,7 +362,7 @@ def get_aws_html_assessment_summary(audit_info):
</li>
<li class="list-group-item">
<b>AWS-CLI Profile:</b> """
+ audit_info.profile
+ profile
+ """
</li>
<li class="list-group-item">
@@ -406,7 +407,7 @@ def get_azure_html_assessment_summary(audit_info):
if isinstance(audit_info, Azure_Audit_Info):
printed_subscriptions = []
for key, value in audit_info.identity.subscriptions.items():
intermediate = key + " : " + value
intermediate = f"{key} : {value}"
printed_subscriptions.append(intermediate)
# check if identity is str(coming from SP) or dict(coming from browser or)

View File

@@ -31,6 +31,7 @@ from prowler.lib.outputs.models import (
unroll_dict_to_list,
)
from prowler.lib.utils.utils import hash_sha512, open_file, outputs_unix_timestamp
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
def fill_json_asff(finding_output, audit_info, finding, output_options):
@@ -50,9 +51,9 @@ def fill_json_asff(finding_output, audit_info, finding, output_options):
finding_output.GeneratorId = "prowler-" + finding.check_metadata.CheckID
finding_output.AwsAccountId = audit_info.audited_account
finding_output.Types = finding.check_metadata.CheckType
finding_output.FirstObservedAt = (
finding_output.UpdatedAt
) = finding_output.CreatedAt = timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ")
finding_output.FirstObservedAt = finding_output.UpdatedAt = (
finding_output.CreatedAt
) = timestamp_utc.strftime("%Y-%m-%dT%H:%M:%SZ")
finding_output.Severity = Severity(
Label=finding.check_metadata.Severity.upper()
)
@@ -155,6 +156,11 @@ def fill_json_ocsf(audit_info, finding, output_options) -> Check_Output_JSON_OCS
aws_org_uid = ""
account = None
org = None
profile = ""
if isinstance(audit_info, AWS_Audit_Info):
profile = (
audit_info.profile if audit_info.profile is not None else "default"
)
if (
hasattr(audit_info, "organizations_metadata")
and audit_info.organizations_metadata
@@ -249,9 +255,7 @@ def fill_json_ocsf(audit_info, finding, output_options) -> Check_Output_JSON_OCS
original_time=outputs_unix_timestamp(
output_options.unix_timestamp, timestamp
),
profiles=[audit_info.profile]
if hasattr(audit_info, "organizations_metadata")
else [],
profiles=[profile],
)
compliance = Compliance_OCSF(
status=generate_json_ocsf_status(finding.status),

View File

@@ -55,9 +55,9 @@ def generate_provider_output_csv(
data["resource_name"] = finding.resource_name
data["subscription"] = finding.subscription
data["tenant_domain"] = audit_info.identity.domain
data[
"finding_unique_id"
] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.subscription}-{finding.resource_id}"
data["finding_unique_id"] = (
f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.subscription}-{finding.resource_id}"
)
data["compliance"] = unroll_dict(
get_check_compliance(finding, provider, output_options)
)
@@ -68,9 +68,9 @@ def generate_provider_output_csv(
data["resource_name"] = finding.resource_name
data["project_id"] = finding.project_id
data["location"] = finding.location.lower()
data[
"finding_unique_id"
] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.project_id}-{finding.resource_id}"
data["finding_unique_id"] = (
f"prowler-{provider}-{finding.check_metadata.CheckID}-{finding.project_id}-{finding.resource_id}"
)
data["compliance"] = unroll_dict(
get_check_compliance(finding, provider, output_options)
)
@@ -82,9 +82,9 @@ def generate_provider_output_csv(
data["region"] = finding.region
data["resource_id"] = finding.resource_id
data["resource_arn"] = finding.resource_arn
data[
"finding_unique_id"
] = f"prowler-{provider}-{finding.check_metadata.CheckID}-{audit_info.audited_account}-{finding.region}-{finding.resource_id}"
data["finding_unique_id"] = (
f"prowler-{provider}-{finding.check_metadata.CheckID}-{audit_info.audited_account}-{finding.region}-{finding.resource_id}"
)
data["compliance"] = unroll_dict(
get_check_compliance(finding, provider, output_options)
)
@@ -614,8 +614,8 @@ class Check_Output_CSV_Generic_Compliance(BaseModel):
Requirements_Attributes_Section: Optional[str]
Requirements_Attributes_SubSection: Optional[str]
Requirements_Attributes_SubGroup: Optional[str]
Requirements_Attributes_Service: str
Requirements_Attributes_Soc_Type: Optional[str]
Requirements_Attributes_Service: Optional[str]
Requirements_Attributes_Type: Optional[str]
Status: str
StatusExtended: str
ResourceId: str

View File

@@ -13,7 +13,7 @@ def send_slack_message(token, channel, stats, provider, audit_info):
response = client.chat_postMessage(
username="Prowler",
icon_url=square_logo_img,
channel="#" + channel,
channel=f"#{channel}",
blocks=create_message_blocks(identity, logo, stats),
)
return response
@@ -35,7 +35,7 @@ def create_message_identity(provider, audit_info):
elif provider == "azure":
printed_subscriptions = []
for key, value in audit_info.identity.subscriptions.items():
intermediate = "- *" + key + ": " + value + "*\n"
intermediate = f"- *{key}: {value}*\n"
printed_subscriptions.append(intermediate)
identity = f"Azure Subscriptions:\n{''.join(printed_subscriptions)}"
logo = azure_logo
@@ -66,14 +66,14 @@ def create_message_blocks(identity, logo, stats):
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"\n:white_check_mark: *{stats['total_pass']} Passed findings* ({round(stats['total_pass']/stats['findings_count']*100,2)}%)\n",
"text": f"\n:white_check_mark: *{stats['total_pass']} Passed findings* ({round(stats['total_pass'] / stats['findings_count'] * 100 , 2)}%)\n",
},
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": f"\n:x: *{stats['total_fail']} Failed findings* ({round(stats['total_fail']/stats['findings_count']*100,2)}%)\n ",
"text": f"\n:x: *{stats['total_fail']} Failed findings* ({round(stats['total_fail'] / stats['findings_count'] * 100 , 2)}%)\n ",
},
},
{

View File

@@ -96,8 +96,8 @@ def display_summary_table(
print("\nOverview Results:")
overview_table = [
[
f"{Fore.RED}{round(fail_count/len(findings)*100, 2)}% ({fail_count}) Failed{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count/len(findings)*100, 2)}% ({pass_count}) Passed{Style.RESET_ALL}",
f"{Fore.RED}{round(fail_count / len(findings) * 100, 2)}% ({fail_count}) Failed{Style.RESET_ALL}",
f"{Fore.GREEN}{round(pass_count / len(findings) * 100, 2)}% ({pass_count}) Passed{Style.RESET_ALL}",
]
]
print(tabulate(overview_table, tablefmt="rounded_grid"))

View File

@@ -10,7 +10,10 @@ from prowler.config.config import aws_services_json_file
from prowler.lib.check.check import list_modules, recover_checks_from_service
from prowler.lib.logger import logger
from prowler.lib.utils.utils import open_file, parse_json_file
from prowler.providers.aws.config import AWS_STS_GLOBAL_ENDPOINT_REGION
from prowler.providers.aws.config import (
AWS_STS_GLOBAL_ENDPOINT_REGION,
ROLE_SESSION_NAME,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Assume_Role, AWS_Audit_Info
from prowler.providers.aws.lib.credentials.credentials import create_sts_session
@@ -113,9 +116,15 @@ def assume_role(
sts_endpoint_region: str = None,
) -> dict:
try:
role_session_name = (
assumed_role_info.role_session_name
if assumed_role_info.role_session_name
else ROLE_SESSION_NAME
)
assume_role_arguments = {
"RoleArn": assumed_role_info.role_arn,
"RoleSessionName": "ProwlerAsessmentSession",
"RoleSessionName": role_session_name,
"DurationSeconds": assumed_role_info.session_duration,
}
@@ -152,23 +161,31 @@ def input_role_mfa_token_and_code() -> tuple[str]:
def generate_regional_clients(
service: str, audit_info: AWS_Audit_Info, global_service: bool = False
service: str,
audit_info: AWS_Audit_Info,
) -> dict:
"""generate_regional_clients returns a dict with the following format for the given service:
Example:
{"eu-west-1": boto3_service_client}
"""
try:
regional_clients = {}
service_regions = get_available_aws_service_regions(service, audit_info)
# Check if it is global service to gather only one region
if global_service:
if service_regions:
if audit_info.profile_region in service_regions:
service_regions = [audit_info.profile_region]
service_regions = service_regions[:1]
for region in service_regions:
# Get the regions enabled for the account and get the intersection with the service available regions
if audit_info.enabled_regions:
enabled_regions = service_regions.intersection(audit_info.enabled_regions)
else:
enabled_regions = service_regions
for region in enabled_regions:
regional_client = audit_info.audit_session.client(
service, region_name=region, config=audit_info.session_config
)
regional_client.region = region
regional_clients[region] = regional_client
return regional_clients
except Exception as error:
logger.error(
@@ -176,6 +193,26 @@ def generate_regional_clients(
)
def get_aws_enabled_regions(audit_info: AWS_Audit_Info) -> set:
"""get_aws_enabled_regions returns a set of enabled AWS regions"""
# EC2 Client to check enabled regions
service = "ec2"
default_region = get_default_region(service, audit_info)
ec2_client = audit_info.audit_session.client(service, region_name=default_region)
enabled_regions = set()
try:
# With AllRegions=False we only get the enabled regions for the account
for region in ec2_client.describe_regions(AllRegions=False).get("Regions", []):
enabled_regions.add(region.get("RegionName"))
except Exception as error:
logger.warning(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
return enabled_regions
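The key detail is `AllRegions=False`, which makes EC2 return only the regions enabled for the account. A standalone sketch of the underlying call, assuming boto3 and valid credentials:

```
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
# AllRegions=False -> only enabled regions (opted in or opt-in-not-required)
response = ec2.describe_regions(AllRegions=False)
enabled_regions = {region["RegionName"] for region in response.get("Regions", [])}
print(sorted(enabled_regions))
```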
def get_aws_available_regions():
try:
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
@@ -216,6 +253,8 @@ def get_checks_from_input_arn(audit_resources: list, provider: str) -> set:
service = "efs"
elif service == "logs":
service = "cloudwatch"
elif service == "cognito":
service = "cognito-idp"
# Check if Prowler has checks in service
try:
list_modules(provider, service)
@@ -267,17 +306,18 @@ def get_regions_from_audit_resources(audit_resources: list) -> set:
return audited_regions
def get_available_aws_service_regions(service: str, audit_info: AWS_Audit_Info) -> list:
def get_available_aws_service_regions(service: str, audit_info: AWS_Audit_Info) -> set:
# Get json locally
actual_directory = pathlib.Path(os.path.dirname(os.path.realpath(__file__)))
with open_file(f"{actual_directory}/{aws_services_json_file}") as f:
data = parse_json_file(f)
# Check if it is a subservice
json_regions = data["services"][service]["regions"][audit_info.audited_partition]
if audit_info.audited_regions: # Check for input aws audit_info.audited_regions
regions = list(
set(json_regions).intersection(audit_info.audited_regions)
) # Get common regions between input and json
json_regions = set(
data["services"][service]["regions"][audit_info.audited_partition]
)
# Check for input aws audit_info.audited_regions
if audit_info.audited_regions:
# Get common regions between input and json
regions = json_regions.intersection(audit_info.audited_regions)
else: # Get all regions from json of the service and partition
regions = json_regions
return regions

File diff suppressed because it is too large

View File

@@ -1,2 +1,3 @@
AWS_STS_GLOBAL_ENDPOINT_REGION = "us-east-1"
BOTO3_USER_AGENT_EXTRA = "APN_1826889"
ROLE_SESSION_NAME = "ProwlerAssessmentSession"

View File

@@ -135,32 +135,31 @@ def allowlist_findings(
def is_allowlisted(
allowlist: dict, audited_account: str, check: str, region: str, resource: str, tags
allowlist: dict,
audited_account: str,
check: str,
finding_region: str,
finding_resource: str,
finding_tags,
):
try:
allowlisted_checks = {}
# By default is not allowlisted
is_finding_allowlisted = False
# First set account key from allowlist dict
if audited_account in allowlist["Accounts"]:
allowlisted_checks = allowlist["Accounts"][audited_account]["Checks"]
# If there is a *, it affects to all accounts
# This cannot be elif since in the case of * and single accounts we
# want to merge allowlisted checks from * to the other accounts check list
if "*" in allowlist["Accounts"]:
checks_multi_account = allowlist["Accounts"]["*"]["Checks"]
allowlisted_checks.update(checks_multi_account)
# Test if it is allowlisted
if is_allowlisted_in_check(
allowlisted_checks,
audited_account,
audited_account,
check,
region,
resource,
tags,
):
is_finding_allowlisted = True
# We always check all the accounts present in the allowlist
# if one allowlists the finding we set the finding as allowlisted
for account in allowlist["Accounts"]:
if account == audited_account or account == "*":
if is_allowlisted_in_check(
allowlist["Accounts"][account]["Checks"],
audited_account,
check,
finding_region,
finding_resource,
finding_tags,
):
is_finding_allowlisted = True
break
return is_finding_allowlisted
except Exception as error:
@@ -171,43 +170,69 @@ def is_allowlisted(
def is_allowlisted_in_check(
allowlisted_checks, audited_account, account, check, region, resource, tags
allowlisted_checks,
audited_account,
check,
finding_region,
finding_resource,
finding_tags,
):
try:
# Default value is not allowlisted
is_check_allowlisted = False
for allowlisted_check, allowlisted_check_info in allowlisted_checks.items():
# map lambda to awslambda
allowlisted_check = re.sub("^lambda", "awslambda", allowlisted_check)
# extract the exceptions
# Check if the finding is excepted
exceptions = allowlisted_check_info.get("Exceptions")
# Check if there are exceptions
if is_excepted(
exceptions,
audited_account,
region,
resource,
tags,
finding_region,
finding_resource,
finding_tags,
):
# Break loop and return default value since is excepted
break
allowlisted_regions = allowlisted_check_info.get("Regions")
allowlisted_resources = allowlisted_check_info.get("Resources")
allowlisted_tags = allowlisted_check_info.get("Tags")
allowlisted_tags = allowlisted_check_info.get("Tags", "*")
# We need to set the allowlisted_tags if None, "" or [], so the falsy helps
if not allowlisted_tags:
allowlisted_tags = "*"
# If there is a *, it affects to all checks
if (
"*" == allowlisted_check
or check == allowlisted_check
or re.search(allowlisted_check, check)
):
if is_allowlisted_in_region(
allowlisted_regions,
allowlisted_resources,
allowlisted_tags,
region,
resource,
tags,
allowlisted_in_check = True
allowlisted_in_region = is_allowlisted_in_region(
allowlisted_regions, finding_region
)
allowlisted_in_resource = is_allowlisted_in_resource(
allowlisted_resources, finding_resource
)
allowlisted_in_tags = is_allowlisted_in_tags(
allowlisted_tags, finding_tags
)
# For a finding to be allowlisted requires the following set to True:
# - allowlisted_in_check -> True
# - allowlisted_in_region -> True
# - allowlisted_in_tags -> True
# - allowlisted_in_resource -> True
# - excepted -> False
if (
allowlisted_in_check
and allowlisted_in_region
and allowlisted_in_tags
and allowlisted_in_resource
):
is_check_allowlisted = True
@@ -220,25 +245,11 @@ def is_allowlisted_in_check(
def is_allowlisted_in_region(
allowlist_regions, allowlist_resources, allowlisted_tags, region, resource, tags
allowlisted_regions,
finding_region,
):
try:
# By default is not allowlisted
is_region_allowlisted = False
# If there is a *, it affects to all regions
if "*" in allowlist_regions or region in allowlist_regions:
for elem in allowlist_resources:
if is_allowlisted_in_tags(
allowlisted_tags,
elem,
resource,
tags,
):
is_region_allowlisted = True
# if we find the element there is no point in continuing with the loop
break
return is_region_allowlisted
return __is_item_matched__(allowlisted_regions, finding_region)
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -246,25 +257,9 @@ def is_allowlisted_in_region(
sys.exit(1)
def is_allowlisted_in_tags(allowlisted_tags, elem, resource, tags):
def is_allowlisted_in_tags(allowlisted_tags, finding_tags):
try:
# By default is not allowlisted
is_tag_allowlisted = False
# Check if it is an *
if elem == "*":
elem = ".*"
# Check if there are allowlisted tags
if allowlisted_tags:
for allowlisted_tag in allowlisted_tags:
if re.search(allowlisted_tag, tags):
is_tag_allowlisted = True
break
else:
if re.search(elem, resource):
is_tag_allowlisted = True
return is_tag_allowlisted
return __is_item_matched__(allowlisted_tags, finding_tags)
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
@@ -272,7 +267,25 @@ def is_allowlisted_in_tags(allowlisted_tags, elem, resource, tags):
sys.exit(1)
def is_excepted(exceptions, audited_account, region, resource, tags):
def is_allowlisted_in_resource(allowlisted_resources, finding_resource):
try:
return __is_item_matched__(allowlisted_resources, finding_resource)
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
)
sys.exit(1)
def is_excepted(
exceptions,
audited_account,
finding_region,
finding_resource,
finding_tags,
):
"""is_excepted returns True if the account, region, resource and tags are excepted"""
try:
excepted = False
is_account_excepted = False
@@ -281,39 +294,57 @@ def is_excepted(exceptions, audited_account, region, resource, tags):
is_tag_excepted = False
if exceptions:
excepted_accounts = exceptions.get("Accounts", [])
is_account_excepted = __is_item_matched__(
excepted_accounts, audited_account
)
excepted_regions = exceptions.get("Regions", [])
is_region_excepted = __is_item_matched__(excepted_regions, finding_region)
excepted_resources = exceptions.get("Resources", [])
is_resource_excepted = __is_item_matched__(
excepted_resources, finding_resource
)
excepted_tags = exceptions.get("Tags", [])
if exceptions:
if audited_account in excepted_accounts:
is_account_excepted = True
if region in excepted_regions:
is_region_excepted = True
for excepted_resource in excepted_resources:
if re.search(excepted_resource, resource):
is_resource_excepted = True
for tag in excepted_tags:
if tag in tags:
is_tag_excepted = True
if (
(
(excepted_accounts and is_account_excepted)
or not excepted_accounts
)
and (
(excepted_regions and is_region_excepted)
or not excepted_regions
)
and (
(excepted_resources and is_resource_excepted)
or not excepted_resources
)
and ((excepted_tags and is_tag_excepted) or not excepted_tags)
):
excepted = True
is_tag_excepted = __is_item_matched__(excepted_tags, finding_tags)
if (
not is_account_excepted
and not is_region_excepted
and not is_resource_excepted
and not is_tag_excepted
):
excepted = False
elif (
(is_account_excepted or not excepted_accounts)
and (is_region_excepted or not excepted_regions)
and (is_resource_excepted or not excepted_resources)
and (is_tag_excepted or not excepted_tags)
):
excepted = True
return excepted
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
)
sys.exit(1)
def __is_item_matched__(matched_items, finding_items):
"""__is_item_matched__ return True if any of the matched_items are present in the finding_items, otherwise returns False."""
try:
is_item_matched = False
if matched_items and (finding_items or finding_items == ""):
for item in matched_items:
if item == "*":
item = ".*"
if re.search(item, finding_items):
is_item_matched = True
break
return is_item_matched
except Exception as error:
logger.critical(
f"{error.__class__.__name__} -- {error}[{error.__traceback__.tb_lineno}]"
)
sys.exit(1)
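All the region, resource, and tag helpers now funnel into `__is_item_matched__`, which treats `*` as match-everything and any other pattern as a regex searched inside the finding value. A standalone sketch of that rule:

```
import re

def item_matched(patterns, value):
    # Mirrors __is_item_matched__: "*" becomes ".*", others are re.search'd.
    if not patterns or value is None:
        return False
    return any(
        re.search(".*" if pattern == "*" else pattern, value)
        for pattern in patterns
    )

print(item_matched(["*"], "eu-west-1"))      # True: wildcard matches anything
print(item_matched(["us-.*"], "eu-west-1"))  # False: regex does not match
print(item_matched(["prod"], "my-prod-db"))  # True: substring search
```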

View File

@@ -1,6 +1,8 @@
from argparse import ArgumentTypeError, Namespace
from re import fullmatch, search
from prowler.providers.aws.aws_provider import get_aws_available_regions
from prowler.providers.aws.config import ROLE_SESSION_NAME
from prowler.providers.aws.lib.arn.arn import arn_type
@@ -26,6 +28,13 @@ def init_parser(self):
help="ARN of the role to be assumed",
# Pending ARN validation
)
aws_auth_subparser.add_argument(
"--role-session-name",
nargs="?",
default=ROLE_SESSION_NAME,
help="An identifier for the assumed role session. Defaults to ProwlerAssessmentSession",
type=validate_role_session_name,
)
aws_auth_subparser.add_argument(
"--sts-endpoint-region",
nargs="?",
@@ -84,6 +93,11 @@ def init_parser(self):
action="store_true",
help="Skip updating previous findings of Prowler in Security Hub",
)
aws_security_hub_subparser.add_argument(
"--send-sh-only-fails",
action="store_true",
help="Send only Prowler failed findings to SecurityHub",
)
# AWS Quick Inventory
aws_quick_inventory_subparser = aws_parser.add_argument_group("Quick Inventory")
aws_quick_inventory_subparser.add_argument(
@@ -99,6 +113,7 @@ def init_parser(self):
"-B",
"--output-bucket",
nargs="?",
type=validate_bucket,
default=None,
help="Custom output bucket, requires -M <mode> and it can work also with -o flag.",
)
@@ -106,6 +121,7 @@ def init_parser(self):
"-D",
"--output-bucket-no-assume",
nargs="?",
type=validate_bucket,
default=None,
help="Same as -B but do not use the assumed role credentials to put objects to the bucket, instead uses the initial credentials.",
)
@@ -126,6 +142,7 @@ def init_parser(self):
default=None,
help="Path for allowlist yaml file. See example prowler/config/aws_allowlist.yaml for reference and format. It also accepts AWS DynamoDB Table or Lambda ARNs or S3 URIs, see more in https://docs.prowler.cloud/en/latest/tutorials/allowlist/",
)
# Based Scans
aws_based_scans_subparser = aws_parser.add_argument_group("AWS Based Scans")
aws_based_scans_parser = aws_based_scans_subparser.add_mutually_exclusive_group()
@@ -178,9 +195,37 @@ def validate_arguments(arguments: Namespace) -> tuple[bool, str]:
# Handle if session_duration is not the default value or external_id is set
if (
arguments.session_duration and arguments.session_duration != 3600
) or arguments.external_id:
(arguments.session_duration and arguments.session_duration != 3600)
or arguments.external_id
or arguments.role_session_name != ROLE_SESSION_NAME
):
if not arguments.role:
return (False, "To use -I/-T options -R option is needed")
return (
False,
"To use -I/--external-id, -T/--session-duration or --role-session-name options -R/--role option is needed",
)
return (True, "")
def validate_bucket(bucket_name):
"""validate_bucket validates that the input bucket_name is valid"""
if search("(?!(^xn--|.+-s3alias$))^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$", bucket_name):
return bucket_name
else:
raise ArgumentTypeError(
"Bucket name must be valid (https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html)"
)
def validate_role_session_name(session_name):
"""
validates that the role session name is valid
Documentation: https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
"""
if fullmatch("[\w+=,.@-]{2,64}", session_name):
return session_name
else:
raise ArgumentTypeError(
"Role Session Name must be 2-64 characters long and consist only of upper- and lower-case alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@-"
)
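A quick usage sketch of the two validators wired into argparse; the parser below is illustrative, but the regexes and the ArgumentTypeError behavior are the ones from the code above:

from argparse import ArgumentParser, ArgumentTypeError
from re import fullmatch, search

def validate_bucket(bucket_name):
    if search("(?!(^xn--|.+-s3alias$))^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$", bucket_name):
        return bucket_name
    raise ArgumentTypeError("Bucket name must be valid")

def validate_role_session_name(session_name):
    if fullmatch(r"[\w+=,.@-]{2,64}", session_name):
        return session_name
    raise ArgumentTypeError("Role Session Name must be 2-64 characters long")

parser = ArgumentParser()
parser.add_argument("--output-bucket", type=validate_bucket)
parser.add_argument("--role-session-name", type=validate_role_session_name)

args = parser.parse_args(
    ["--output-bucket", "my-findings-bucket", "--role-session-name", "ProwlerAssessmentSession"]
)
# An invalid value makes argparse exit with e.g.
# "argument --output-bucket: Bucket name must be valid"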

View File

@@ -30,6 +30,7 @@ current_audit_info = AWS_Audit_Info(
session_duration=None,
external_id=None,
mfa_enabled=None,
role_session_name=None,
),
mfa_enabled=None,
audit_resources=None,
@@ -38,4 +39,5 @@ current_audit_info = AWS_Audit_Info(
audit_metadata=None,
audit_config=None,
ignore_unused_services=False,
enabled_regions=set(),
)

View File

@@ -1,4 +1,4 @@
from dataclasses import dataclass
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Optional
@@ -20,6 +20,7 @@ class AWS_Assume_Role:
session_duration: int
external_id: str
mfa_enabled: bool
role_session_name: str
@dataclass
@@ -53,3 +54,4 @@ class AWS_Audit_Info:
audit_metadata: Optional[Any] = None
audit_config: Optional[dict] = None
ignore_unused_services: bool = False
enabled_regions: set = field(default_factory=set)
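The switch to field(default_factory=set) is required, not cosmetic: dataclasses reject a mutable default such as a bare set() at class-definition time, and default_factory also gives each instance its own set. A minimal illustration (the class name is hypothetical):

from dataclasses import dataclass, field

@dataclass
class AuditScope:  # hypothetical, for illustration only
    # enabled_regions: set = set() would raise at import time:
    # ValueError: mutable default <class 'set'> for field enabled_regions is not allowed
    enabled_regions: set = field(default_factory=set)

a, b = AuditScope(), AuditScope()
a.enabled_regions.add("eu-west-1")
assert b.enabled_regions == set()  # each instance gets its own set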

View File

@@ -1,8 +1,11 @@
def is_account_only_allowed_in_condition(
condition_statement: dict, source_account: str
def is_condition_block_restrictive(
condition_statement: dict, source_account: str, is_cross_account_allowed=False
):
"""
is_account_only_allowed_in_condition parses the IAM Condition policy block and returns True if the source_account passed as argument is within, False if not.
is_condition_block_restrictive parses the IAM Condition policy block and, by default, returns True if the source_account passed as argument is within it, False if not.
If the argument is_cross_account_allowed is True, it tests whether the Condition block includes any of the allowlisted operators, returning True if it does, False if not.
@param condition_statement: dict with an IAM Condition block, e.g.:
{
@@ -54,23 +57,32 @@ def is_account_only_allowed_in_condition(
condition_statement[condition_operator][value],
list,
):
# if there is an arn/account without the source account -> we do not consider it safe
# here by default we assume it is true and look for false entries
is_condition_valid = True
for item in condition_statement[condition_operator][value]:
if source_account not in item:
is_condition_valid = False
break
is_condition_key_restrictive = True
# if cross account is not allowed check for each condition block looking for accounts
# different than default
if not is_cross_account_allowed:
# if there is an arn/account without the source account -> we do not consider it safe
# here by default we assume it is true and look for false entries
for item in condition_statement[condition_operator][value]:
if source_account not in item:
is_condition_key_restrictive = False
break
if is_condition_key_restrictive:
is_condition_valid = True
# value is a string
elif isinstance(
condition_statement[condition_operator][value],
str,
):
if (
source_account
in condition_statement[condition_operator][value]
):
if is_cross_account_allowed:
is_condition_valid = True
else:
if (
source_account
in condition_statement[condition_operator][value]
):
is_condition_valid = True
return is_condition_valid
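To make the default (single-account) branch concrete, here is a hedged, self-contained miniature; the function name is illustrative, and the real implementation additionally filters on allowlisted condition operators and keys:

def pins_to_source_account(condition_statement, source_account):
    # Illustrative miniature: True only if every referenced value
    # contains the source account, i.e. no cross-account principals.
    for operator, keys in condition_statement.items():
        for key, value in keys.items():
            values = value if isinstance(value, list) else [value]
            if any(source_account not in item for item in values):
                return False
    return True

same_account = {"StringEquals": {"aws:SourceAccount": "123456789012"}}
cross_account = {"StringEquals": {"aws:SourceAccount": ["123456789012", "999988887777"]}}
assert pins_to_source_account(same_account, "123456789012")
assert not pins_to_source_account(cross_account, "123456789012")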

View File

@@ -211,9 +211,13 @@ def create_inventory_table(resources: list, resources_in_region: dict) -> dict:
def create_output(resources: list, audit_info: AWS_Audit_Info, args):
json_output = []
output_file = (
f"prowler-inventory-{audit_info.audited_account}-{output_file_timestamp}"
)
# Check if a custom output filename was provided; if not, set the default
if not hasattr(args, "output_filename") or args.output_filename is None:
output_file = (
f"prowler-inventory-{audit_info.audited_account}-{output_file_timestamp}"
)
else:
output_file = args.output_filename
for item in sorted(resources, key=lambda d: d["arn"]):
resource = {}
@@ -275,8 +279,8 @@ def create_output(resources: list, audit_info: AWS_Audit_Info, args):
f"\n{Fore.YELLOW}WARNING: Only resources that have or have had tags will appear (except for IAM and S3).\nSee more in https://docs.prowler.cloud/en/latest/tutorials/quick-inventory/#objections{Style.RESET_ALL}"
)
print("\nMore details in files:")
print(f" - CSV: {args.output_directory}/{output_file+csv_file_suffix}")
print(f" - JSON: {args.output_directory}/{output_file+json_file_suffix}")
print(f" - CSV: {args.output_directory}/{output_file + csv_file_suffix}")
print(f" - JSON: {args.output_directory}/{output_file + json_file_suffix}")
# Send output to S3 if needed (-B / -D)
for mode in ["json", "csv"]:

View File

@@ -1,5 +1,3 @@
import sys
from prowler.config.config import (
csv_file_suffix,
html_file_suffix,
@@ -29,7 +27,7 @@ def send_to_s3_bucket(
else: # Compliance output mode
filename = f"{output_filename}_{output_mode}{csv_file_suffix}"
logger.info(f"Sending outputs to S3 bucket {output_bucket_name}")
logger.info(f"Sending output file {filename} to S3 bucket {output_bucket_name}")
# File location
file_name = output_directory + "/" + filename
@@ -41,10 +39,9 @@ def send_to_s3_bucket(
s3_client.upload_file(file_name, output_bucket_name, object_name)
except Exception as error:
logger.critical(
logger.error(
f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}] -- {error}"
)
sys.exit(1)
def get_s3_object_path(output_directory: str) -> str:

View File

@@ -14,9 +14,11 @@ def prepare_security_hub_findings(
findings: [], audit_info: AWS_Audit_Info, output_options, enabled_regions: []
) -> dict:
security_hub_findings_per_region = {}
# Create a key per region
for region in audit_info.audited_regions:
# Create a key per audited region
for region in enabled_regions:
security_hub_findings_per_region[region] = []
for finding in findings:
# We don't send the INFO findings to AWS Security Hub
if finding.status == "INFO":
@@ -27,7 +29,9 @@ def prepare_security_hub_findings(
continue
# Handle quiet mode
if output_options.is_quiet and finding.status != "FAIL":
if (
output_options.is_quiet or output_options.send_sh_only_fails
) and finding.status != "FAIL":
continue
# Get the finding region
@@ -47,8 +51,10 @@ def prepare_security_hub_findings(
def verify_security_hub_integration_enabled_per_region(
partition: str,
region: str,
session: session.Session,
aws_account_number: str,
) -> bool:
f"""verify_security_hub_integration_enabled returns True if the {SECURITY_HUB_INTEGRATION_NAME} is enabled for the given region. Otherwise returns false."""
prowler_integration_enabled = False
@@ -62,7 +68,8 @@ def verify_security_hub_integration_enabled_per_region(
security_hub_client.describe_hub()
# Check if Prowler integration is enabled in Security Hub
if "prowler/prowler" not in str(
security_hub_prowler_integration_arn = f"arn:{partition}:securityhub:{region}:{aws_account_number}:product-subscription/{SECURITY_HUB_INTEGRATION_NAME}"
if security_hub_prowler_integration_arn not in str(
security_hub_client.list_enabled_products_for_import()
):
logger.error(
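For reference, the exact-ARN membership test introduced here can be exercised standalone. The boto3 call is the real Security Hub API; the integration name is inferred from the old "prowler/prowler" substring check, and pagination is ignored for brevity:

import boto3

SECURITY_HUB_INTEGRATION_NAME = "prowler/prowler"  # inferred from the replaced check

def prowler_integration_enabled(partition, region, account_id, session=None):
    session = session or boto3.session.Session()
    client = session.client("securityhub", region_name=region)
    expected_arn = (
        f"arn:{partition}:securityhub:{region}:{account_id}"
        f":product-subscription/{SECURITY_HUB_INTEGRATION_NAME}"
    )
    # Returns {"ProductSubscriptions": ["arn:...", ...]}; first page only here
    enabled = client.list_enabled_products_for_import()["ProductSubscriptions"]
    return expected_arn in enabled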

View File

@@ -1,17 +1,21 @@
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed
from prowler.lib.logger import logger
from prowler.providers.aws.aws_provider import (
generate_regional_clients,
get_default_region,
)
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
MAX_WORKERS = 10
class AWSService:
"""The AWSService class offers a parent class for each AWS Service to generate:
- AWS Regional Clients
- Shared information like the account ID and ARN, the AWS partition and the checks audited
- AWS Session
- Thread pool for the __threading_call__
- Also handles if the AWS Service is Global
"""
@@ -34,9 +38,7 @@ class AWSService:
# Generate Regional Clients
if not global_service:
self.regional_clients = generate_regional_clients(
self.service, audit_info, global_service
)
self.regional_clients = generate_regional_clients(self.service, audit_info)
# Get a single region and client if the service needs it (e.g. AWS Global Service)
# We cannot include this within an else because some services need both the regional_clients
@@ -44,14 +46,40 @@ class AWSService:
self.region = get_default_region(self.service, audit_info)
self.client = self.session.client(self.service, self.region)
# Thread pool for __threading_call__
self.thread_pool = ThreadPoolExecutor(max_workers=MAX_WORKERS)
def __get_session__(self):
return self.session
def __threading_call__(self, call):
threads = []
for regional_client in self.regional_clients.values():
threads.append(threading.Thread(target=call, args=(regional_client,)))
for t in threads:
t.start()
for t in threads:
t.join()
def __threading_call__(self, call, iterator=None):
# Use the provided iterator, or default to self.regional_clients
items = iterator if iterator is not None else self.regional_clients.values()
# Determine the total count for logging
item_count = len(items)
# Trim leading and trailing underscores from the call's name
call_name = call.__name__.strip("_")
# Add Capitalization
call_name = " ".join([x.capitalize() for x in call_name.split("_")])
# Print a message based on the call's name, and whether it is regional or processing a list of items
if iterator is None:
logger.info(
f"{self.service.upper()} - Starting threads for '{call_name}' function across {item_count} regions..."
)
else:
logger.info(
f"{self.service.upper()} - Starting threads for '{call_name}' function to process {item_count} items..."
)
# Submit tasks to the thread pool
futures = [self.thread_pool.submit(call, item) for item in items]
# Wait for all tasks to complete
for future in as_completed(futures):
try:
future.result() # Raises exceptions from the thread, if any
except Exception:
# Handle exceptions if necessary
pass # Replace 'pass' with any additional exception handling logic. Currently handled within the called function
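A stripped-down, runnable version of this submit/as_completed pattern shows why result() is still called even though the exception is swallowed: it re-raises inside the loop, so failures surface where they can be handled or logged:

from concurrent.futures import ThreadPoolExecutor, as_completed

MAX_WORKERS = 10

def scan_region(region):
    # Stand-in for a call against one regional client
    return f"scanned {region}"

regions = ["us-east-1", "eu-west-1", "ap-southeast-2"]
pool = ThreadPoolExecutor(max_workers=MAX_WORKERS)
futures = [pool.submit(scan_region, region) for region in regions]
for future in as_completed(futures):
    try:
        print(future.result())  # raises here if scan_region raised in its thread
    except Exception:
        pass  # the service code relies on logging inside the called function
pool.shutdown(wait=True)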

View File

@@ -19,17 +19,23 @@ class accessanalyzer_enabled(Check):
f"IAM Access Analyzer {analyzer.name} is enabled."
)
elif analyzer.status == "NOT_AVAILABLE":
report.status = "FAIL"
report.status_extended = (
f"IAM Access Analyzer in account {analyzer.name} is not enabled."
)
else:
report.status = "FAIL"
report.status_extended = (
f"IAM Access Analyzer {analyzer.name} is not active."
)
if analyzer.status == "NOT_AVAILABLE":
report.status = "FAIL"
report.status_extended = f"IAM Access Analyzer in account {analyzer.name} is not enabled."
else:
report.status = "FAIL"
report.status_extended = (
f"IAM Access Analyzer {analyzer.name} is not active."
)
if (
accessanalyzer_client.audit_config.get(
"allowlist_non_default_regions", False
)
and not analyzer.region == accessanalyzer_client.region
):
report.status = "WARNING"
findings.append(report)

View File

@@ -85,21 +85,36 @@ class AccessAnalyzer(AWSService):
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
# TODO: We need to include ListFindingsV2
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/accessanalyzer/client/list_findings_v2.html
def __list_findings__(self):
logger.info("AccessAnalyzer - Listing Findings per Analyzer...")
try:
for analyzer in self.analyzers:
if analyzer.status == "ACTIVE":
regional_client = self.regional_clients[analyzer.region]
list_findings_paginator = regional_client.get_paginator(
"list_findings"
try:
if analyzer.status == "ACTIVE":
regional_client = self.regional_clients[analyzer.region]
list_findings_paginator = regional_client.get_paginator(
"list_findings"
)
for page in list_findings_paginator.paginate(
analyzerArn=analyzer.arn
):
for finding in page["findings"]:
analyzer.findings.append(Finding(id=finding["id"]))
except ClientError as error:
if error.response["Error"]["Code"] == "ValidationException":
logger.warning(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
for page in list_findings_paginator.paginate(
analyzerArn=analyzer.arn
):
for finding in page["findings"]:
analyzer.findings.append(Finding(id=finding["id"]))
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -19,7 +19,11 @@ class acm_certificates_expiration_check(Check):
report.resource_tags = certificate.tags
else:
report.status = "FAIL"
report.status_extended = f"ACM Certificate {certificate.id} for {certificate.name} is about to expire in {DAYS_TO_EXPIRE_THRESHOLD} days."
if certificate.expiration_days < 0:
report.status_extended = f"ACM Certificate {certificate.id} for {certificate.name} has expired ({abs(certificate.expiration_days)} days ago)."
else:
report.status_extended = f"ACM Certificate {certificate.id} for {certificate.name} is about to expire in {certificate.expiration_days} days."
report.resource_id = certificate.id
report.resource_details = certificate.name
report.resource_arn = certificate.arn
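The sign convention this message relies on, negative expiration_days once the certificate is past its NotAfter date, would follow from a plain date difference; the field name comes from the check, while the computation below is an assumption:

from datetime import datetime, timezone

def expiration_days(not_after: datetime) -> int:
    # Negative once the certificate's NotAfter date has passed
    return (not_after - datetime.now(timezone.utc)).days

days = expiration_days(datetime(2023, 1, 1, tzinfo=timezone.utc))
if days < 0:
    print(f"has expired ({abs(days)} days ago)")
else:
    print(f"is about to expire in {days} days")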

View File

@@ -1,7 +1,7 @@
{
"Provider": "aws",
"CheckID": "apigateway_restapi_authorizers_enabled",
"CheckTitle": "Check if API Gateway has configured authorizers.",
"CheckTitle": "Check if API Gateway has configured authorizers at api or method level.",
"CheckAliases": [
"apigateway_authorizers_enabled"
],
@@ -13,7 +13,7 @@
"ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
"Severity": "medium",
"ResourceType": "AwsApiGatewayRestApi",
"Description": "Check if API Gateway has configured authorizers.",
"Description": "Check if API Gateway has configured authorizers at api or method level.",
"Risk": "If no authorizer is enabled anyone can use the service.",
"RelatedUrl": "",
"Remediation": {

View File

@@ -13,12 +13,41 @@ class apigateway_restapi_authorizers_enabled(Check):
report.resource_id = rest_api.name
report.resource_arn = rest_api.arn
report.resource_tags = rest_api.tags
# if there are no authorizers at api level and the resources have no methods (default case) ->
report.status = "FAIL"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have an authorizer configured at api level."
if rest_api.authorizer:
report.status = "PASS"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has an authorizer configured."
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has an authorizer configured at api level"
else:
report.status = "FAIL"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have an authorizer configured."
# with no authorizer at api level, track whether the resources have methods and whether all of them are authorized
resources_have_methods = False
all_methods_authorized = True
resource_paths_with_unauthorized_methods = []
for resource in rest_api.resources:
# if the resource has methods test if they have all configured authorizer
if resource.resource_methods:
resources_have_methods = True
for (
http_method,
authorization_method,
) in resource.resource_methods.items():
if authorization_method == "NONE":
all_methods_authorized = False
unauthorized_method = (
f"{resource.path} -> {http_method}"
)
resource_paths_with_unauthorized_methods.append(
unauthorized_method
)
# if at least one resource has methods and all of them are authorized
if all_methods_authorized and resources_have_methods:
report.status = "PASS"
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} has all methods authorized."
# if some methods are not authorized -> list them
elif not all_methods_authorized:
report.status_extended = f"API Gateway {rest_api.name} ID {rest_api.id} does not have authorizers at api level and the following paths and methods are unauthorized: {'; '.join(resource_paths_with_unauthorized_methods)}."
findings.append(report)
return findings

View File

@@ -17,6 +17,7 @@ class APIGateway(AWSService):
self.__get_authorizers__()
self.__get_rest_api__()
self.__get_stages__()
self.__get_resources__()
def __get_rest_apis__(self, regional_client):
logger.info("APIGateway - Getting Rest APIs...")
@@ -53,7 +54,9 @@ class APIGateway(AWSService):
if authorizers:
rest_api.authorizer = True
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_rest_api__(self):
logger.info("APIGateway - Describing Rest API...")
@@ -64,7 +67,9 @@ class APIGateway(AWSService):
if rest_api_info["endpointConfiguration"]["types"] == ["PRIVATE"]:
rest_api.public_endpoint = False
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_stages__(self):
logger.info("APIGateway - Getting stages for Rest APIs...")
@@ -95,7 +100,46 @@ class APIGateway(AWSService):
)
)
except Exception as error:
logger.error(f"{error.__class__.__name__}: {error}")
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __get_resources__(self):
logger.info("APIGateway - Getting API resources...")
try:
for rest_api in self.rest_apis:
regional_client = self.regional_clients[rest_api.region]
get_resources_paginator = regional_client.get_paginator("get_resources")
for page in get_resources_paginator.paginate(restApiId=rest_api.id):
for resource in page["items"]:
id = resource["id"]
resource_methods = []
methods_auth = {}
for resource_method in resource.get(
"resourceMethods", {}
).keys():
resource_methods.append(resource_method)
for resource_method in resource_methods:
if resource_method != "OPTIONS":
method_config = regional_client.get_method(
restApiId=rest_api.id,
resourceId=id,
httpMethod=resource_method,
)
auth_type = method_config["authorizationType"]
methods_auth.update({resource_method: auth_type})
rest_api.resources.append(
PathResourceMethods(
path=resource["path"], resource_methods=methods_auth
)
)
except Exception as error:
logger.error(
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
class Stage(BaseModel):
@@ -107,6 +151,11 @@ class Stage(BaseModel):
tags: Optional[list] = []
class PathResourceMethods(BaseModel):
path: str
resource_methods: dict
class RestAPI(BaseModel):
id: str
arn: str
@@ -116,3 +165,4 @@ class RestAPI(BaseModel):
public_endpoint: bool = True
stages: list[Stage] = []
tags: Optional[list] = []
resources: list[PathResourceMethods] = []
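The get_resources/get_method traversal added above maps directly onto boto3; a standalone sketch (region and API id are placeholders):

import boto3

client = boto3.client("apigateway", region_name="us-east-1")
rest_api_id = "a1b2c3"  # placeholder

methods_auth = {}
paginator = client.get_paginator("get_resources")
for page in paginator.paginate(restApiId=rest_api_id):
    for resource in page["items"]:
        for http_method in resource.get("resourceMethods", {}):
            if http_method == "OPTIONS":
                continue  # skipped by the check as well
            method = client.get_method(
                restApiId=rest_api_id,
                resourceId=resource["id"],
                httpMethod=http_method,
            )
            methods_auth[f"{resource['path']} -> {http_method}"] = method["authorizationType"]

# Any "NONE" value here is what the check reports as an unauthorized method
print(methods_auth)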

View File

@@ -14,13 +14,13 @@ class apigatewayv2_api_access_logging_enabled(Check):
if stage.logging:
report.status = "PASS"
report.status_extended = f"API Gateway V2 {api.name} ID {api.id} in stage {stage.name} has access logging enabled."
report.resource_id = api.name
report.resource_id = f"{api.name}-{stage.name}"
report.resource_arn = api.arn
report.resource_tags = api.tags
else:
report.status = "FAIL"
report.status_extended = f"API Gateway V2 {api.name} ID {api.id} in stage {stage.name} has access logging disabled."
report.resource_id = api.name
report.resource_id = f"{api.name}-{stage.name}"
report.resource_arn = api.arn
report.resource_tags = api.tags
findings.append(report)

View File

@@ -54,10 +54,8 @@ class Athena(AWSService):
)
wg_configuration = wg.get("WorkGroup").get("Configuration")
self.workgroups[
workgroup.arn
].enforce_workgroup_configuration = wg_configuration.get(
"EnforceWorkGroupConfiguration", False
self.workgroups[workgroup.arn].enforce_workgroup_configuration = (
wg_configuration.get("EnforceWorkGroupConfiguration", False)
)
# We include an empty EncryptionConfiguration to handle the case where the workgroup does not have encryption configured
@@ -77,9 +75,9 @@ class Athena(AWSService):
encryption_configuration = EncryptionConfiguration(
encryption_option=encryption, encrypted=True
)
self.workgroups[
workgroup.arn
].encryption_configuration = encryption_configuration
self.workgroups[workgroup.arn].encryption_configuration = (
encryption_configuration
)
except Exception as error:
logger.error(
f"{self.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -11,57 +11,55 @@ from prowler.providers.aws.services.awslambda.awslambda_client import awslambda_
class awslambda_function_no_secrets_in_code(Check):
def execute(self):
findings = []
for function in awslambda_client.functions.values():
if function.code:
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
report.resource_arn = function.arn
report.resource_tags = function.tags
if awslambda_client.functions:
for function, function_code in awslambda_client.__get_function_code__():
if function_code:
report = Check_Report_AWS(self.metadata())
report.region = function.region
report.resource_id = function.name
report.resource_arn = function.arn
report.resource_tags = function.tags
report.status = "PASS"
report.status_extended = (
f"No secrets found in Lambda function {function.name} code."
)
with tempfile.TemporaryDirectory() as tmp_dir_name:
function.code.code_zip.extractall(tmp_dir_name)
# List all files
files_in_zip = next(os.walk(tmp_dir_name))[2]
secrets_findings = []
for file in files_in_zip:
secrets = SecretsCollection()
with default_settings():
secrets.scan_file(f"{tmp_dir_name}/{file}")
detect_secrets_output = secrets.json()
if detect_secrets_output:
for (
file_name
) in (
detect_secrets_output.keys()
): # Appears that only 1 file is being scanned at a time, so could rework this
output_file_name = file_name.replace(
f"{tmp_dir_name}/", ""
)
secrets_string = ", ".join(
[
f"{secret['type']} on line {secret['line_number']}"
for secret in detect_secrets_output[file_name]
]
)
secrets_findings.append(
f"{output_file_name}: {secrets_string}"
)
report.status = "PASS"
report.status_extended = (
f"No secrets found in Lambda function {function.name} code."
)
with tempfile.TemporaryDirectory() as tmp_dir_name:
function_code.code_zip.extractall(tmp_dir_name)
# List all files
files_in_zip = next(os.walk(tmp_dir_name))[2]
secrets_findings = []
for file in files_in_zip:
secrets = SecretsCollection()
with default_settings():
secrets.scan_file(f"{tmp_dir_name}/{file}")
detect_secrets_output = secrets.json()
if detect_secrets_output:
for (
file_name
) in (
detect_secrets_output.keys()
): # Appears that only 1 file is being scanned at a time, so could rework this
output_file_name = file_name.replace(
f"{tmp_dir_name}/", ""
)
secrets_string = ", ".join(
[
f"{secret['type']} on line {secret['line_number']}"
for secret in detect_secrets_output[
file_name
]
]
)
secrets_findings.append(
f"{output_file_name}: {secrets_string}"
)
if secrets_findings:
final_output_string = "; ".join(secrets_findings)
report.status = "FAIL"
# report.status_extended = f"Potential {'secrets' if len(secrets_findings)>1 else 'secret'} found in Lambda function {function.name} code. {final_output_string}."
if len(secrets_findings) > 1:
report.status_extended = f"Potential secrets found in Lambda function {function.name} code -> {final_output_string}."
else:
report.status_extended = f"Potential secret found in Lambda function {function.name} code -> {final_output_string}."
# break // Don't break as there may be additional findings
if secrets_findings:
final_output_string = "; ".join(secrets_findings)
report.status = "FAIL"
report.status_extended = f"Potential {'secrets' if len(secrets_findings) > 1 else 'secret'} found in Lambda function {function.name} code -> {final_output_string}."
findings.append(report)
findings.append(report)
return findings
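The detect-secrets usage in this check can be exercised on its own; this is the same SecretsCollection API the check calls, run against a throwaway file (the keyword detector should flag the assignment below):

import tempfile

from detect_secrets import SecretsCollection
from detect_secrets.settings import default_settings

with tempfile.NamedTemporaryFile(mode="w", suffix=".py", delete=False) as tmp:
    tmp.write('AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"\n')
    path = tmp.name

secrets = SecretsCollection()
with default_settings():
    secrets.scan_file(path)

for file_name, found in secrets.json().items():
    for secret in found:
        print(f"{secret['type']} on line {secret['line_number']} in {file_name}")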

View File

@@ -42,7 +42,7 @@ class awslambda_function_no_secrets_in_variables(Check):
environment_variable_names = list(function.environment.keys())
secrets_string = ", ".join(
[
f"{secret['type']} in variable {environment_variable_names[int(secret['line_number'])-2]}"
f"{secret['type']} in variable {environment_variable_names[int(secret['line_number']) - 2]}"
for secret in detect_secrets_output[temp_env_data_file.name]
]
)

View File

@@ -1,6 +1,7 @@
import io
import json
import zipfile
from concurrent.futures import as_completed
from enum import Enum
from typing import Any, Optional
@@ -21,15 +22,6 @@ class Lambda(AWSService):
self.functions = {}
self.__threading_call__(self.__list_functions__)
self.__list_tags_for_resource__()
# We only want to retrieve the Lambda code if the
# awslambda_function_no_secrets_in_code check is set
if (
"awslambda_function_no_secrets_in_code"
in audit_info.audit_metadata.expected_checks
):
self.__threading_call__(self.__get_function__)
self.__threading_call__(self.__get_policy__)
self.__threading_call__(self.__get_function_url_config__)
@@ -70,28 +62,45 @@ class Lambda(AWSService):
f" {error}"
)
def __get_function__(self, regional_client):
logger.info("Lambda - Getting Function...")
try:
for function in self.functions.values():
if function.region == regional_client.region:
function_information = regional_client.get_function(
FunctionName=function.name
)
if "Location" in function_information["Code"]:
code_location_uri = function_information["Code"]["Location"]
raw_code_zip = requests.get(code_location_uri).content
self.functions[function.arn].code = LambdaCode(
location=code_location_uri,
code_zip=zipfile.ZipFile(io.BytesIO(raw_code_zip)),
)
def __get_function_code__(self):
logger.info("Lambda - Getting Function Code...")
# Use a thread pool to handle the queueing and execution of the __fetch_function_code__ tasks, up to max_workers tasks concurrently.
lambda_functions_to_fetch = {
self.thread_pool.submit(
self.__fetch_function_code__, function.name, function.region
): function
for function in self.functions.values()
}
for fetched_lambda_code in as_completed(lambda_functions_to_fetch):
function = lambda_functions_to_fetch[fetched_lambda_code]
try:
function_code = fetched_lambda_code.result()
if function_code:
yield function, function_code
except Exception as error:
logger.error(
f"{function.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
def __fetch_function_code__(self, function_name, function_region):
try:
regional_client = self.regional_clients[function_region]
function_information = regional_client.get_function(
FunctionName=function_name
)
if "Location" in function_information["Code"]:
code_location_uri = function_information["Code"]["Location"]
raw_code_zip = requests.get(code_location_uri).content
return LambdaCode(
location=code_location_uri,
code_zip=zipfile.ZipFile(io.BytesIO(raw_code_zip)),
)
except Exception as error:
logger.error(
f"{regional_client.region} --"
f" {error.__class__.__name__}[{error.__traceback__.tb_lineno}]:"
f" {error}"
f"{regional_client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
raise
def __get_policy__(self, regional_client):
logger.info("Lambda - Getting Policy...")

View File

@@ -152,5 +152,5 @@ class BackupReportPlan(BaseModel):
arn: str
region: str
name: str
last_attempted_execution_date: datetime
last_attempted_execution_date: Optional[datetime]
last_successful_execution_date: Optional[datetime]

View File

@@ -53,12 +53,12 @@ class CloudFront(AWSService):
distributions[distribution_id].logging_enabled = distribution_config[
"DistributionConfig"
]["Logging"]["Enabled"]
distributions[
distribution_id
].geo_restriction_type = GeoRestrictionType(
distribution_config["DistributionConfig"]["Restrictions"][
"GeoRestriction"
]["RestrictionType"]
distributions[distribution_id].geo_restriction_type = (
GeoRestrictionType(
distribution_config["DistributionConfig"]["Restrictions"][
"GeoRestriction"
]["RestrictionType"]
)
)
distributions[distribution_id].web_acl_id = distribution_config[
"DistributionConfig"
@@ -78,9 +78,9 @@ class CloudFront(AWSService):
"DefaultCacheBehavior"
].get("FieldLevelEncryptionId"),
)
distributions[
distribution_id
].default_cache_config = default_cache_config
distributions[distribution_id].default_cache_config = (
default_cache_config
)
except Exception as error:
logger.error(

View File

@@ -140,7 +140,16 @@ class Cloudtrail(AWSService):
error.response["Error"]["Code"]
== "InsightNotEnabledException"
):
continue
logger.warning(
f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
elif (
error.response["Error"]["Code"]
== "UnsupportedOperationException"
):
logger.warning(
f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
)
else:
logger.error(
f"{client.region} -- {error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"

View File

@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_changes_to_network_acls_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings
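This and the following cloudwatch_* checks all delegate to the same check_cloudwatch_log_metric_filter helper. Its body is not shown in this diff, but the inlined steps 1-3 it replaces suggest a reconstruction along these lines (plausible, not verbatim):

import re

def check_cloudwatch_log_metric_filter(pattern, trails, metric_filters, metric_alarms, report):
    # 1. Collect the CloudWatch log group of every CloudTrail trail
    log_groups = []
    for trail in trails:
        if trail.log_group_arn:
            log_groups.append(trail.log_group_arn.split(":")[6])
    # 2. Look for a metric filter on those log groups matching the pattern
    for metric_filter in metric_filters:
        if metric_filter.log_group in log_groups and re.search(
            pattern, metric_filter.pattern, flags=re.DOTALL
        ):
            report.resource_id = metric_filter.log_group
            report.resource_arn = metric_filter.arn
            report.region = metric_filter.region
            report.status = "FAIL"
            report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
            # 3. PASS only if an alarm is wired to that filter's metric
            for alarm in metric_alarms:
                if alarm.metric == metric_filter.metric:
                    report.status = "PASS"
                    report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
                    break
    return report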

View File

@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_changes_to_network_gateways_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings

View File

@@ -1,7 +1,7 @@
{
"Provider": "aws",
"CheckID": "cloudwatch_changes_to_network_route_tables_alarm_configured",
"CheckTitle": "Ensure a log metric filter and alarm exist for route table changes.",
"CheckTitle": "Ensure route table changes are monitored",
"CheckType": [
"Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
],
@@ -10,8 +10,8 @@
"ResourceIdTemplate": "arn:partition:cloudwatch:region:account-id:certificate/resource-id",
"Severity": "medium",
"ResourceType": "AwsCloudTrailTrail",
"Description": "Ensure a log metric filter and alarm exist for route table changes.",
"Risk": "Monitoring unauthorized API calls will help reveal application errors and may reduce time to detect malicious activity.",
"Description": "Real-time monitoring of API calls can be achieved by directing Cloud Trail Logs to CloudWatch Logs, or an external Security information and event management (SIEM)environment, and establishing corresponding metric filters and alarms. Routing tablesare used to route network traffic between subnets and to network gateways. It isrecommended that a metric filter and alarm be established for changes to route tables.",
"Risk": "CloudWatch is an AWS native service that allows you to ob serve and monitor resources and applications. CloudTrail Logs can also be sent to an external Security informationand event management (SIEM) environment for monitoring and alerting.Monitoring changes to route tables will help ensure that all VPC traffic flows through anexpected path and prevent any accidental or intentional modifications that may lead touncontrolled network traffic. An alarm should be triggered every time an AWS API call isperformed to create, replace, delete, or disassociate a Route Table.",
"RelatedUrl": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html",
"Remediation": {
"Code": {
@@ -21,12 +21,12 @@
"Terraform": "https://docs.bridgecrew.io/docs/monitoring_13#fix---buildtime"
},
"Recommendation": {
"Text": "It is recommended that a metric filter and alarm be established for unauthorized requests.",
"Text": "If you are using CloudTrails and CloudWatch, perform the following to setup the metric filter, alarm, SNS topic, and subscription: 1. Create a metric filter based on filter pattern provided which checks for route table changes and the <cloudtrail_log_group_name> taken from audit step 1. aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> -- filter-name `<route_table_changes_metric>` --metric-transformations metricName= `<route_table_changes_metric>` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }' Note: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together. 2. Create an SNS topic that the alarm will notify aws sns create-topic --name <sns_topic_name> Note: you can execute this command once and then re-use the same topic for all monitoring alarms. 3. Create an SNS subscription to the topic created in step 2 aws sns subscribe --topic-arn <sns_topic_arn> --protocol <protocol_for_sns> - -notification-endpoint <sns_subscription_endpoints> Note: you can execute this command once and then re-use the SNS subscription for all monitoring alarms. 4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 aws cloudwatch put-metric-alarm --alarm-name `<route_table_changes_alarm>` --metric-name `<route_table_changes_metric>` --statistic Sum --period 300 - -threshold 1 --comparison-operator GreaterThanOrEqualToThreshold -- evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions <sns_topic_arn>",
"Url": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html"
}
},
"Categories": [],
"DependsOn": [],
"RelatedTo": [],
"Notes": "Logging and Monitoring"
"Notes": ""
}

View File

@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,12 +5,15 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
class cloudwatch_changes_to_network_route_tables_alarm_configured(Check):
def execute(self):
pattern = r"\$\.eventName\s*=\s*.?CreateRoute.+\$\.eventName\s*=\s*.?CreateRouteTable.+\$\.eventName\s*=\s*.?ReplaceRoute.+\$\.eventName\s*=\s*.?ReplaceRouteTableAssociation.+\$\.eventName\s*=\s*.?DeleteRouteTable.+\$\.eventName\s*=\s*.?DeleteRoute.+\$\.eventName\s*=\s*.?DisassociateRouteTable.?"
pattern = r"\$\.eventSource\s*=\s*.?ec2.amazonaws.com.+\$\.eventName\s*=\s*.?CreateRoute.+\$\.eventName\s*=\s*.?CreateRouteTable.+\$\.eventName\s*=\s*.?ReplaceRoute.+\$\.eventName\s*=\s*.?ReplaceRouteTableAssociation.+\$\.eventName\s*=\s*.?DeleteRouteTable.+\$\.eventName\s*=\s*.?DeleteRoute.+\$\.eventName\s*=\s*.?DisassociateRouteTable.?"
findings = []
report = Check_Report_AWS(self.metadata())
report.status = "FAIL"
@@ -22,26 +23,13 @@ class cloudwatch_changes_to_network_route_tables_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings

View File

@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_changes_to_vpcs_alarm_configured(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings

View File

@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -24,26 +25,13 @@ class cloudwatch_log_metric_filter_and_alarm_for_aws_config_configuration_change
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings

View File

@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -24,26 +25,13 @@ class cloudwatch_log_metric_filter_and_alarm_for_cloudtrail_configuration_change
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings

View File

@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_log_metric_filter_authentication_failures(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings

View File

@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_log_metric_filter_aws_organizations_changes(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings

View File

@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,13 @@ class cloudwatch_log_metric_filter_disable_or_scheduled_deletion_of_kms_cmk(Chec
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings

View File

@@ -1,5 +1,3 @@
import re
from prowler.lib.check.models import Check, Check_Report_AWS
from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
cloudtrail_client,
@@ -7,6 +5,9 @@ from prowler.providers.aws.services.cloudtrail.cloudtrail_client import (
from prowler.providers.aws.services.cloudwatch.cloudwatch_client import (
cloudwatch_client,
)
from prowler.providers.aws.services.cloudwatch.lib.metric_filters import (
check_cloudwatch_log_metric_filter,
)
from prowler.providers.aws.services.cloudwatch.logs_client import logs_client
@@ -22,26 +23,14 @@ class cloudwatch_log_metric_filter_for_s3_bucket_policy_changes(Check):
report.region = cloudwatch_client.region
report.resource_id = cloudtrail_client.audited_account
report.resource_arn = cloudtrail_client.audited_account_arn
# 1. Iterate for CloudWatch Log Group in CloudTrail trails
log_groups = []
for trail in cloudtrail_client.trails:
if trail.log_group_arn:
log_groups.append(trail.log_group_arn.split(":")[6])
# 2. Describe metric filters for previous log groups
for metric_filter in logs_client.metric_filters:
if metric_filter.log_group in log_groups:
if re.search(pattern, metric_filter.pattern, flags=re.DOTALL):
report.resource_id = metric_filter.log_group
report.resource_arn = metric_filter.arn
report.region = metric_filter.region
report.status = "FAIL"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} but no alarms associated."
# 3. Check if there is an alarm for the metric
for alarm in cloudwatch_client.metric_alarms:
if alarm.metric == metric_filter.metric:
report.status = "PASS"
report.status_extended = f"CloudWatch log group {metric_filter.log_group} found with metric filter {metric_filter.name} and alarms set."
break
report = check_cloudwatch_log_metric_filter(
pattern,
cloudtrail_client.trails,
logs_client.metric_filters,
cloudwatch_client.metric_alarms,
report,
)
findings.append(report)
return findings

Some files were not shown because too many files have changed in this diff.