Compare commits


99 Commits

Author SHA1 Message Date
Pepe Fagoaga
8818f47333 fix(ecr): typo (#1438) 2022-10-27 19:47:06 +02:00
Pepe Fagoaga
3cffe72273 fix(ecr): Platform (#1437) 2022-10-27 19:30:59 +02:00
Nacho Rivera
135aaca851 fix(): delete old commented versions (#1436) 2022-10-27 16:19:33 +02:00
Nacho Rivera
cf8df051de feat(README): Include versions info (#1435)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2022-10-27 12:51:53 +02:00
Pepe Fagoaga
bef42f3f2d feat(release): Prowler 2.12.0 (#1434) 2022-10-27 12:10:37 +02:00
Nacho Rivera
3d86bf1705 fix(): Cloudtrail checks (#1433) 2022-10-27 11:47:00 +02:00
Olivier Gendron
5a43ec951a docs(spelling): Typo corrections (#1394) 2022-10-24 12:58:44 +02:00
Nacho Rivera
b0e6ab6e31 feat(stable tag): Inclusion of stable tag point to last release (#1419) 2022-10-20 08:01:00 +02:00
Nacho Rivera
b7fb38cc9e fix(extra7184): Error handling GetSnapshotLimits api call (#1411) 2022-10-17 14:03:55 +02:00
Nacho Rivera
f29f7fc239 fix(extra7183): Exception handling error UnsupportedOperationException (#1410) 2022-10-17 13:39:17 +02:00
Nacho Rivera
2997ff0f1c fix(extra77): Deleted resource id from exception results (#1409) 2022-10-17 13:17:51 +02:00
Nacho Rivera
11dc0aa5b2 feat(extra7111): Exception handling (#1408) 2022-10-17 12:51:09 +02:00
Sergio Garcia
8bddb9b265 fix(extra740): Remove additional info and fix max_items (#1405) 2022-10-14 11:37:31 +02:00
Sergio Garcia
689e292585 fix(region_bugs): Remove duplicate outputs (#1390) 2022-10-13 13:18:37 +02:00
Sergio Garcia
bff2aabda6 fix(missing permissions): Add missing permissions of checks (#1403) 2022-10-13 12:59:48 +02:00
Kay Agahd
4b29293362 fix(check_extra77): Add missing check_resource_id to the report (#1402) 2022-10-13 09:53:31 +02:00
Sergio Garcia
4e24103dc6 feat(slack): add Slack badge to README (#1401) 2022-10-13 09:42:06 +02:00
Sergio Garcia
3b90347849 fix(inventory): quick inventory input fixed (#1397)
Co-authored-by: sergargar <sergio@verica.io>
2022-10-10 17:21:46 +02:00
Pepe Fagoaga
6a7a037cec delete(shortcut.sh): Remove ScoutSuite (#1388) 2022-10-06 16:42:09 +02:00
Gábor Lipták
927c13b9c6 chore(actions): Bump Trufflehog to v3.13.0 (#1382) 2022-10-06 09:24:54 +02:00
Nacho Rivera
11cc8e998b fix(checks): Handle checks not returning result (#1383)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2022-10-05 13:50:49 +02:00
Nacho Rivera
4a71739c56 Prwlr 879 fix prowler 2 x checks (#1380)
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2022-10-03 10:19:02 +02:00
Pepe Fagoaga
aedc5cd0ad fix(postgresql): Missing space (#1374) 2022-09-22 15:25:32 +02:00
Pepe Fagoaga
3d81307e56 fix(postgresql): Connector field (#1372) 2022-09-20 10:26:20 +02:00
Andrew Walker
918661bd7a Dockerfile build instructions (#1370) 2022-09-16 11:14:37 +02:00
Nacho Rivera
99f9abe3f6 feat(db-connector): Include UUID for findings ID (#1368) 2022-09-14 17:23:38 +02:00
Sergio Garcia
f2950764f0 feat(audit_id): add optional audit_id field to postgres connector (#1362)
Co-authored-by: sergargar <sergio@verica.io>
2022-09-13 13:29:19 +02:00
Pepe Fagoaga
d9777a68c7 chore(lint&test): Prowler 3.0 (#1357) 2022-09-01 16:37:10 +02:00
Richard Carpenter
2a4cc9a5f8 feat(group): CIS Critical Security Controls v8 (#1347)
Co-authored-by: sergargar <sergio@verica.io>
2022-08-31 15:14:04 +02:00
Ignacio Dominguez
1f0c210926 feat(extra7195): Added check for dependency confusion in codeartifact (#1329)
Co-authored-by: sergargar <sergio@verica.io>
2022-08-31 09:49:50 +02:00
JArmandoG
dd64c7d226 fix(check120): correct AWS support policy name (#1328)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2022-08-23 11:34:25 +01:00
JArmandoG
865f79f5b3 fix(quick_inventory): Handle math expression (#1283) 2022-08-05 12:55:07 +02:00
Pepe Fagoaga
1f8a4c1022 fix(credential_report): Do not generate for 117 and 118 (#1322) 2022-08-05 11:03:59 +02:00
Pepe Fagoaga
1e422f20aa fix(security-groups): Include TCP as the IpProtocol (#1323) 2022-08-05 11:02:35 +02:00
Pepe Fagoaga
29eda28bf3 docs(outputs): structure (#1313) 2022-08-04 10:05:08 +02:00
Pepe Fagoaga
f67f0cc66d chore(issues): Link Q&A (#1305) 2022-08-03 12:46:51 +02:00
Pepe Fagoaga
721cafa0cd fix(appstream): Handle timeout errors (#1296) 2022-08-02 12:30:53 +02:00
Kay Agahd
c1d60054e9 feat(extra780): Check for Cognito or SAML authentication on OpenSearch (#1291)
* extend check_extra780 to check for cognito or SAML authentication on opensearch

* chore(extra780): Error handling

Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2022-08-02 09:51:38 +02:00
Pepe Fagoaga
b95b3f68d3 fix(permissions): Include missing appstream:DescribeFleets permission (#1278)
* fix(permissions): AWS AppStream

Include missing appstream:DescribeFleets permission

* fix(permissions): AWS AppStream
2022-08-02 09:47:04 +02:00
Jonathan Jenkyn
81b6e27eb8 feat(checks): Adding commands for checks 117 and 118 (#1289)
* Adding commands for checks 117 and 118

* fix(check118): Minor fixes and error handling

* fix(check117): Minor fixes and error handling

Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2022-08-02 09:18:46 +02:00
William
d69678424b fix(extra712): changed Macie service detection (#1286)
* changed Macie service detection

* fix(regions): add region context and more.

Co-authored-by: sergargar <sergio@verica.io>
2022-07-28 13:53:54 -04:00
Pepe Fagoaga
a43c1aceec fix(check12): Improve remediation (#1281) 2022-07-26 14:37:35 -04:00
Pepe Fagoaga
f70cf8d81e fix(ci): Release edited (#1276) 2022-07-21 17:44:26 +02:00
Pepe Fagoaga
83b6c79203 fix(ci): Remove check-update (#1275) 2022-07-21 17:33:28 +02:00
Andrew
1192c038b2 docs(readme): Fix spelling errors (#1274) 2022-07-21 17:06:03 +02:00
Pepe Fagoaga
4ebbf6553e chore(release): 2.11.0 (#1272) 2022-07-21 10:48:32 +02:00
r8bhavneet
c501d63382 docs(readme): Fix spelling (#1271) 2022-07-21 10:42:40 +02:00
Toni de la Fuente
72d6d3f535 feat(inventory): Prowler quick inventory including IAM resources (#1258)
* chore(inventory): option included in main

* chore(inventory): quick inventory

* chore(inventory): functional version

* chore(inventory): functional version without echo

* Update include/quick_inventory

Co-authored-by: Pepe Fagoaga <pepe@verica.io>

* Update prowler

Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>

* Added new line at report line

* Added more information from IAM

Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2022-07-21 10:37:28 +02:00
Mitch
ddd34dc9cc fix(extra7173): Correct check and alternative name (#1270) 2022-07-20 08:36:34 +02:00
Sergio Garcia
03b1c10d13 fix(codebuild): expired token error using Instance Metadata
Co-authored-by: sergargar <sergio@verica.io>
2022-07-14 07:32:01 +02:00
Sergio Garcia
4cd5b8fd04 fix(codebuild): expired token error (#1262) 2022-07-12 07:38:44 +02:00
Phil Massyn
f0ce17182b feat(ecr_lifecycle): Check Lifecycle policy (#1260)
* Create checks_7194

ECR Repositories contain docker containers.  When automated processes create containers, the old ones tend to take up space.  With a lot of containers on the system, the account owner will be paying additional fees for images that are no longer in use.  By defining a lifecycle policy, a best practice is followed by reducing the total volume of data being consumed.

* Minor changes

* fix: Include bash header

Co-authored-by: n4ch04 <nachor1992@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2022-07-11 13:03:31 +02:00
Sergio Garcia
2a8a7d844b fix(apigatewayv2): handle BadRequestException (#1261)
Co-authored-by: sergargar <sergio@verica.io>
2022-07-11 12:21:39 +02:00
Pepe Fagoaga
ff33f426e5 docs(readme): Update inventory and checks (#1257)
* docs(readme): Update inventory and checks

* docs(readme): inventory path

Co-authored-by: Toni de la Fuente <toni@blyx.com>

* Update README.md

Co-authored-by: Toni de la Fuente <toni@blyx.com>

Co-authored-by: Toni de la Fuente <toni@blyx.com>
2022-07-08 12:42:46 +02:00
Toni de la Fuente
f691046c1f feat(inventory): Prowler quick inventory (#1245)
* chore(inventory): option included in main

* chore(inventory): quick inventory

* chore(inventory): functional version

* chore(inventory): functional version without echo

* Update include/quick_inventory

Co-authored-by: Pepe Fagoaga <pepe@verica.io>

* Update prowler

Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>

* Added new line at report line

Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2022-07-08 12:41:54 +02:00
Pepe Fagoaga
9fad8735b8 fix(Dockerfile): Prowler path (#1254) 2022-07-07 10:03:07 +02:00
Pepe Fagoaga
c632055517 fix(dockerfile): Python path (#1250) 2022-07-06 07:54:37 +02:00
Sergio Garcia
fd850790d5 fix(add-checks-regions): Missing regions in checks (#1247)
* add regions to checks

* add root as resource

Co-authored-by: sergargar <sergio@verica.io>
2022-07-04 09:46:08 +02:00
Sergio Garcia
912d5d7f8c fix(postgres): Fix postgres connector issues. (#1244)
* fix(postgres): Fix postgres connector issues.

* fix(postgres): Update documentation

Co-authored-by: sergargar <sergio@verica.io>
2022-06-30 18:12:33 +02:00
Pepe Fagoaga
d88a136ac3 feat(db-connector): Include env variables (#1236)
* feat(db-connector): Include env variables

* fix(typo)

* fix(psql-test): Remove PGPASSWORD
2022-06-30 08:43:41 +02:00
Pepe Fagoaga
172484cf08 feat(dockerfile): Include psql client in the Prowler scanner image (#1238)
* fix(dockerignore): Include files

* fix(dockerfile): Keep python2 and organize

* feat(db-connector): Include postgres dependencies

* feat(dockerfile): Include hadolint pre-commit
2022-06-30 08:28:29 +02:00
Pepe Fagoaga
821083639a fix(bckCredentials): Do nothing if no initial creds (#1239) 2022-06-29 16:52:08 +02:00
rajarshidas
e4f0f3ec87 feat(check): Ensure default internet access from Amazon AppStream fleet should be disabled. (#1233)
Co-authored-by: sergargar <sergio@verica.io>
2022-06-29 12:51:58 +02:00
rajarshidas
cc6302f7b8 feat(checks): Amazon AppStream checks (#1216)
Co-authored-by: sergargar <sergio@verica.io>
2022-06-29 12:31:42 +02:00
Bayron Carranza
c89fd82856 feat(check7164): 365 days or more in a Cloudwatch log retention should be consider PASS (#1240)
* 365 DAYS or More Retention log group in cloudwatch

* fix(extra7162): Fix comparison errors

Also include minor changes to texts

* fix(extra7162): Set as Pass log groups that never expires

* fix(typo)

Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2022-06-28 08:58:41 +02:00
Pepe Fagoaga
0e29a92d42 fix(extra7162): Query AWS log groups using LOG_GROUP_RETENTION_PERIOD_DAYS (#1232) 2022-06-27 09:18:39 +02:00
Sergio Garcia
835d8ffe5d feat(Actions): Update refresh_aws_services_regions.yml (#1227) 2022-06-23 11:21:50 +02:00
Sergio Garcia
21ee2068a6 feat(actions): Create refresh_aws_services_regions.yml (#1225) 2022-06-23 11:07:26 +02:00
Sergio Garcia
0ad149942b fix(security_hub_integration): Treat failed findings as failed in Security Hub (#1219)
Co-authored-by: sergargar <sergio@verica.io>
2022-06-22 14:03:16 +02:00
Nacho Rivera
66305768c0 fix(instance metadata): Missing raw flag in JQ parser (#1214) 2022-06-21 10:14:12 +02:00
Sergio Garcia
05f98fe993 fix(junit_xml output): Fix XML output integration (#1210)
Co-authored-by: sergargar <sergio@verica.io>
2022-06-20 13:27:54 +02:00
rajarshidas
89416f37af feat(check): Directory Service - Ensure Radius server is using the recommended security protocol (#1203)
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
2022-06-20 11:37:02 +02:00
Pepe Fagoaga
7285ddcb4e feat(actions): Trigger (#1209) 2022-06-20 10:38:19 +02:00
Pepe Fagoaga
8993a4f707 fix(actions): Dockerfile path (#1208) 2022-06-20 09:22:40 +02:00
Sergio Garcia
633d7bd8a8 fix(instance-metadata): Credentials recovering (#1207)
* fix(instance-metadata): Credentials recovering

* fix(expr): Dockerfile to root and expr in SESSION_TIME_REMAINING.

Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: sergargar <sergio@verica.io>
2022-06-17 14:23:56 +02:00
Pepe Fagoaga
3944ea2055 fix(session_duration): Use jq with TZ=UTC (#1195) 2022-06-15 13:25:43 +02:00
zsecducna
d85d0f5877 fix(extra767): Remove false positive (#1198)
* Remove fail positive

Exclude distributions that does not support `POST` requests

* fix(extra767): Overall changes

- Quoted and braced variables
- Fix DefaultCacheBehavior twice in a AWS CLI query
- Use regex =~ to match values

* fix(check767): Change textInfo for textPass

* fix(extra767): Include AWS CLI error handling

Co-authored-by: Pepe Fagoaga <pepe@verica.io>
2022-06-15 09:38:56 +02:00
Pepe Fagoaga
d32a7986a5 fix(shellcheck): Main variables (#1194) 2022-06-14 10:43:15 +02:00
Pepe Fagoaga
71813425bd fix(pre-commit): Recover shellcheck (#1193) 2022-06-14 07:46:12 +02:00
Pepe Fagoaga
da000b54ca refactor(Prowler): Main logic refactor (#1189)
* fix(aws_profile_loader): New functions

* fix(shellcheck): Temporary remove Shellcheck

* fix(aws_cli_detector): new function

* fix(jq_detector): New function

* fix(os_detector): New function

* fix(output_bucket): Output bucket input check in main

* fix(python_detector): deleted unused python detector

* fix(credentials): credentials check out of whoami

* [break]refactor(main)

* [BREAK] Get list of checks parsing all input options

* [break]refactor(main): execute checks functions

* [break]refactor(main): move functions to libs

* fix(validations): custom check validation and typos

* refactor(validate_options): Include comments

* fix(custom_checks): Minor fixes

* refactor(closing_files): include libraries

* refactor(loader): Include ignored checks

* refactor(main): Fix shellcheck

* refactor(loader): beautify

* refactor(monochrome): without variables

* refactor(modes): MODES array not needed

* refactor(whoami): get error from AWSCLI

* refactor(secrets-detector)

* refactor(secrets-detector)

* fix(html_scoring): html scoring was fixed.

* fix(load_checks_from_file)

* fix(color-code): Print if not mono

* fix(not extra): Fixed if EXCLUDE_CHECK_ID is empty

* fix(IFS): Restore default IFS once modes are parsed

* fix(bucket): validate before whoami

* fix(bucket): validate before whoami

Co-authored-by: n4ch04 <nachor1992@gmail.com>
Co-authored-by: sergargar <sergio@verica.io>
Co-authored-by: Nacho Rivera <59198746+n4ch04@users.noreply.github.com>
2022-06-13 17:34:31 +02:00
Sergio Garcia
74a9b42d9f Update codebuild-prowler-audit-account-cfn.yaml (#1192) 2022-06-13 12:17:31 +02:00
Nacho Rivera
f9322ab3aa fix(outputs): Replace each comma occurrence before sending to csv file (#1188) 2022-06-08 09:19:50 +02:00
Pepe Fagoaga
5becaca2c4 fix(extra7187): Remove commas from the metadata (#1187) 2022-06-08 09:02:38 +02:00
Sergio Garcia
50a670fbc4 fix(codebuild_update): AWS CLI and permissions update. (#1183) 2022-06-07 14:49:22 +02:00
Sergio Garcia
48f405a696 fix(check119_remediation): Update check remediation text. (#1185) 2022-06-07 14:48:13 +02:00
Nacho Rivera
bc56c4242e refactor(outputs): Consolidate Prowler output functions (#1180)
* chore(db providers): db providers first version

* chore(db provider): added db provider setup into Readme

* fix(csv_line): csv_line out of conditional

* fix(README): text instead of varchar in table

* fix(help): help message extended

Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>

* fix(typo): Update README.md

Co-authored-by: Pepe Fagoaga <pepe@verica.io>

* fix(table): add if not exists

Co-authored-by: Pepe Fagoaga <pepe@verica.io>

* fix(typo): Readme postgreSQL

Co-authored-by: Pepe Fagoaga <pepe@verica.io>

* fix(db_connector): details to add a new provider

* fix(typo): Uppercase Prowler

Co-authored-by: Toni de la Fuente <toni@blyx.com>

* fix(prowler): deleted unused variable

* chore(checks): test db connector previous to send data

* chore(input tests): input tests moved to main

* fix(typo): Readme typos

* chore(table): table name from pgpass file

* fix(grep test): Added missing -E flag

* chore(table): check of table name and Readme

* chore(error colors): Added error colors

* chore(inputcheck): checks about mode and output inputs into main

* fix(inputs) custom output file name

* fix(outputs): comment profile

* chore(textXXX): both 3 textfunctions using general

* fix(allowlist): allowlist check included as function

* fix(headers): Add headers to certain output files

* fix(reformulate): change structure and delete comments

* fix(testing): Input test after load includes

* fix(variables): Added named vars

* fix(colors): Deleted unused colors

* fix(outputs): fine tuning

* fix(outputs): allowlist parameters read

* fix(allowlist): allowlist logic reformulated

* fix(REPREGION): REPREGION change by REGION_FROM_CHECK

Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Toni de la Fuente <toni@blyx.com>
2022-06-06 12:56:21 +02:00
Pepe Fagoaga
1b63256b9c fix(assume_role): Use date instead of jq (#1181)
* fix(date): Use  instead of date

* fix(assume_role): Use date instead of jq

JQ parses datetimes using the local timezone and not UTC
2022-06-03 08:31:43 -07:00
Sergio Garcia
7930b449b3 fix(apigateway_iam): Error handling and permissions for extra745. (#1176)
* fix(apigateway_iam): Error handling and permissions for extra745.

* Update check_extra745

Co-authored-by: sergargar <sergio@verica.io>
2022-06-02 15:16:43 +02:00
Pepe Fagoaga
e5cd42da55 fix(typo): Max session duration error message (#1179) 2022-06-02 15:08:30 +02:00
Sergio Garcia
2a54bbf901 fix(SQS_encryption_type): Add SQS encryption types to extra728. (#1175)
* fix(SQS_encryption_type): Add SQS encryption types to extra728.

* Update check_extra728

* Update check_extra728

Co-authored-by: sergargar <sergio@verica.io>
2022-06-02 15:01:02 +02:00
Nacho Rivera
2e134ed947 feat(db_connector): Create a PostgreSQL connector for Prowler (#1171)
* chore(db providers): db providers first version

* chore(db provider): added db provider setup into Readme

* fix(csv_line): csv_line out of conditional

* fix(README): text instead of varchar in table

* fix(help): help message extended

Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>

* fix(typo): Update README.md

Co-authored-by: Pepe Fagoaga <pepe@verica.io>

* fix(table): add if not exists

Co-authored-by: Pepe Fagoaga <pepe@verica.io>

* fix(typo): Readme postgreSQL

Co-authored-by: Pepe Fagoaga <pepe@verica.io>

* fix(db_connector): details to add a new provider

* fix(typo): Uppercase Prowler

Co-authored-by: Toni de la Fuente <toni@blyx.com>

* fix(prowler): deleted unused variable

* chore(checks): test db connector previous to send data

* chore(input tests): input tests moved to main

* fix(typo): Readme typos

* chore(table): table name from pgpass file

* fix(grep test): Added missing -E flag

* chore(table): check of table name and Readme

* chore(error colors): Added error colors

* fix(tablename): table name in readme

* fix(typo)

* fix(db_provider): Exact match

* fix(error): One line message

* chore(pgpass check): Check added for pgpass file

* fix(pgpass): pgpass file and permissions test

* fix(unused vars): Deleted unused vars

* fix(TOP_PID): Deleted TOP_PID unused var and comment

* chore(db tests): Credentials, database and table tests added

* fix(empty pgpass): Look for empty fields at pgpass file

Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Toni de la Fuente <toni@blyx.com>
2022-06-02 13:15:14 +02:00
Sergio Garcia
ba727391db fix(runtimes_extra762): Detect nodejs versions correctly. (#1177)
Co-authored-by: sergargar <sergio@verica.io>
2022-06-02 13:14:22 +02:00
Sergio Garcia
d4346149fa fix(severity): High severity for check extra7185 (#1178) 2022-06-01 14:04:36 +02:00
Pepe Fagoaga
2637fc5132 feat(checks): New IAM privilege escalation check (#1168) 2022-06-01 13:58:31 +02:00
Sergio Garcia
ac5135470b fix(update_deprecate_runtimes): Deprecated runtimes for lambda were updated (#1170) 2022-05-31 17:03:11 +02:00
rajarshidas
613966aecf feat(check): Amazon WorkSpaces storage volumes are encrypted
If the value listed in the Volume Encryption column is Disabled, the selected AWS WorkSpaces instance volumes (root and user volumes) are not encrypted
2022-05-31 17:01:20 +02:00
Pepe Fagoaga
83ddcb9c39 feat(check): PublicAccessBlockConfiguration (#1167) 2022-05-31 16:54:05 +02:00
Lucas L Lopes
957c2433cf feat(checks): New checks for Directory Service (#1164) 2022-05-30 14:24:44 +02:00
Pepe Fagoaga
c10b367070 fix(actions): Bad PRO repository (#1163) 2022-05-25 12:47:22 +02:00
149 changed files with 3889 additions and 2095 deletions

.dockerignore

@@ -1,6 +1,17 @@
# Ignore git files
.git/
.github/
# Ignore Dockerfile
Dockerfile
# Ignore hidden files
.pre-commit-config.yaml
.dockerignore
.gitignore
.pytest*
.DS_Store
# Ignore output directories
output/
junit-reports/

.github/ISSUE_TEMPLATE/config.yml (new file)

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
  - name: Questions & Help
    url: https://github.com/prowler-cloud/prowler/discussions/categories/q-a
    about: Please ask and answer questions here.


@@ -7,17 +7,19 @@ on:
   paths-ignore:
     - '.github/**'
     - 'README.md'
   release:
-    types: [published]
+    types: [published, edited]
 env:
   AWS_REGION_STG: eu-west-1
+  AWS_REGION_PLATFORM: eu-west-1
   AWS_REGION_PRO: us-east-1
   IMAGE_NAME: prowler
   LATEST_TAG: latest
+  STABLE_TAG: stable
   TEMPORARY_TAG: temporary
-  DOCKERFILE_PATH: util/Dockerfile
+  DOCKERFILE_PATH: ./Dockerfile
 jobs:
   # Lint Dockerfile using Hadolint
@@ -145,25 +147,25 @@ jobs:
         with:
           registry: ${{ secrets.STG_ECR }}
       -
-        name: Configure AWS Credentials -- PRO
+        name: Configure AWS Credentials -- PLATFORM
         if: github.event_name == 'release'
         uses: aws-actions/configure-aws-credentials@v1
         with:
-          aws-region: ${{ env.AWS_REGION_PRO }}
-          role-to-assume: ${{ secrets.PRO_IAM_ROLE_ARN }}
+          aws-region: ${{ env.AWS_REGION_PLATFORM }}
+          role-to-assume: ${{ secrets.STG_IAM_ROLE_ARN }}
           role-session-name: build-lint-containers-pro
       -
-        name: Login to ECR -- PRO
+        name: Login to ECR -- PLATFORM
         if: github.event_name == 'release'
         uses: docker/login-action@v2
         with:
-          registry: ${{ secrets.PRO_ECR }}
+          registry: ${{ secrets.PLATFORM_ECR }}
       -
         # Push to master branch - push "latest" tag
         name: Tag (latest)
         if: github.event_name == 'push'
         run: |
-          docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.STG_ECR }}/${{ secrets.STG_ECR_REPOSITORY }}:${{ env.LATEST_TAG }}
+          docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ env.LATEST_TAG }}
           docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
           docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
       -
@@ -171,25 +173,34 @@ jobs:
         name: Push (latest)
         if: github.event_name == 'push'
         run: |
-          docker push ${{ secrets.STG_ECR }}/${{ secrets.STG_ECR_REPOSITORY }}:${{ env.LATEST_TAG }}
+          docker push ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ env.LATEST_TAG }}
           docker push ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
           docker push ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.LATEST_TAG }}
       -
-        # Push the new release
+        # Tag the new release (stable and release tag)
         name: Tag (release)
         if: github.event_name == 'release'
        run: |
-          docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PRO_ECR }}/${{ secrets.PRO_ECR }}:${{ github.event.release.tag_name }}
+          docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ github.event.release.tag_name }}
           docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
           docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
+          docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ env.STABLE_TAG }}
+          docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
+          docker tag ${{ env.IMAGE_NAME }}:${{ env.TEMPORARY_TAG }} ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
       -
-        # Push the new release
+        # Push the new release (stable and release tag)
         name: Push (release)
         if: github.event_name == 'release'
         run: |
-          docker push ${{ secrets.PRO_ECR }}/${{ secrets.PRO_ECR }}:${{ github.event.release.tag_name }}
+          docker push ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ github.event.release.tag_name }}
           docker push ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
           docker push ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ github.event.release.tag_name }}
+          docker push ${{ secrets.PLATFORM_ECR }}/${{ secrets.PLATFORM_ECR_REPOSITORY }}:${{ env.STABLE_TAG }}
+          docker push ${{ secrets.DOCKER_HUB_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
+          docker push ${{ secrets.PUBLIC_ECR_REPOSITORY }}/${{ env.IMAGE_NAME }}:${{ env.STABLE_TAG }}
       -
         name: Delete artifacts
         if: always()


@@ -11,7 +11,7 @@ jobs:
       with:
         fetch-depth: 0
     - name: TruffleHog OSS
-      uses: trufflesecurity/trufflehog@v3.4.4
+      uses: trufflesecurity/trufflehog@v3.13.0
       with:
         path: ./
         base: ${{ github.event.repository.default_branch }}

.github/workflows/pull-request.yml (new file)

@@ -0,0 +1,41 @@
name: Lint & Test
on:
  push:
    branches:
      - 'prowler-3.0-dev'
  pull_request:
    branches:
      - 'prowler-3.0-dev'
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pipenv
          pipenv install
      - name: Bandit
        run: |
          pipenv run bandit -q -lll -x '*_test.py,./contrib/' -r .
      - name: Safety
        run: |
          pipenv run safety check
      - name: Vulture
        run: |
          pipenv run vulture --exclude "contrib" --min-confidence 100 .
      - name: Test with pytest
        run: |
          pipenv run pytest -n auto

.github/workflows/refresh_aws_services_regions.yml (new file)

@@ -0,0 +1,50 @@
# This is a basic workflow to help you get started with Actions
name: Refresh regions of AWS services
on:
  schedule:
    - cron: "0 9 * * *" # runs at 09:00 UTC every day
env:
  GITHUB_BRANCH: "prowler-3.0-dev"
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
        with:
          ref: ${{ env.GITHUB_BRANCH }}
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9 # install the Python version needed
      # Runs a single command using the runner's shell
      - name: Run a one-line script
        run: python3 util/update_aws_services_regions.py
      # Create pull request
      - name: Create Pull Request
        uses: peter-evans/create-pull-request@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          commit-message: "feat(regions_update): Update regions for AWS services."
          branch: "aws-services-regions-updated"
          labels: "status/waiting-for-revision, severity/low"
          title: "feat(regions_update): Changes in regions for AWS services."
          body: |
            ### Description
            This PR updates the regions for AWS services.
            ### License
            By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

.pre-commit-config.yaml

@@ -8,6 +8,22 @@ repos:
      - id: check-json
      - id: end-of-file-fixer
      - id: trailing-whitespace
        exclude: 'README.md'
      - id: no-commit-to-branch
      - id: pretty-format-json
        args: ['--autofix']
  - repo: https://github.com/koalaman/shellcheck-precommit
    rev: v0.8.0
    hooks:
      - id: shellcheck
  - repo: https://github.com/hadolint/hadolint
    rev: v2.10.0
    hooks:
      - id: hadolint
        name: Lint Dockerfiles
        description: Runs hadolint to lint Dockerfiles
        language: system
        types: ["dockerfile"]
        entry: hadolint

Dockerfile (new file)

@@ -0,0 +1,64 @@
# Build command
# docker build --platform=linux/amd64 --no-cache -t prowler:latest -f ./Dockerfile .
# hadolint ignore=DL3007
FROM public.ecr.aws/amazonlinux/amazonlinux:latest
LABEL maintainer="https://github.com/prowler-cloud/prowler"
ARG USERNAME=prowler
ARG USERID=34000
# Prepare image as root
USER 0
# System dependencies
# hadolint ignore=DL3006,DL3013,DL3033
RUN yum upgrade -y && \
    yum install -y python3 bash curl jq coreutils py3-pip which unzip shadow-utils && \
    yum clean all && \
    rm -rf /var/cache/yum
RUN amazon-linux-extras install -y epel postgresql14 && \
    yum clean all && \
    rm -rf /var/cache/yum
# Create non-root user
RUN useradd -l -s /bin/bash -U -u ${USERID} ${USERNAME}
USER ${USERNAME}
# Python dependencies
# hadolint ignore=DL3006,DL3013,DL3042
RUN pip3 install --upgrade pip && \
    pip3 install --no-cache-dir boto3 detect-secrets==1.0.3 && \
    pip3 cache purge
# Set Python PATH
ENV PATH="/home/${USERNAME}/.local/bin:${PATH}"
USER 0
# Install AWS CLI
RUN curl https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip && \
    unzip -q awscliv2.zip && \
    aws/install && \
    rm -rf aws awscliv2.zip
# Keep Python2 for yum
RUN sed -i '1 s/python/python2.7/' /usr/bin/yum
# Set Python3
RUN rm /usr/bin/python && \
    ln -s /usr/bin/python3 /usr/bin/python
# Set working directory
WORKDIR /prowler
# Copy all files
COPY . ./
# Set files ownership
RUN chown -R prowler .
USER ${USERNAME}
ENTRYPOINT ["./prowler"]

README.md

@@ -10,7 +10,7 @@
   <img src="https://user-images.githubusercontent.com/3985464/113734260-7ba06900-96fb-11eb-82bc-d4f68a1e2710.png" />
 </p>
 <p align="center">
-  <a href="https://discord.gg/UjSMCVnxSB"><img alt="Discord Shield" src="https://img.shields.io/discord/807208614288818196"></a>
+  <a href="https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog"><img alt="Slack Shield" src="https://img.shields.io/badge/slack-prowler-brightgreen.svg?logo=slack"></a>
   <a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/toniblyx/prowler"></a>
   <a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/cloud/build/toniblyx/prowler"></a>
   <a href="https://hub.docker.com/r/toniblyx/prowler"><img alt="Docker" src="https://img.shields.io/docker/image-size/toniblyx/prowler"></a>
@@ -26,12 +26,13 @@
 </p>
 <p align="center">
-  <i>Prowler</i> is an Open Source security tool to perform AWS security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness. It contains more than 200 controls covering CIS, PCI-DSS, ISO27001, GDPR, HIPAA, FFIEC, SOC2, AWS FTR, ENS and custome security frameworks.
+  <i>Prowler</i> is an Open Source security tool to perform AWS security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness. It contains more than 240 controls covering CIS, PCI-DSS, ISO27001, GDPR, HIPAA, FFIEC, SOC2, AWS FTR, ENS and custom security frameworks.
 </p>
 ## Table of Contents
 - [Description](#description)
+- [Prowler Container Versions](#prowler-container-versions)
 - [Features](#features)
 - [High level architecture](#high-level-architecture)
 - [Requirements and Installation](#requirements-and-installation)
@@ -41,6 +42,7 @@
 - [Security Hub integration](#security-hub-integration)
 - [CodeBuild deployment](#codebuild-deployment)
 - [Allowlist](#allowlist-or-remove-a-fail-from-resources)
+- [Inventory](#inventory)
 - [Fix](#how-to-fix-every-fail)
 - [Troubleshooting](#troubleshooting)
 - [Extras](#extras)
@@ -58,21 +60,32 @@
 Prowler is a command line tool that helps you with AWS security assessment, auditing, hardening and incident response.
-It follows guidelines of the CIS Amazon Web Services Foundations Benchmark (49 checks) and has more than 100 additional checks including related to GDPR, HIPAA, PCI-DSS, ISO-27001, FFIEC, SOC2 and others.
+It follows guidelines of the CIS Amazon Web Services Foundations Benchmark (49 checks) and has more than 190 additional checks including ones related to GDPR, HIPAA, PCI-DSS, ISO-27001, FFIEC, SOC2 and others.
 Read more about [CIS Amazon Web Services Foundations Benchmark v1.2.0 - 05-23-2018](https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf)
+## Prowler container versions
+The available versions of Prowler are the following:
+- latest: in sync with master branch (bear in mind that it is not a stable version)
+- <x.y.z> (release): you can find the releases [here](https://github.com/prowler-cloud/prowler/releases), those are stable releases.
+- stable: this tag always points to the latest release.
+The container images are available here:
+- [DockerHub](https://hub.docker.com/r/toniblyx/prowler/tags)
+- [AWS Public ECR](https://gallery.ecr.aws/o4g1s5r6/prowler)
 ## Features
-+200 checks covering security best practices across all AWS regions and most of AWS services and related to the next groups:
++240 checks covering security best practices across all AWS regions and most AWS services, related to the following groups:
 - Identity and Access Management [group1]
 - Logging [group2]
 - Monitoring [group3]
 - Networking [group4]
 - CIS Level 1 [cislevel1]
 - CIS Level 2 [cislevel2]
-- Extras *see Extras section* [extras]
+- Extras _see Extras section_ [extras]
 - Forensics related group of checks [forensics-ready]
 - GDPR [gdpr] Read more [here](#gdpr-checks)
 - HIPAA [hipaa] Read more [here](#hipaa-checks)
@@ -87,9 +100,10 @@ With Prowler you can:
 - Get a direct colorful or monochrome report
 - A HTML, CSV, JUNIT, JSON or JSON ASFF (Security Hub) format report
-- Send findings directly to Security Hub
+- Send findings directly to the Security Hub
 - Run specific checks and groups or create your own
 - Check multiple AWS accounts in parallel or sequentially
+- Get an inventory of your AWS resources
 - And more! Read examples below
## High level architecture
@@ -97,6 +111,7 @@ With Prowler you can:
You can run Prowler from your workstation, an EC2 instance, Fargate or any other container, Codebuild, CloudShell and Cloud9.
![Prowler high level architecture](https://user-images.githubusercontent.com/3985464/109143232-1488af80-7760-11eb-8d83-726790fda592.jpg)
## Requirements and Installation
Prowler has been written in bash using AWS-CLI underneath and it works in Linux, Mac OS or Windows with cygwin or virtualization. It also requires `jq` and `detect-secrets` to work properly.
@@ -104,140 +119,143 @@ Prowler has been written in bash using AWS-CLI underneath and it works in Linux,
- Make sure the latest version of AWS-CLI is installed. It works with either v1 or v2, however _latest v2 is recommended if using new regions since they require STS v2 token_, and other components needed, with Python pip already installed.
- For Amazon Linux (`yum` based Linux distributions and AWS CLI v2):
  ```
  sudo yum update -y
  sudo yum remove -y awscli
  curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
  unzip awscliv2.zip
  sudo ./aws/install
  sudo yum install -y python3 jq git
  sudo pip3 install detect-secrets==1.0.3
  git clone https://github.com/prowler-cloud/prowler
  ```
- For Ubuntu Linux (`apt` based Linux distributions and AWS CLI v2):
  ```
  sudo apt update
  sudo apt install python3 python3-pip jq git zip
  pip install detect-secrets==1.0.3
  curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
  unzip awscliv2.zip
  sudo ./aws/install
  git clone https://github.com/prowler-cloud/prowler
  ```
  > NOTE: the detect-secrets Yelp version is no longer supported; the one from IBM is maintained now. Use the IBM one mentioned below or the specific Yelp version 1.0.3 to make sure it works as expected (`pip install detect-secrets==1.0.3`):
  ```sh
  pip install "git+https://github.com/ibm/detect-secrets.git@master#egg=detect-secrets"
  ```
  AWS-CLI can also be installed using other methods, refer to the official documentation for more details: <https://aws.amazon.com/cli/>, but `detect-secrets` has to be installed using `pip` or `pip3`.
- Once the Prowler repository is cloned, get into the folder and you can run it:
  ```sh
  cd prowler
  ./prowler
  ```
- Since Prowler uses AWS CLI under the hood, you can follow any authentication method as described [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence). Make sure you have properly configured your AWS-CLI with a valid Access Key and Region or declare AWS variables properly (or instance profile/role):
  ```sh
  aws configure
  ```
  or
  ```sh
  export AWS_ACCESS_KEY_ID="ASXXXXXXX"
  export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
  export AWS_SESSION_TOKEN="XXXXXXXXX"
  ```
- Those credentials must be associated to a user or role with proper permissions to do all checks. To make sure, add the AWS managed policies, SecurityAudit and ViewOnlyAccess, to the user or role being used. Policy ARNs are:
  ```sh
  arn:aws:iam::aws:policy/SecurityAudit
  arn:aws:iam::aws:policy/job-function/ViewOnlyAccess
  ```
  > Additional permissions needed: to make sure Prowler can scan all services included in the group _Extras_, make sure you attach also the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/iam/prowler-additions-policy.json) to the role you are using. If you want Prowler to send findings to [AWS Security Hub](https://aws.amazon.com/security-hub), make sure you also attach the custom policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/iam/prowler-security-hub.json).
## Usage
1. Run the `prowler` command without options (it will use your environment variable credentials if they exist, or default to the `~/.aws/credentials` file, and run checks over all regions when needed; the default region is us-east-1):
   ```sh
   ./prowler
   ```
   Use `-l` to list all available checks and the groups (sections) that reference them. To list all groups use `-L` and to list the content of a group use `-l -g <groupname>`.
   If you want to avoid installing dependencies, run it using Docker:
   ```sh
   docker run -ti --rm --name prowler --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN toniblyx/prowler:latest
   ```
   In case you want to get the reports created by Prowler, use the Docker volume option like in the example below:
   ```sh
   docker run -ti --rm -v /your/local/output:/prowler/output --name prowler --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN toniblyx/prowler:latest -g hipaa -M csv,json,html
   ```
1. For a custom AWS-CLI profile and region, use the following (it will use your custom profile and run checks over all regions when needed):
   ```sh
   ./prowler -p custom-profile -r us-east-1
   ```
1. For a single check use option `-c`:
   ```sh
   ./prowler -c check310
   ```
   With Docker:
   ```sh
   docker run -ti --rm --name prowler --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN toniblyx/prowler:latest "-c check310"
   ```
   or multiple checks separated by comma:
   ```sh
   ./prowler -c check310,check722
   ```
   or all checks but some of them:
   ```sh
   ./prowler -E check42,check43
   ```
   or for a custom profile and region:
   ```sh
   ./prowler -p custom-profile -r us-east-1 -c check11
   ```
   or for a group of checks use the group name:
   ```sh
   ./prowler -g group1 # for iam related checks
   ```
   or exclude some checks in the group:
   ```sh
   ./prowler -g group4 -E check42,check43
   ```
   Valid check numbers are based on the AWS CIS Benchmark guide, so 1.1 is check11 and 3.10 is check310.
### Regions
By default, Prowler scans all opt-in regions available, which can take a long execution time depending on the number of resources and regions used. The same applies for GovCloud or China regions. See the Advanced Usage section below for examples.
Prowler has two parameters related to regions: `-r`, which is used to query AWS services API endpoints (it uses `us-east-1` by default and is required for GovCloud or China), and `-f`, which filters the regions you want to scan. For example, if you want to scan Dublin only use `-f eu-west-1`, and if you want to scan Dublin and Ohio use `-f eu-west-1,us-east-1`; note the regions are separated by a comma delimiter (it can also be quoted, as in `-f 'eu-west-1,us-east-1'`).
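For instance, assuming you only want those two example regions plus a CSV report, a run could look like:
```sh
# Scan only Dublin and Ohio and write a CSV report (region values are illustrative)
./prowler -f 'eu-west-1,us-east-1' -M csv
```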
## Screenshots
@@ -261,81 +279,256 @@ Prowler has two parameters related to regi
1. If you want to save your report for later analysis, there are different ways, natively (supported formats are text, mono, csv, json, json-asff, junit-xml and html; see the note below for more info):
   ```sh
   ./prowler -M csv
   ```
   or with multiple formats at the same time:
   ```sh
   ./prowler -M csv,json,json-asff,html
   ```
   or just a group of checks in multiple formats:
   ```sh
   ./prowler -g gdpr -M csv,json,json-asff
   ```
   or if you want a sorted and dynamic HTML report do:
   ```sh
   ./prowler -M html
   ```
   Now `-M` creates a file inside the prowler `output` directory named `prowler-output-AWSACCOUNTID-YYYYMMDDHHMMSS.format`. You don't have to specify anything else, no pipes, no redirects.
   or just saving the output to a file like below:
   ```sh
   ./prowler -M mono > prowler-report.txt
   ```
   To generate JUnit report files, include the junit-xml format. This can be combined with any other format. Files are written inside a prowler root directory named `junit-reports`:
   ```sh
   ./prowler -M text,junit-xml
   ```
   > Note about output formats to use with `-M`: "text" is the default one with colors, "mono" is like the default one but monochrome, "csv" is comma separated values, "json" is plain basic json (without comma between lines) and "json-asff" is also json but in the Amazon Security Finding Format that you can ship to Security Hub using `-S`.
   To save your report in an S3 bucket, use `-B` to define a custom output bucket along with `-M` to define the output format that is going to be uploaded to S3:
   ```sh
   ./prowler -M csv -B my-bucket/folder/
   ```
   > In the case you do not want to use the assumed role credentials but the initial credentials to put the reports into the S3 bucket, use `-D` instead of `-B`. Make sure that the used credentials have s3:PutObject permissions in the S3 path where the reports are going to be uploaded.
   When generating multiple formats and running using Docker, to retrieve the reports, bind a local directory to the container, e.g.:
   ```sh
   docker run -ti --rm --name prowler --volume "$(pwd)":/prowler/output --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN toniblyx/prowler:latest -M csv,json
   ```
1. To perform an assessment based on CIS Profile Definitions you can use cislevel1 or cislevel2 with the `-g` flag, more information about this [here, page 8](https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf):
   ```sh
   ./prowler -g cislevel1
   ```
1. If you want to run Prowler to check multiple AWS accounts in parallel (it runs up to 4 simultaneously with `-P 4`), you may want to read the Advanced Usage section below to do so while assuming a role:
   ```sh
   grep -E '^\[([0-9A-Za-z_-]+)\]' ~/.aws/credentials | tr -d '][' | shuf | \
   xargs -n 1 -L 1 -I @ -r -P 4 ./prowler -p @ -M csv 2> /dev/null >> all-accounts.csv
   ```
1. For help about usage run:
   ```sh
   ./prowler -h
   ```
## Database providers connector
You can send Prowler's output to different databases (right now only PostgreSQL is supported).
Jump to the section for the database provider you want to use and follow the required steps to configure it.
### PostgreSQL
Install psql:
- Mac -> `brew install libpq`
- Ubuntu -> `sudo apt-get install postgresql-client`
- RHEL/CentOS -> `sudo yum install postgresql10`
#### Audit ID Field
Prowler can add an optional `audit_id` field to identify each audit run stored in the database. You can do this by adding the `-u audit_id` flag to the prowler command.
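For example, assuming `-u` takes the identifier as its argument, a run tagged with a hypothetical UUID could look like this sketch:
```sh
# Hypothetical audit identifier; the UUID value is a placeholder
./prowler -M csv -d postgresql -u 123e4567-e89b-12d3-a456-426614174000
```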
#### Credentials
There are two options to pass the PostgreSQL credentials to Prowler:
##### Using a .pgpass file
Configure a `~/.pgpass` file in the home directory of the user that is going to launch Prowler ([pgpass file doc](https://www.postgresql.org/docs/current/libpq-pgpass.html)), including an extra field at the end of each line, separated by `:`, to name the table, using the following format:
`hostname:port:database:username:password:table`
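For instance, a hypothetical entry where every value is a placeholder:
```
# host : port : database : user : password : table (all values are placeholders)
db.example.com:5432:prowler_db:prowler_user:S3cr3t:prowler_findings
```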
##### Using environment variables
- Configure the following environment variables:
- `POSTGRES_HOST`
- `POSTGRES_PORT`
- `POSTGRES_USER`
- `POSTGRES_PASSWORD`
- `POSTGRES_DB`
- `POSTGRES_TABLE`
> _Note_: If you are using a schema different than postgres please include it at the beginning of the `POSTGRES_TABLE` variable, like: `export POSTGRES_TABLE=prowler.findings`
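A minimal sketch of the equivalent shell setup, with placeholder values:
```sh
# All values below are placeholders for your own PostgreSQL setup
export POSTGRES_HOST=db.example.com
export POSTGRES_PORT=5432
export POSTGRES_USER=prowler_user
export POSTGRES_PASSWORD=S3cr3t
export POSTGRES_DB=prowler_db
export POSTGRES_TABLE=prowler_findings
```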
You also need the `uuid-ossp` PostgreSQL extension enabled; to enable it:
`CREATE EXTENSION IF NOT EXISTS "uuid-ossp";`
Create a table in your PostgreSQL database to store Prowler's data. You can use the following SQL statement to create the table:
```sql
CREATE TABLE IF NOT EXISTS prowler_findings (
    id uuid,
    audit_id uuid,
    profile text,
    account_number text,
    region text,
    check_id text,
    result text,
    item_scored text,
    item_level text,
    check_title text,
    result_extended text,
    check_asff_compliance_type text,
    severity text,
    service_name text,
    check_asff_resource_type text,
    check_asff_type text,
    risk text,
    remediation text,
    documentation text,
    check_caf_epic text,
    resource_id text,
    account_details_email text,
    account_details_name text,
    account_details_arn text,
    account_details_org text,
    account_details_tags text,
    prowler_start_time text
);
```
- Execute Prowler with the `-d` flag, for example:
`./prowler -M csv -d postgresql`
> _Note_: This command creates a `csv` output file and stores the Prowler output in the configured PostgreSQL DB. It is just an example; the `-d` flag **does not** require `-M` to run.
## Output Formats
Prowler supports natively the following output formats:
- CSV
- JSON
- JSON-ASFF
- HTML
- JUNIT-XML
Below is the structure for each of them.
### CSV
| PROFILE | ACCOUNT_NUM | REGION | TITLE_ID | CHECK_RESULT | ITEM_SCORED | ITEM_LEVEL | TITLE_TEXT | CHECK_RESULT_EXTENDED | CHECK_ASFF_COMPLIANCE_TYPE | CHECK_SEVERITY | CHECK_SERVICENAME | CHECK_ASFF_RESOURCE_TYPE | CHECK_ASFF_TYPE | CHECK_RISK | CHECK_REMEDIATION | CHECK_DOC | CHECK_CAF_EPIC | CHECK_RESOURCE_ID | PROWLER_START_TIME | ACCOUNT_DETAILS_EMAIL | ACCOUNT_DETAILS_NAME | ACCOUNT_DETAILS_ARN | ACCOUNT_DETAILS_ORG | ACCOUNT_DETAILS_TAGS |
| ------- | ----------- | ------ | -------- | ------------ | ----------- | ---------- | ---------- | --------------------- | -------------------------- | -------------- | ----------------- | ------------------------ | --------------- | ---------- | ----------------- | --------- | -------------- | ----------------- | ------------------ | --------------------- | -------------------- | ------------------- | ------------------- | -------------------- |
### JSON
```
{
"Profile": "ENV",
"Account Number": "1111111111111",
"Control": "[check14] Ensure access keys are rotated every 90 days or less",
"Message": "us-west-2: user has not rotated access key 2 in over 90 days",
"Severity": "Medium",
"Status": "FAIL",
"Scored": "",
"Level": "CIS Level 1",
"Control ID": "1.4",
"Region": "us-west-2",
"Timestamp": "2022-05-18T10:33:48Z",
"Compliance": "ens-op.acc.1.aws.iam.4 ens-op.acc.5.aws.iam.3",
"Service": "iam",
"CAF Epic": "IAM",
"Risk": "Access keys consist of an access key ID and secret access key which are used to sign programmatic requests that you make to AWS. AWS users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI)- Tools for Windows PowerShell- the AWS SDKs- or direct HTTP calls using the APIs for individual AWS services. It is recommended that all access keys be regularly rotated.",
"Remediation": "Use the credential report to ensure access_key_X_last_rotated is less than 90 days ago.",
"Doc link": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html",
"Resource ID": "terraform-user",
"Account Email": "",
"Account Name": "",
"Account ARN": "",
"Account Organization": "",
"Account tags": ""
}
```
> NOTE: Each finding is a `json` object.
### JSON-ASFF
```
{
"SchemaVersion": "2018-10-08",
"Id": "prowler-1.4-1111111111111-us-west-2-us-west-2_user_has_not_rotated_access_key_2_in_over_90_days",
"ProductArn": "arn:aws:securityhub:us-west-2::product/prowler/prowler",
"RecordState": "ACTIVE",
"ProductFields": {
"ProviderName": "Prowler",
"ProviderVersion": "2.9.0-13April2022",
"ProwlerResourceName": "user"
},
"GeneratorId": "prowler-check14",
"AwsAccountId": "1111111111111",
"Types": [
"ens-op.acc.1.aws.iam.4 ens-op.acc.5.aws.iam.3"
],
"FirstObservedAt": "2022-05-18T10:33:48Z",
"UpdatedAt": "2022-05-18T10:33:48Z",
"CreatedAt": "2022-05-18T10:33:48Z",
"Severity": {
"Label": "MEDIUM"
},
"Title": "iam.[check14] Ensure access keys are rotated every 90 days or less",
"Description": "us-west-2: user has not rotated access key 2 in over 90 days",
"Resources": [
{
"Type": "AwsIamUser",
"Id": "user",
"Partition": "aws",
"Region": "us-west-2"
}
],
"Compliance": {
"Status": "FAILED",
"RelatedRequirements": [
"ens-op.acc.1.aws.iam.4 ens-op.acc.5.aws.iam.3"
]
}
}
```
> NOTE: Each finding is a `json` object.
## Advanced Usage
### Assume Role:
Prowler uses the AWS CLI underneath, so it uses the same authentication methods. However, there are a few ways to run Prowler against multiple accounts using the IAM Assume Role feature, depending on each use case. You can set up your custom profile inside `~/.aws/config` with all the information needed about the role to assume, then call it with `./prowler -p your-custom-profile`. Additionally, you can use `-A 123456789012` and `-R RemoteRoleToAssume`, and Prowler will get those temporary credentials using `aws sts assume-role`, set them up as environment variables, and run against that given account. To create a role to assume in multiple accounts more easily, either as a CFN Stack or StackSet, look at [this CloudFormation template](iam/create_role_to_assume_cfn.yaml) and adapt it.
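For the custom profile route, a minimal sketch of a `~/.aws/config` entry could look like this (the account ID, role name, and source profile are placeholders):
```
[profile your-custom-profile]
role_arn = arn:aws:iam::123456789012:role/RemoteRoleToAssume
source_profile = default
```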
```sh
./prowler -A 123456789012 -R ProwlerRole
./prowler -A 123456789012 -R ProwlerRole -I 123456
```
> _NOTE 1 about Session Duration_: By default it gets credentials valid for 1 hour (3600 seconds). Depending on the amount of checks you run and the size of your infrastructure, Prowler may require more than 1 hour to finish. Use option `-T <seconds>` to allow up to 12h (43200 seconds). To allow more than 1h you need to modify _"Maximum CLI/API session duration"_ for that particular role; read more [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session).

> _NOTE 2 about Session Duration_: Bear in mind that if you are using roles assumed by role chaining there is a hard limit of 1 hour, so consider not using role chaining if possible. Read more about that in footnote 1 below the table [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html).
For example, if you want to get only the fails in CSV format from all checks regarding RDS without banner from the AWS Account 123456789012 assuming the role RemoteRoleToAssume and set a fixed session duration of 1h:
```sh
./prowler -A 123456789012 -R RemoteRoleToAssume -T 3600 -b -M csv -q -g rds
```
or with a given External ID:
```sh
./prowler -A 123456789012 -R RemoteRoleToAssume -T 3600 -I 123456 -b -M csv -q -g rds
```
If you want to run Prowler or just a check or a group across all accounts of AWS Organizations you can do this:
First get a list of accounts that are not suspended:
```
ACCOUNTS_IN_ORGS=$(aws organizations list-accounts --query 'Accounts[?Status==`ACTIVE`].Id' --output text)
```
Then run Prowler to assume a role (the same in all member accounts) for each account; in this example it is just running one particular check:
```
for accountId in $ACCOUNTS_IN_ORGS; do ./prowler -A $accountId -R RemoteRoleToAssume -c extra79; done
```
Using the same for loop, you can scan a list of accounts with a variable like `ACCOUNTS_LIST='11111111111 2222222222 333333333'`.
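For instance, a minimal sketch of that variant, reusing the role and check from the example above (the account IDs are placeholders):
```sh
# Scan a fixed list of accounts with the same assumed role and check
ACCOUNTS_LIST='111111111111 222222222222'
for accountId in $ACCOUNTS_LIST; do ./prowler -A "$accountId" -R RemoteRoleToAssume -c extra79; done
```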
### Get AWS Account details from your AWS Organization:
From Prowler v2.8, you can get additional information about the scanned account in the CSV and JSON outputs. When scanning a single account you get the Account ID as part of the output. If you have AWS Organizations and are scanning multiple accounts using the assume-role functionality, Prowler can also fetch account details like Account Name, Email, ARN, Organization ID and Tags, and you will have them next to every finding in the CSV and JSON outputs.
In order to do that, use the new option `-O <management account id>`; it requires `-R <role to assume>` and also needs the `organizations:ListAccounts*` and `organizations:ListTagsForResource` permissions. See the following sample command:
```
./prowler -R ProwlerScanRole -A 111111111111 -O 222222222222 -M json,csv
```
In that command, Prowler will scan the account `111111111111` assuming the role `ProwlerScanRole`, get the account details from the AWS Organizations management account `222222222222` (assuming the same role `ProwlerScanRole` there), and create two reports with those details in JSON and CSV.
In the JSON output below (redacted) you can see the tags encoded in base64 to prevent breaking the CSV or JSON format:
```json
"Account Organization": "o-abcde1234",
"Account tags": "\"eyJUYWdzIjpasf0=\""
```
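Since the `Account tags` value is base64, it can be decoded with standard tooling. A quick sketch (the sample value above is redacted, so it may not decode to valid JSON):
```sh
# Decode the base64-encoded "Account tags" value of a finding
# -d/--decode (some BSD/macOS versions use -D instead)
echo 'eyJUYWdzIjpasf0=' | base64 -d
```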
The additional fields in the CSV header output are as follows:
```csv
ACCOUNT_DETAILS_EMAIL,ACCOUNT_DETAILS_NAME,ACCOUNT_DETAILS_ARN,ACCOUNT_DETAILS_O
```
### GovCloud
Prowler runs in GovCloud regions as well. To make sure it points to the right API endpoint, use `-r` with either `us-gov-west-1` or `us-gov-east-1`. If no region filter is used, it will look for resources in both GovCloud regions by default:
```sh
./prowler -r us-gov-west-1
```
> For the Security Hub integration, see the Security Hub section below.
### Custom folder for custom checks
Flag `-x /my/own/checks` will include any check in that particular directory (files must start by check). To see how to write checks see [Add Custom Checks](#add-custom-checks) section.
S3 URIs are also supported as custom folders for custom checks, e.g. `s3://bucket/prefix/checks`. Prowler will download the folder locally and run the checks as they are called with default execution,`-c` or `-g`.
> Make sure that the used credentials have s3:GetObject permissions in the S3 path where the custom checks are located.
### Show or log only FAILs
In order to remove noise and get only FAIL findings, there is a `-q` flag that makes Prowler show and log only FAILs. It can be combined with any other option. Prowler will still show a WARNING when a resource is excluded, just to take that into consideration.
Sets the entropy limit for high entropy hex strings from the environment variable `HEX_LIMIT`. For example:
```sh
export BASE64_LIMIT=4.5
export HEX_LIMIT=3.0
```
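As a one-off alternative to exporting the variables, the limits can also be set just for a single run; `extra759` is one of the secrets-related checks mentioned later in this README:
```sh
# Set entropy thresholds only for this invocation of a secrets check
BASE64_LIMIT=4.5 HEX_LIMIT=3.0 ./prowler -c extra759
```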
### Run Prowler using AWS CloudShell
An easy way to run Prowler to scan your account is using AWS CloudShell. Read more and learn how to do it [here](util/cloudshell/README.md).
## Security Hub integration
Since October 30th 2020 (version v2.3RC5), Prowler supports natively and as **official integration** sending findings to [AWS Security Hub](https://aws.amazon.com/security-hub). This integration allows Prowler to import its findings to AWS Security Hub. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Firewall Manager, as well as from AWS Partner solutions and from Prowler for free.
Before sending findings to Security Hub, you need to perform the following steps:
1. Since Security Hub is a region-based service, enable it in the region or regions you require, using the AWS Management Console or the AWS CLI with this command if you have enough permissions:
   - `aws securityhub enable-security-hub --region <region>`.
2. Enable Prowler as a partner integration, using the AWS Management Console or the AWS CLI with this command if you have enough permissions:
   - `aws securityhub enable-import-findings-for-product --region <region> --product-arn arn:aws:securityhub:<region>::product/prowler/prowler` (change the region inside the ARN as well).
   - Using the AWS Management Console:
   ![Screenshot 2020-10-29 at 10 26 02 PM](https://user-images.githubusercontent.com/3985464/97634660-5ade3400-1a36-11eb-9a92-4a45cc98c158.png)
3. As mentioned in section "Custom IAM Policy", to allow Prowler to import its findings to AWS Security Hub you need to add the policy below to the role or user running Prowler:
- [iam/prowler-security-hub.json](iam/prowler-security-hub.json)
Once it is enabled, it is as simple as running the command below (for all regions):
```sh
./prowler -M json-asff -S
```
or for only one filtered region like eu-west-1:
```sh
./prowler -M json-asff -q -S -f eu-west-1
```
> Note 1: It is recommended to send only fails to Security Hub, which is possible by adding `-q` to the command.

> Note 2: Since Prowler performs checks in all regions by default, you may need to filter by region when running the Security Hub integration, as shown in the example above. Remember to enable Security Hub in the region or regions you need by calling `aws securityhub enable-security-hub --region <region>` and run Prowler with the option `-f <region>` (if no region is given it will try to push findings to all regions' hubs).
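A small sketch for pushing only FAILs to several regions where Security Hub is already enabled, one region at a time (the region list is hypothetical):
```sh
# Send only FAIL findings to each region's Security Hub
for r in eu-west-1 us-east-1; do
  ./prowler -M json-asff -q -S -f "$r"
done
```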
Once you run it for the first time, you will be able to see the Prowler findings in the Security Hub console.
### Security Hub in GovCloud regions
To use the Prowler and Security Hub integration in GovCloud there is an additional requirement: `-r` is needed to point the API queries to the right API endpoint. Here is a sample command that sends only failed findings to Security Hub in region `us-gov-west-1`:
```
./prowler -r us-gov-west-1 -f us-gov-west-1 -S -M csv,json-asff -q
```
### Security Hub in China regions
To use the Prowler and Security Hub integration in China regions there is an additional requirement: `-r` is needed to point the API queries to the right API endpoint. Here is a sample command that sends only failed findings to Security Hub in region `cn-north-1`:
```
./prowler -r cn-north-1 -f cn-north-1 -q -S -M csv,json-asff
```
Whether you want to run Prowler once or on a schedule, this template makes it pretty straightforward. It creates a CodeBuild environment and runs Prowler directly, leaving all reports in a bucket and also creating a report inside CodeBuild based on the JUnit output from Prowler. Scheduling can be cron-based like `cron(0 22 * * ? *)` or rate-based like `rate(5 hours)`, since CloudWatch Events rules (or EventBridge) are used here.
The CloudFormation template that helps you to do that is [here](https://github.com/prowler-cloud/prowler/blob/master/util/codebuild/codebuild-prowler-audit-account-cfn.yaml).
> This is a simple solution to monitor one account. For multiples accounts see [Multi Account and Continuous Monitoring](util/org-multi-account/README.md).
Sometimes you may find resources that are intentionally configured in a certain way that would otherwise be reported as a bad practice; in those cases you can allowlist the affected resources.
S3 URIs are also supported as allowlist files, e.g. `s3://bucket/prefix/allowlist_sample.txt`.
> Make sure that the used credentials have s3:GetObject permissions in the S3 path where the allowlist file is located.
DynamoDB table ARNs are also supported as an allowlist source, e.g. `arn:aws:dynamodb:us-east-1:111111222222:table/allowlist`.
> Make sure that the table has `account_id` as partition key and `rule` as sort key, and that the used credentials have `dynamodb:PartiQLSelect` permissions in the table.
>
> <p align="left"><img src="https://user-images.githubusercontent.com/38561120/165769502-296f9075-7cc8-445e-8158-4b21804bfe7e.png" alt="image" width="397" height="252" /></p>
> The field `account_id` can contain either an account ID or an `*` (which applies to all the accounts that use this table as an allowlist). As in the traditional allowlist file, the `rule` field must contain the `checkID:resourcename` pattern.
>
> <p><img src="https://user-images.githubusercontent.com/38561120/165770610-ed5c2764-7538-44c2-9195-bcfdecc4ef9b.png" alt="image" width="394" /></p>
The allowlist option works along with other options and reports a `WARNING` instead of `INFO`, `PASS` or `FAIL` in any output format except `json-asff`.
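As a minimal illustration of the `checkID:resourcename` pattern, an allowlist file could contain one rule per line like this (the check IDs and resource names below are hypothetical; the file is then passed to Prowler via its allowlist option):
```
check26:my-cloudtrail-logs-bucket
extra735:dev-test-instance
```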
## Inventory
With Prowler you can get an inventory of your AWS resources. To do so, run `./prowler -i` to see what AWS resources you have deployed in your AWS account. This feature lists almost all resources in all regions based on [this](https://docs.aws.amazon.com/resourcegroupstagging/latest/APIReference/API_GetResources.html) API call. Note that it does not cover 100% of resource types.
The inventory will be stored in an output `csv` file by default, under the common Prowler `output` folder, with the following name format: `prowler-inventory-${ACCOUNT_NUM}-${OUTPUT_DATE}.csv`.
## How to fix every FAIL
Check your report and fix the issues following all specific guidelines per check in <https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf>
There are some helpful tools to save time in this process, like aws-mfa-script.
### AWS Managed IAM Policies
[ViewOnlyAccess](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_view-only-user)
- Use case: This user can view a list of AWS resources and basic metadata in the account across all services. The user cannot read resource content or metadata that goes beyond the quota and list information for resources.
- Policy description: This policy grants `List*`, `Describe*`, `Get*`, `View*`, and `Lookup*` access to resources for most AWS services. To see what actions this policy includes for each service, see [ViewOnlyAccess Permissions](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/job-function/ViewOnlyAccess)
[SecurityAudit](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_security-auditor)
- Use case: This user monitors accounts for compliance with security requirements. This user can access logs and events to investigate potential security breaches or potential malicious activity.
- Policy description: This policy grants permissions to view configuration data for many AWS services and to review their logs. To see what actions this policy includes for each service, see [SecurityAudit Permissions](https://console.aws.amazon.com/iam/home#policies/arn:aws:iam::aws:policy/SecurityAudit)
### Bootstrap Script
Quick bash script to set up a "prowler" IAM user with "SecurityAudit" and "ViewOnlyAccess" group with the required permissions (including "Prowler-Additions-Policy"). To run the script below, you need a user with administrative permissions; set the `AWS_DEFAULT_PROFILE` to use that account:
```sh
export AWS_DEFAULT_PROFILE=default
aws iam create-access-key --user-name prowler
unset ACCOUNT_ID AWS_DEFAULT_PROFILE
```
The `aws iam create-access-key` command will output the secret access key and the key id; keep these somewhere safe, and add them to `~/.aws/credentials` with an appropriate profile name to use them with Prowler. This is the only time the secret key will be shown. If you lose it, you will need to generate a replacement.
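A minimal sketch of the resulting `~/.aws/credentials` entry (the key values are placeholders):
```
[prowler]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = 1234567890abcdef1234567890abcdef12345678
```
Prowler can then be run with `-p prowler`.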
> [This CloudFormation template](iam/create_role_to_assume_cfn.yaml) may also help you with that task.
To list all existing checks in the extras group, run the command below:
```sh
./prowler -l -g extras
```
> There are some checks not included in that list; they are experimental or take long to run, like `extra759` and `extra760` (search for secrets in Lambda function variables and code).
To check all extras in one command:
```sh
./prowler -g extras
```
or to run multiple extras in one go:
```sh
./prowler -c extraNumber,extraNumber
```
## Forensics Ready Checks
With this group of checks, Prowler verifies that each service with logging or audit capabilities has them enabled, to ensure all needed evidence is recorded and collected for an eventual digital forensics investigation in case of an incident. You can see all groups with `./prowler -L`; the list of checks that are part of this group can be seen in its group file in the repository.
AWS is made to be flexible for service links within and between different AWS accounts.
This group of checks helps to analyse a particular AWS account (the subject) for existing links to other AWS accounts across various AWS services, in order to identify untrusted links.
### Run
To give it a quick shot, just call:
```sh
./prowler -g trustboundaries
```
Currently, this check group supports two different scenarios.
### Coverage
Current coverage of Amazon Web Services (AWS), taken from [here](https://docs.aws.amazon.com/whitepapers/latest/aws-overview/introduction.html):
| Topic                           | Service    | Trust Boundary                                                            |
|---------------------------------|------------|---------------------------------------------------------------------------|
| Networking and Content Delivery | Amazon VPC | VPC endpoints connections ([extra786](checks/check_extra786))             |
|                                 |            | VPC endpoints allowlisted principals ([extra787](checks/check_extra787)) |
All ideas or recommendations to extend this group are very welcome [here](https://github.com/prowler-cloud/prowler/issues/new/choose).
Multi account environments assume a minimum of two trusted or known accounts.
![multi-account-environment](/docs/images/prowler-multi-account-environment.png)
## Custom Checks
Using `./prowler -c extra9999 -a` you can build your own on-the-fly custom check by specifying the AWS CLI command to execute.
> Omit the "aws" command and only use its parameters within quotes, and do not nest quotes in the aws parameter; `--output text` is already included in the check.
>
> Here is an example of a check to find SGs with inbound port 80:
```sh
./prowler -c extra9999 -a 'ec2 describe-security-groups --filters Name=ip-permission.to-port,Values=80 --query SecurityGroups[*].GroupId[]'
```
View File
CHECK_CAF_EPIC_check115='IAM'

check115(){
  if [[ "${REGION}" == "us-gov-west-1" || "${REGION}" == "us-gov-east-1" ]]; then
    textInfo "${REGION}: This is an AWS GovCloud account and there is no root account to perform checks." "$REGION" "root"
  else
    # "Ensure security questions are registered in the AWS account (Not Scored)"
    textInfo "${REGION}: No command available for check 1.15. Login to the AWS Console as root & click on the Account. Name -> My Account -> Configure Security Challenge Questions." "$REGION" "root"
  fi
}
View File
CHECK_CAF_EPIC_check116='IAM'

check116(){
  # "Ensure IAM policies are attached only to groups or roles (Scored)"
  LIST_USERS=$($AWSCLI iam list-users --query 'Users[*].UserName' --output text $PROFILE_OPT --region $REGION)
  if [[ "${LIST_USERS}" ]]
  then
    for user in $LIST_USERS;do
      USER_ATTACHED_POLICY=$($AWSCLI iam list-attached-user-policies --output text $PROFILE_OPT --region $REGION --user-name $user)
      USER_INLINE_POLICY=$($AWSCLI iam list-user-policies --output text $PROFILE_OPT --region $REGION --user-name $user)
      if [[ $USER_ATTACHED_POLICY ]] || [[ $USER_INLINE_POLICY ]]
      then
        if [[ $USER_ATTACHED_POLICY ]]
        then
          textFail "$REGION: $user has managed policy directly attached" "$REGION" "$user"
        fi
        if [[ $USER_INLINE_POLICY ]]
        then
          textFail "$REGION: $user has inline policy directly attached" "$REGION" "$user"
        fi
      else
        textPass "$REGION: No policies attached to user $user" "$REGION" "$user"
      fi
    done
  else
    textPass "$REGION: No users found" "$REGION" "No users found"
  fi
}
View File
CHECK_ID_check117="1.17"
CHECK_TITLE_check117="[check117] Maintain current contact details"
CHECK_SCORED_check117="SCORED"
CHECK_CIS_LEVEL_check117="LEVEL1"
CHECK_SEVERITY_check117="Medium"
CHECK_ASFF_TYPE_check117="Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
CHECK_CAF_EPIC_check117='IAM'

check117(){
  if [[ "${REGION}" == "us-gov-west-1" || "${REGION}" == "us-gov-east-1" ]]; then
    textInfo "${REGION}: This is an AWS GovCloud account and there is no root account to perform checks." "${REGION}" "root"
  else
    # "Maintain current contact details (Scored)"
    GET_CONTACT_DETAILS=$($AWSCLI account get-contact-information --output text $PROFILE_OPT --region "${REGION}" 2>&1)
    if grep -E -q 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${GET_CONTACT_DETAILS}"; then
      textInfo "${REGION}: Access Denied trying to get account contact information" "${REGION}"
    else
      if [[ ${GET_CONTACT_DETAILS} ]];then
        textPass "${REGION}: Account has contact information. Perhaps check for freshness of these details." "${REGION}" "root"
      else
        textFail "${REGION}: Unable to get account contact details. See section 1.17 on the CIS Benchmark guide for details." "${REGION}" "root"
      fi
    fi
  fi
}
View File
CHECK_ID_check118="1.18"
CHECK_TITLE_check118="[check118] Ensure security contact information is registered"
CHECK_SCORED_check118="SCORED"
CHECK_CIS_LEVEL_check118="LEVEL1"
CHECK_SEVERITY_check118="Medium"
CHECK_ASFF_TYPE_check118="Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
CHECK_CAF_EPIC_check118='IAM'

check118(){
  if [[ "${REGION}" == "us-gov-west-1" || "${REGION}" == "us-gov-east-1" ]]; then
    textInfo "${REGION}: This is an AWS GovCloud account and there is no root account to perform checks." "${REGION}" "root"
  else
    # "Ensure security contact information is registered (Scored)"
    GET_SECURITY_CONTACT_DETAILS=$("${AWSCLI}" account get-alternate-contact --alternate-contact-type SECURITY --output text ${PROFILE_OPT} --region "${REGION}" 2>&1)
    if grep -E -q 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${GET_SECURITY_CONTACT_DETAILS}"; then
      textInfo "${REGION}: Access Denied trying to get account contact information" "${REGION}"
    else
      if grep -q "SECURITY" <<< "${GET_SECURITY_CONTACT_DETAILS}"; then
        textPass "${REGION}: Account has security contact information. Perhaps check for freshness of these details." "${REGION}" "root"
      else
        textFail "${REGION}: Account has no security contact information, or it was unable to be captured. See section 1.18 on the CIS Benchmark guide for details." "${REGION}" "root"
      fi
    fi
  fi
}
View File
CHECK_ASFF_RESOURCE_TYPE_check119="AwsEc2Instance"
CHECK_ALTERNATE_check119="check119"
CHECK_SERVICENAME_check119="ec2"
CHECK_RISK_check119='AWS access from within AWS instances can be done by either encoding AWS keys into AWS API calls or by assigning the instance to a role which has an appropriate permissions policy for the required access. AWS IAM roles reduce the risks associated with sharing and rotating credentials that can be used outside of AWS itself. If credentials are compromised; they can be used from outside of the AWS account.'
CHECK_REMEDIATION_check119='Create an IAM instance role if necessary and attach it to the corresponding EC2 instance.'
CHECK_DOC_check119='http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html'
CHECK_CAF_EPIC_check119='IAM'
View File
CHECK_ALTERNATE_check102="check12"
CHECK_ASFF_COMPLIANCE_TYPE_check12="ens-op.acc.5.aws.iam.1"
CHECK_SERVICENAME_check12="iam"
CHECK_RISK_check12='Unauthorized access to this critical account if password is not secure or it is disclosed in any way.'
CHECK_REMEDIATION_check12='Enable MFA for all IAM users that have a console password. MFA is a simple best practice that adds an extra layer of protection on top of your user name and password. Recommended to use hardware keys over virtual MFA.'
CHECK_DOC_check12='https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html'
CHECK_CAF_EPIC_check12='IAM'
View File
CHECK_ASFF_COMPLIANCE_TYPE_check120="ens-op.acc.1.aws.iam.4"
CHECK_SERVICENAME_check120="iam"
CHECK_RISK_check120='AWS provides a support center that can be used for incident notification and response; as well as technical support and customer services. Create an IAM Role to allow authorized users to manage incidents with AWS Support.'
CHECK_REMEDIATION_check120='Create an IAM role for managing incidents with AWS.'
CHECK_DOC_check120='https://docs.aws.amazon.com/awssupport/latest/user/accessing-support.html'
CHECK_CAF_EPIC_check120='IAM'
check120(){
View File
CHECK_CAF_EPIC_check21='Logging and Monitoring'

check21(){
  # "Ensure CloudTrail is enabled in all regions (Scored)"
  for regx in $REGIONS; do
    TRAILS_DETAILS=$($AWSCLI cloudtrail describe-trails $PROFILE_OPT --region $regx --query 'trailList[*].{Name:Name, HomeRegion:HomeRegion, Multiregion:IsMultiRegionTrail, ARN:TrailARN}' --output text 2>&1)
    if [[ $(echo "$TRAILS_DETAILS" | grep AccessDenied) ]]; then
      textInfo "$regx: Access Denied trying to describe trails" "$regx"
      continue
    fi
    if [[ $TRAILS_DETAILS ]]
    then
      for REGION_TRAIL in "${TRAILS_DETAILS}"
      do
        while read -r TRAIL_ARN TRAIL_HOME_REGION IS_MULTIREGION TRAIL_NAME
        do
          TRAIL_ON_OFF_STATUS=$(${AWSCLI} cloudtrail get-trail-status ${PROFILE_OPT} --region ${regx} --name ${TRAIL_ARN} --query IsLogging --output text)
          if [[ "$TRAIL_ON_OFF_STATUS" == "False" ]]
          then
            if [[ "${IS_MULTIREGION}" == "True" ]]
            then
              textFail "$regx: Trail ${TRAIL_NAME} is multiregion configured from region ${TRAIL_HOME_REGION} but it is not logging" "${regx}" "${TRAIL_NAME}"
            else
              textFail "$regx: Trail ${TRAIL_NAME} is not a multiregion trail and it is not logging" "${regx}" "${TRAIL_NAME}"
            fi
          elif [[ "$TRAIL_ON_OFF_STATUS" == "True" ]]
          then
            if [[ "${IS_MULTIREGION}" == "True" ]]
            then
              textPass "$regx: Trail ${TRAIL_NAME} is multiregion configured from region ${TRAIL_HOME_REGION} and it is logging" "${regx}" "${TRAIL_NAME}"
            else
              textFail "$regx: Trail ${TRAIL_NAME} is not a multiregion trail and it is logging" "${regx}" "${TRAIL_NAME}"
            fi
          fi
        done <<< "${REGION_TRAIL}"
      done
    else
      textFail "$regx: No CloudTrail trails were found for the region" "${regx}" "No trails found"
    fi
  done
}
View File
CHECK_CAF_EPIC_check22='Logging and Monitoring'

check22(){
  # "Ensure CloudTrail log file validation is enabled (Scored)"
  for regx in $REGIONS
  do
    TRAILS_DETAILS=$($AWSCLI cloudtrail describe-trails $PROFILE_OPT --region $regx --query 'trailList[*].{Name:Name, HomeRegion:HomeRegion, Multiregion:IsMultiRegionTrail, LogFileValidation:LogFileValidationEnabled}' --output text 2>&1)
    if [[ $(echo "$TRAILS_DETAILS" | grep AccessDenied) ]]; then
      textInfo "$regx: Access Denied trying to describe trails" "$regx"
      continue
    fi
    if [[ $TRAILS_DETAILS ]]
    then
      for REGION_TRAIL in "${TRAILS_DETAILS}"
      do
        while read -r TRAIL_HOME_REGION LOG_FILE_VALIDATION IS_MULTIREGION TRAIL_NAME
        do
          if [[ "${LOG_FILE_VALIDATION}" == "True" ]]
          then
            if [[ "${IS_MULTIREGION}" == "True" ]]
            then
              textPass "$regx: Multiregion trail ${TRAIL_NAME} configured from region ${TRAIL_HOME_REGION} log file validation enabled" "$regx" "$TRAIL_NAME"
            else
              textPass "$regx: Single region trail ${TRAIL_NAME} log file validation enabled" "$regx" "$TRAIL_NAME"
            fi
          else
            if [[ "${IS_MULTIREGION}" == "True" ]]
            then
              textFail "$regx: Multiregion trail ${TRAIL_NAME} configured from region ${TRAIL_HOME_REGION} log file validation disabled" "$regx" "$TRAIL_NAME"
            else
              textFail "$regx: Single region trail ${TRAIL_NAME} log file validation disabled" "$regx" "$TRAIL_NAME"
            fi
          fi
        done <<< "${REGION_TRAIL}"
      done
    else
      textPass "$regx: No trails found in the region" "$regx"
    fi
  done
}
View File
CHECK_CAF_EPIC_check23='Logging and Monitoring'

check23(){
  # "Ensure the S3 bucket CloudTrail logs to is not publicly accessible (Scored)"
  for regx in $REGIONS
  do
    TRAILS_DETAILS=$($AWSCLI cloudtrail describe-trails $PROFILE_OPT --region $regx --query 'trailList[*].{Name:Name, HomeRegion:HomeRegion, Multiregion:IsMultiRegionTrail, BucketName:S3BucketName}' --output text 2>&1)
    if [[ $(echo "$TRAILS_DETAILS" | grep AccessDenied) ]]; then
      textInfo "$regx: Access Denied trying to describe trails" "$regx"
      continue
    fi
    if [[ $TRAILS_DETAILS ]]
    then
      for REGION_TRAIL in "${TRAILS_DETAILS}"
      do
        while read -r TRAIL_BUCKET TRAIL_HOME_REGION IS_MULTIREGION TRAIL_NAME
        do
          if [[ ! "${TRAIL_BUCKET}" ]]
          then
            if [[ "${IS_MULTIREGION}" == "True" ]]
            then
              textFail "$regx: Multiregion trail ${TRAIL_NAME} configured from region ${TRAIL_HOME_REGION} does not publish to S3" "$regx" "$TRAIL_NAME"
            else
              textFail "$regx: Single region trail ${TRAIL_NAME} does not publish to S3" "$regx" "$TRAIL_NAME"
            fi
            continue
          fi
          # LOCATION - requests referencing buckets created after March 20, 2019
          # must be made to S3 endpoints in the same region as the bucket was created.
          BUCKET_LOCATION=$($AWSCLI s3api get-bucket-location $PROFILE_OPT --region $regx --bucket $TRAIL_BUCKET --output text 2>&1)
          if [[ $(echo "$BUCKET_LOCATION" | grep AccessDenied) ]]
          then
            textInfo "$regx: Trail ${TRAIL_NAME} with home region ${TRAIL_HOME_REGION} Access Denied getting bucket location for bucket $TRAIL_BUCKET" "$regx" "$TRAIL_NAME"
            continue
          fi
          if [[ $BUCKET_LOCATION == "None" ]]; then
            BUCKET_LOCATION="us-east-1"
          fi
          if [[ $BUCKET_LOCATION == "EU" ]]; then
            BUCKET_LOCATION="eu-west-1"
          fi
          CLOUDTRAILBUCKET_HASALLPERMISIONS=$($AWSCLI s3api get-bucket-acl --bucket $TRAIL_BUCKET $PROFILE_OPT --region $BUCKET_LOCATION --query 'Grants[?Grantee.URI==`http://acs.amazonaws.com/groups/global/AllUsers`]' --output text 2>&1)
          if [[ $(echo "$CLOUDTRAILBUCKET_HASALLPERMISIONS" | grep AccessDenied) ]]; then
            textInfo "$regx: Trail ${TRAIL_NAME} with home region ${TRAIL_HOME_REGION} Access Denied getting bucket acl for bucket $TRAIL_BUCKET" "$regx" "$TRAIL_NAME"
            continue
          fi
          if [[ ! $CLOUDTRAILBUCKET_HASALLPERMISIONS ]]; then
            textPass "$regx: Trail ${TRAIL_NAME} with home region ${TRAIL_HOME_REGION} S3 logging bucket $TRAIL_BUCKET is not publicly accessible" "$regx" "$TRAIL_NAME"
          else
            textFail "$regx: Trail ${TRAIL_NAME} with home region ${TRAIL_HOME_REGION} S3 logging bucket $TRAIL_BUCKET is publicly accessible" "$regx" "$TRAIL_NAME"
          fi
        done <<< "${REGION_TRAIL}"
      done
    else
      textPass "$regx: No trails found in the region" "$regx"
    fi
  done
}
View File
CHECK_CAF_EPIC_check24='Logging and Monitoring'

check24(){
  # "Ensure CloudTrail trails are integrated with CloudWatch Logs (Scored)"
  for regx in $REGIONS; do
    TRAILS_DETAILS=$($AWSCLI cloudtrail describe-trails $PROFILE_OPT --region $regx --query 'trailList[*].{Name:Name, HomeRegion:HomeRegion, ARN:TrailARN}' --output text 2>&1)
    if [[ $(echo "$TRAILS_DETAILS" | grep AccessDenied) ]]; then
      textInfo "$regx: Access Denied trying to describe trails" "$regx"
      continue
    fi
    if [[ $TRAILS_DETAILS ]]
    then
      for REGION_TRAIL in "${TRAILS_DETAILS}"
      do
        while read -r TRAIL_ARN TRAIL_HOME_REGION TRAIL_NAME
        do
          LATESTDELIVERY_TIMESTAMP=$(${AWSCLI} cloudtrail get-trail-status ${PROFILE_OPT} --region ${regx} --name ${TRAIL_ARN} --query LatestCloudWatchLogsDeliveryTime --output text|grep -v None)
          if [[ ! $LATESTDELIVERY_TIMESTAMP ]];then
            textFail "$regx: $TRAIL_NAME trail is not logging in the last 24h or not configured (its home region is $TRAIL_HOME_REGION)" "$regx" "$TRAIL_NAME"
          else
            LATESTDELIVERY_DATE=$(timestamp_to_date $LATESTDELIVERY_TIMESTAMP)
            HOWOLDER=$(how_older_from_today $LATESTDELIVERY_DATE)
            if [ $HOWOLDER -gt "1" ];then
              textFail "$regx: $TRAIL_NAME trail is not logging in the last 24h or not configured" "$regx" "$TRAIL_NAME"
            else
              textPass "$regx: $TRAIL_NAME trail has been logging during the last 24h" "$regx" "$TRAIL_NAME"
            fi
          fi
        done <<< "${REGION_TRAIL}"
      done
    else
      textFail "$regx: No CloudTrail trails were found for the region" "${regx}" "No trails found"
    fi
  done
}
View File
CHECK_CAF_EPIC_check26='Logging and Monitoring'

check26(){
  # "Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket (Scored)"
  for regx in $REGIONS
  do
    TRAILS_DETAILS=$($AWSCLI cloudtrail describe-trails $PROFILE_OPT --region $regx --query 'trailList[*].{Name:Name, HomeRegion:HomeRegion, Multiregion:IsMultiRegionTrail, BucketName:S3BucketName}' --output text 2>&1)
    if [[ $(echo "$TRAILS_DETAILS" | grep AccessDenied) ]]; then
      textInfo "$regx: Access Denied trying to describe trails" "$regx"
      continue
    fi
    if [[ $TRAILS_DETAILS ]]
    then
      for REGION_TRAIL in "${TRAILS_DETAILS}"
      do
        while read -r TRAIL_BUCKET TRAIL_HOME_REGION IS_MULTIREGION TRAIL_NAME
        do
          if [[ ! "${TRAIL_BUCKET}" ]]
          then
            if [[ "${IS_MULTIREGION}" == "True" ]]
            then
              textFail "$regx: Multiregion trail ${TRAIL_NAME} configured from region ${TRAIL_HOME_REGION} does not publish to S3" "$regx" "$TRAIL_NAME"
            else
              textFail "$regx: Single region trail ${TRAIL_NAME} does not publish to S3" "$regx" "$TRAIL_NAME"
            fi
            continue
          fi
          # LOCATION - requests referencing buckets created after March 20, 2019
          # must be made to S3 endpoints in the same region as the bucket was created.
          BUCKET_LOCATION=$($AWSCLI s3api get-bucket-location $PROFILE_OPT --region $regx --bucket $TRAIL_BUCKET --output text 2>&1)
          if [[ $(echo "$BUCKET_LOCATION" | grep AccessDenied) ]]
          then
            textInfo "$regx: Trail ${TRAIL_NAME} with home region ${TRAIL_HOME_REGION} Access Denied getting bucket location for bucket $TRAIL_BUCKET" "$regx" "$TRAIL_NAME"
            continue
          fi
          if [[ $BUCKET_LOCATION == "None" ]]; then
            BUCKET_LOCATION="us-east-1"
          fi
          if [[ $BUCKET_LOCATION == "EU" ]]; then
            BUCKET_LOCATION="eu-west-1"
          fi
          CLOUDTRAILBUCKET_LOGENABLED=$($AWSCLI s3api get-bucket-logging --bucket $TRAIL_BUCKET $PROFILE_OPT --region $BUCKET_LOCATION --query 'LoggingEnabled.TargetBucket' --output text 2>&1)
          if [[ $(echo "$CLOUDTRAILBUCKET_LOGENABLED" | grep AccessDenied) ]]; then
            textInfo "$regx: Trail $TRAIL_NAME Access Denied getting bucket logging for $TRAIL_BUCKET" "$regx" "$TRAIL_NAME"
            continue
          fi
          if [[ $CLOUDTRAILBUCKET_LOGENABLED != "None" ]]; then
            textPass "$regx: Trail $TRAIL_NAME S3 bucket access logging is enabled for $TRAIL_BUCKET" "$regx" "$TRAIL_NAME"
          else
            textFail "$regx: Trail $TRAIL_NAME S3 bucket access logging is not enabled for $TRAIL_BUCKET" "$regx" "$TRAIL_NAME"
          fi
        done <<< "${REGION_TRAIL}"
      done
    else
      textPass "$regx: No trails found in the region" "$regx"
    fi
  done
}
View File
CHECK_CAF_EPIC_check27='Logging and Monitoring'

check27(){
  # "Ensure CloudTrail logs are encrypted at rest using KMS CMKs (Scored)"
  for regx in $REGIONS; do
    TRAILS_DETAILS=$($AWSCLI cloudtrail describe-trails $PROFILE_OPT --region $regx --query 'trailList[*].{Name:Name, KeyID:KmsKeyId}' --output text 2>&1)
    if [[ $(echo "$TRAILS_DETAILS" | grep AccessDenied) ]]; then
      textInfo "$regx: Access Denied trying to describe trails" "$regx"
      continue
    fi
    if [[ $TRAILS_DETAILS ]]
    then
      for REGION_TRAIL in "${TRAILS_DETAILS}"
      do
        while read -r TRAIL_KEY_ID TRAIL_NAME
        do
          if [[ "${TRAIL_KEY_ID}" != "None" ]]
          then
            textPass "$regx: Trail $TRAIL_NAME has encryption enabled" "$regx" "$TRAIL_NAME"
          else
            textFail "$regx: Trail $TRAIL_NAME has encryption disabled" "$regx" "$TRAIL_NAME"
          fi
        done <<< "${REGION_TRAIL}"
      done
    else
      textPass "$regx: No CloudTrail trails were found for the region" "${regx}" "No trails found"
    fi
  done
}
View File
extra71(){
  # "Ensure users of groups with AdministratorAccess policy have MFA tokens enabled"
  ADMIN_GROUPS=''
  AWS_GROUPS=$($AWSCLI $PROFILE_OPT iam list-groups --output text --region $REGION --query 'Groups[].GroupName')
  if [[ ${AWS_GROUPS} ]]
  then
    for grp in $AWS_GROUPS; do
      CHECK_ADMIN_GROUP=$($AWSCLI $PROFILE_OPT --region $REGION iam list-attached-group-policies --group-name $grp --output json --query 'AttachedPolicies[].PolicyArn' | grep "arn:${AWS_PARTITION}:iam::aws:policy/AdministratorAccess")
      if [[ $CHECK_ADMIN_GROUP ]]; then
        ADMIN_GROUPS="$ADMIN_GROUPS $grp"
        textInfo "$REGION: $grp group provides administrative access" "$REGION" "$grp"
        ADMIN_USERS=$($AWSCLI $PROFILE_OPT iam get-group --region $REGION --group-name $grp --output json --query 'Users[].UserName' | grep '"' | cut -d'"' -f2 )
        for auser in $ADMIN_USERS; do
          # check for user MFA device in credential report
          USER_MFA_ENABLED=$( cat $TEMP_REPORT_FILE | grep "^$auser," | cut -d',' -f8)
          if [[ "true" == $USER_MFA_ENABLED ]]; then
            textPass "$REGION: $auser / MFA Enabled / admin via group $grp" "$REGION" "$grp"
          else
            textFail "$REGION: $auser / MFA DISABLED / admin via group $grp" "$REGION" "$grp"
          fi
        done
      else
        textInfo "$REGION: $grp group provides non-administrative access" "$REGION" "$grp"
      fi
    done
  else
    textPass "$REGION: There are no IAM groups" "$REGION"
  fi
}
View File
  done
  if [[ $PERMISSIVE_POLICIES_LIST ]]; then
    textInfo "STS AssumeRole Policies should only include the complete ARNs for the Roles that the user needs. Learn more: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_permissions-to-switch.html#roles-usingrole-createpolicy" "$REGION"
    for policy in $PERMISSIVE_POLICIES_LIST; do
      textFail "$REGION: Policy $policy allows permissive STS Role assumption" "$REGION" "$policy"
    done
View File
@@ -23,7 +23,7 @@ CHECK_REMEDIATION_extra7102='Check Identified IPs; consider changing them to pri
CHECK_DOC_extra7102='https://www.shodan.io/'
CHECK_CAF_EPIC_extra7102='Infrastructure Security'
# Watch out, always use Shodan API key, if you use `curl https://www.shodan.io/host/{ip}` massively
# Watch out, always use Shodan API key, if you use `curl https://www.shodan.io/host/{ip}` massively
# your IP will be banned by Shodan
# This is the right way to do so
@@ -32,31 +32,31 @@ CHECK_CAF_EPIC_extra7102='Infrastructure Security'
# Each finding will be saved in prowler/output folder for further review.
extra7102(){
if [[ ! $SHODAN_API_KEY ]]; then
textInfo "[extra7102] Requires a Shodan API key to work. Use -N <shodan_api_key>"
else
for regx in $REGIONS; do
LIST_OF_EIP=$($AWSCLI $PROFILE_OPT --region $regx ec2 describe-network-interfaces --query 'NetworkInterfaces[*].Association.PublicIp' --output text 2>&1)
if [[ $(echo "$LIST_OF_EIP" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe network interfaces" "$regx"
continue
fi
if [[ $LIST_OF_EIP ]]; then
for ip in $LIST_OF_EIP;do
SHODAN_QUERY=$(curl -ks https://api.shodan.io/shodan/host/$ip?key=$SHODAN_API_KEY)
# Shodan has a request rate limit of 1 request/second.
sleep 1
if [[ $SHODAN_QUERY == *"No information available for that IP"* ]]; then
textPass "$regx: IP $ip is not listed in Shodan" "$regx"
else
echo $SHODAN_QUERY > $OUTPUT_DIR/shodan-output-$ip.json
IP_SHODAN_INFO=$(cat $OUTPUT_DIR/shodan-output-$ip.json | jq -r '. | { ports: .ports, org: .org, country: .country_name }| @text' | tr -d \"\{\}\}\]\[ | tr , '\ ' )
textFail "$regx: IP $ip is listed in Shodan with data $IP_SHODAN_INFO. More info https://www.shodan.io/host/$ip and $OUTPUT_DIR/shodan-output-$ip.json" "$regx" "$ip"
fi
done
if [[ ! $SHODAN_API_KEY ]]; then
textInfo "$regx: Requires a Shodan API key to work. Use -N <shodan_api_key>" "$regx"
else
textInfo "$regx: No Public or Elastic IPs found" "$regx"
LIST_OF_EIP=$($AWSCLI $PROFILE_OPT --region $regx ec2 describe-network-interfaces --query 'NetworkInterfaces[*].Association.PublicIp' --output text 2>&1)
if [[ $(echo "$LIST_OF_EIP" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe network interfaces" "$regx"
continue
fi
if [[ $LIST_OF_EIP ]]; then
for ip in $LIST_OF_EIP;do
SHODAN_QUERY=$(curl -ks https://api.shodan.io/shodan/host/$ip?key=$SHODAN_API_KEY)
# Shodan has a request rate limit of 1 request/second.
sleep 1
if [[ $SHODAN_QUERY == *"No information available for that IP"* ]]; then
textPass "$regx: IP $ip is not listed in Shodan" "$regx"
else
echo $SHODAN_QUERY > $OUTPUT_DIR/shodan-output-$ip.json
IP_SHODAN_INFO=$(cat $OUTPUT_DIR/shodan-output-$ip.json | jq -r '. | { ports: .ports, org: .org, country: .country_name }| @text' | tr -d \"\{\}\}\]\[ | tr , '\ ' )
textFail "$regx: IP $ip is listed in Shodan with data $IP_SHODAN_INFO. More info https://www.shodan.io/host/$ip and $OUTPUT_DIR/shodan-output-$ip.json" "$regx" "$ip"
fi
done
else
textInfo "$regx: No Public or Elastic IPs found" "$regx"
fi
fi
done
fi
}
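
To reproduce the lookup extra7102 performs for a single address, a minimal sketch (198.51.100.10 is a documentation placeholder IP, and SHODAN_API_KEY is assumed to be set):

SHODAN_QUERY=$(curl -ks "https://api.shodan.io/shodan/host/198.51.100.10?key=$SHODAN_API_KEY")
sleep 1  # Shodan enforces roughly 1 request/second
# Extract the same fields the check reports on.
jq -r '{ports: .ports, org: .org, country: .country_name}' <<< "$SHODAN_QUERY"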

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2020) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
@@ -31,17 +31,21 @@ extra7111(){
textInfo "$regx: Access Denied trying to list notebook instances" "$regx"
continue
fi
if [[ $LIST_SM_NB_INSTANCES ]];then
for nb_instance in $LIST_SM_NB_INSTANCES; do
SM_NB_DIRECTINET=$($AWSCLI $PROFILE_OPT --region $regx sagemaker describe-notebook-instance --notebook-instance-name $nb_instance --query 'DirectInternetAccess' --output text)
SM_NB_DIRECTINET=$($AWSCLI $PROFILE_OPT --region $regx sagemaker describe-notebook-instance --notebook-instance-name $nb_instance --query 'DirectInternetAccess' --output text 2>&1)
if [[ $(echo "$SM_NB_DIRECTINET" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe notebook instances" "$regx"
continue
fi
if [[ "${SM_NB_DIRECTINET}" == "Enabled" ]]; then
textFail "${regx}: Sagemaker Notebook instance $nb_instance has direct internet access enabled" "${regx}" "$nb_instance"
else
textPass "${regx}: Sagemaker Notebook instance $nb_instance has direct internet access disabled" "${regx}" "$nb_instance"
fi
done
else
textInfo "${regx}: No Sagemaker Notebook instances found" "${regx}"
fi
fi
done
}

View File

@@ -23,13 +23,22 @@ CHECK_REMEDIATION_extra712='Enable Amazon Macie and create appropriate jobs to d
CHECK_DOC_extra712='https://docs.aws.amazon.com/macie/latest/user/getting-started.html'
CHECK_CAF_EPIC_extra712='Data Protection'
extra712(){
# "No API commands available to check if Macie is enabled,"
# "just looking if IAM Macie related permissions exist. "
MACIE_IAM_ROLES_CREATED=$($AWSCLI iam list-roles $PROFILE_OPT --query 'Roles[*].Arn'|grep AWSMacieServiceCustomer|wc -l)
if [[ $MACIE_IAM_ROLES_CREATED -eq 2 ]];then
textPass "$REGION: Macie related IAM roles exist so it might be enabled. Check it out manually" "$REGION"
else
textFail "$REGION: No Macie related IAM roles found. It is most likely not to be enabled" "$REGION"
fi
}
extra712(){
# Macie supports get-macie-session which tells the current status, if not Disabled.
# Capturing the STDOUT can help determine when Disabled.
for region in $REGIONS; do
MACIE_STATUS=$($AWSCLI macie2 get-macie-session ${PROFILE_OPT} --region "$region" --query status --output text 2>&1)
if [[ "$MACIE_STATUS" == "ENABLED" ]]; then
textPass "$region: Macie is enabled." "$region"
elif [[ "$MACIE_STATUS" == "PAUSED" ]]; then
textFail "$region: Macie is currently in a SUSPENDED state." "$region"
elif grep -q -E 'Macie is not enabled' <<< "${MACIE_STATUS}"; then
textFail "$region: Macie is not enabled." "$region"
elif grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${MACIE_STATUS}"; then
textInfo "$region: Access Denied trying to get AWS Macie information." "$region"
fi
done
}
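
The rewritten check leans on macie2 get-macie-session; run against a single region it behaves roughly like this (the region is a placeholder):

aws macie2 get-macie-session --region eu-west-1 --query status --output text
# Outputs handled above: ENABLED, PAUSED, or an error message containing
# "Macie is not enabled" when the service was never turned on in that region.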

View File

@@ -33,6 +33,6 @@ extra7123(){
textFail "User $user has 2 active access keys" "$REGION" "$user"
done
else
textPass "No users with 2 active access keys"
textPass "No users with 2 active access keys" "$REGION"
fi
}

View File

@@ -31,15 +31,15 @@ extra7125(){
for user in $LIST_USERS; do
# Would be virtual if sms-mfa or mfa, hardware is u2f or different.
MFA_TYPE=$($AWSCLI iam list-mfa-devices --user-name $user $PROFILE_OPT --region $REGION --query MFADevices[].SerialNumber --output text | awk -F':' '{ print $6 }'| awk -F'/' '{ print $1 }')
if [[ $MFA_TYPE == "mfa" || $MFA_TYPE == "sms-mfa" ]]; then
textInfo "User $user has virtual MFA enabled"
elif [[ $MFA_TYPE == "" ]]; then
if [[ $MFA_TYPE == "mfa" || $MFA_TYPE == "sms-mfa" ]]; then
textInfo "User $user has virtual MFA enabled" "$REGION" "$user"
elif [[ $MFA_TYPE == "" ]]; then
textFail "User $user has not hardware MFA enabled" "$REGION" "$user"
else
textPass "User $user has hardware MFA enabled" "$REGION" "$user"
fi
done
else
textPass "No users found"
textPass "No users found" "$REGION"
fi
}

View File

@@ -34,14 +34,14 @@ extra7132(){
for rdsinstance in ${RDS_INSTANCES}; do
RDS_NAME="$rdsinstance"
MONITORING_FLAG=$($AWSCLI rds describe-db-instances $PROFILE_OPT --region $regx --db-instance-identifier $rdsinstance --query 'DBInstances[*].[EnhancedMonitoringResourceArn]' --output text)
if [[ $MONITORING_FLAG == "None" ]];then
textFail "$regx: RDS instance: $RDS_NAME has enhanced monitoring disabled!" "$rex" "$RDS_NAME"
textFail "$regx: RDS instance: $RDS_NAME has enhanced monitoring disabled!" "$regx" "$RDS_NAME"
else
textPass "$regx: RDS instance: $RDS_NAME has enhanced monitoring enabled." "$regx" "$RDS_NAME"
fi
done
else
textInfo "$regx: no RDS instances found" "$regx" "$RDS_NAME"
textInfo "$regx: no RDS instances found" "$regx"
fi
done
}

View File

@@ -25,7 +25,7 @@ CHECK_CAF_EPIC_extra7134='Infrastructure Security'
extra7134(){
for regx in $REGIONS; do
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort==`20` && ToPort==`21`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`))]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort==`20` && ToPort==`21`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`)) && (IpProtocol==`tcp`)]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
if [[ $(echo "$SG_LIST" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe security groups" "$regx"
continue
@@ -38,4 +38,4 @@ extra7134(){
textPass "$regx: No Security Groups found with any port open to 0.0.0.0/0 for FTP ports" "$regx" "$SG"
fi
done
}

View File

@@ -25,7 +25,7 @@ CHECK_CAF_EPIC_extra7135='Infrastructure Security'
extra7135(){
for regx in $REGIONS; do
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort==`9092` && ToPort==`9092`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`))]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort==`9092` && ToPort==`9092`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`)) && (IpProtocol==`tcp`)]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
if [[ $(echo "$SG_LIST" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe security groups" "$regx"
continue
@@ -38,4 +38,4 @@ extra7135(){
textPass "$regx: No Security Groups found with any port open to 0.0.0.0/0 for Kafka ports" "$regx"
fi
done
}

View File

@@ -11,7 +11,7 @@
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7136="7.136"
CHECK_TITLE_extra7136="[extra7136] Ensure no security groups allow ingress from 0.0.0.0/0 or ::/0 to Telnet port 23 "
CHECK_TITLE_extra7136="[extra7136] Ensure no security groups allow ingress from 0.0.0.0/0 or ::/0 to Telnet port 23"
CHECK_SCORED_extra7136="NOT_SCORED"
CHECK_CIS_LEVEL_extra7136="EXTRA"
CHECK_SEVERITY_extra7136="High"
@@ -25,7 +25,7 @@ CHECK_CAF_EPIC_extra7136='Infrastructure Security'
extra7136(){
for regx in $REGIONS; do
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort==`23` && ToPort==`23`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`))]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort==`23` && ToPort==`23`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`)) && (IpProtocol==`tcp`)]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
if [[ $(echo "$SG_LIST" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe security groups" "$regx"
continue
@@ -38,4 +38,4 @@ extra7136(){
textPass "$regx: No Security Groups found with any port open to 0.0.0.0/0 for Telnet ports" "$regx" "$SG"
fi
done
}

View File

@@ -11,7 +11,7 @@
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7137="7.137"
CHECK_TITLE_extra7137="[extra7137] Ensure no security groups allow ingress from 0.0.0.0/0 or ::/0 to Windows SQL Server ports 1433 or 1434 "
CHECK_TITLE_extra7137="[extra7137] Ensure no security groups allow ingress from 0.0.0.0/0 or ::/0 to Windows SQL Server ports 1433 or 1434"
CHECK_SCORED_extra7137="NOT_SCORED"
CHECK_CIS_LEVEL_extra7137="EXTRA"
CHECK_SEVERITY_extra7137="High"
@@ -38,4 +38,4 @@ extra7137(){
textPass "$regx: No Security Groups found with any port open to 0.0.0.0/0 for Microsoft SQL Server ports" "$regx"
fi
done
}

View File

@@ -41,8 +41,8 @@ extra7142(){
textFail "$regx: Application Load Balancer $alb is not dropping invalid header fields" "$regx" "$alb"
fi
done
else
textInfo "$regx: no ALBs found"
textInfo "$regx: no ALBs found" "$regx"
fi
done
}

View File

@@ -29,7 +29,7 @@ extra7156(){
# "Check if API Gateway V2 has Access Logging enabled "
for regx in $REGIONS; do
LIST_OF_API_GW=$($AWSCLI apigatewayv2 get-apis $PROFILE_OPT --region $regx --query Items[*].ApiId --output text 2>&1)
if [[ $(echo "$LIST_OF_API_GW" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
if [[ $(echo "$LIST_OF_API_GW" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|BadRequestException') ]]; then
textInfo "$regx: Access Denied trying to get APIs" "$regx"
continue
fi
@@ -54,4 +54,4 @@ extra7156(){
textInfo "$regx: No API Gateway found" "$regx"
fi
done
}

View File

@@ -26,7 +26,7 @@ CHECK_CAF_EPIC_extra7157='IAM'
extra7157(){
for regx in $REGIONS; do
LIST_OF_API_GW=$($AWSCLI apigatewayv2 get-apis $PROFILE_OPT --region $regx --query "Items[*].ApiId" --output text 2>&1)
if [[ $(echo "$LIST_OF_API_GW" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
if [[ $(echo "$LIST_OF_API_GW" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|BadRequestException') ]]; then
textInfo "$regx: Access Denied trying to get APIs" "$regx"
continue
fi

View File

@@ -11,7 +11,7 @@
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7162="7.162"
CHECK_TITLE_extra7162="[extra7162] Check if CloudWatch Log Groups have a retention policy of 365 days"
CHECK_TITLE_extra7162="[extra7162] Check if CloudWatch Log Groups have a retention policy of at least 365 days"
CHECK_SCORED_extra7162="NOT_SCORED"
CHECK_CIS_LEVEL_extra7162="EXTRA"
CHECK_SEVERITY_extra7162="Medium"
@@ -19,36 +19,57 @@ CHECK_ASFF_RESOURCE_TYPE_extra7162="AwsLogsLogGroup"
CHECK_ALTERNATE_check7162="extra7162"
CHECK_SERVICENAME_extra7162="cloudwatch"
CHECK_RISK_extra7162='If log groups have a low retention policy of less than 365 days; crucial logs and data can be lost'
CHECK_REMEDIATION_extra7162='Add Log Retention policy of 365 days to log groups. This will persist logs and traces for a long time.'
CHECK_REMEDIATION_extra7162='Add Log Retention policy of at least 365 days to log groups. This will persist logs and traces for a long time.'
CHECK_DOC_extra7162='https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_Logs.html'
CHECK_CAF_EPIC_extra7162='Data Retention'
extra7162() {
# "Check if CloudWatch Log Groups have a retention policy of 365 days"
declare -i LOG_GROUP_RETENTION_PERIOD_DAYS=365
for regx in $REGIONS; do
LIST_OF_365_RETENTION_LOG_GROUPS=$($AWSCLI logs describe-log-groups $PROFILE_OPT --region $regx --query 'logGroups[?retentionInDays=="${LOG_GROUP_RETENTION_PERIOD_DAYS}"].[logGroupName]' --output text 2>&1)
if [[ $(echo "$LIST_OF_365_RETENTION_LOG_GROUPS" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe log groups" "$regx"
local LOG_GROUP_RETENTION_PERIOD_DAYS="365"
for regx in ${REGIONS}; do
LIST_OF_365_OR_MORE_RETENTION_LOG_GROUPS=$("${AWSCLI}" logs describe-log-groups ${PROFILE_OPT} --region "${regx}" --query "logGroups[?retentionInDays>=\`${LOG_GROUP_RETENTION_PERIOD_DAYS}\`].[logGroupName]" --output text 2>&1)
if grep -E -q 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${LIST_OF_365_OR_MORE_RETENTION_LOG_GROUPS}"; then
textInfo "${regx}: Access Denied trying to describe log groups" "${regx}"
continue
fi
if [[ $LIST_OF_365_RETENTION_LOG_GROUPS ]]; then
for log in $LIST_OF_365_RETENTION_LOG_GROUPS; do
textPass "$regx: $log Log Group has 365 days retention period!" "$regx" "$log"
if [[ ${LIST_OF_365_OR_MORE_RETENTION_LOG_GROUPS} ]]; then
for log in ${LIST_OF_365_OR_MORE_RETENTION_LOG_GROUPS}; do
textPass "${regx}: ${log} Log Group has at least 365 days retention period!" "${regx}" "${log}"
done
fi
LIST_OF_NON_365_RETENTION_LOG_GROUPS=$($AWSCLI logs describe-log-groups $PROFILE_OPT --region $regx --query 'logGroups[?retentionInDays!="${LOG_GROUP_RETENTION_PERIOD_DAYS}"].[logGroupName]' --output text)
if [[ $LIST_OF_NON_365_RETENTION_LOG_GROUPS ]]; then
for log in $LIST_OF_NON_365_RETENTION_LOG_GROUPS; do
textFail "$regx: $log Log Group does not have 365 days retention period!" "$regx" "$log"
LIST_OF_NEVER_EXPIRE_RETENTION_LOG_GROUPS=$("${AWSCLI}" logs describe-log-groups ${PROFILE_OPT} --region "${regx}" --query "logGroups[?retentionInDays==null].[logGroupName]" --output text 2>&1)
if grep -E -q 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${LIST_OF_NEVER_EXPIRE_RETENTION_LOG_GROUPS}"; then
textInfo "${regx}: Access Denied trying to describe log groups" "${regx}"
continue
fi
if [[ ${LIST_OF_NEVER_EXPIRE_RETENTION_LOG_GROUPS} ]]; then
for log in ${LIST_OF_NEVER_EXPIRE_RETENTION_LOG_GROUPS}; do
textPass "${regx}: ${log} Log Group retention period never expires!" "${regx}" "${log}"
done
fi
REGION_NO_LOG_GROUP=$($AWSCLI logs describe-log-groups $PROFILE_OPT --region $regx --output text)
if [[ $REGION_NO_LOG_GROUP ]]; then
LIST_OF_NON_365_RETENTION_LOG_GROUPS=$("${AWSCLI}" logs describe-log-groups ${PROFILE_OPT} --region "${regx}" --query "logGroups[?retentionInDays<\`${LOG_GROUP_RETENTION_PERIOD_DAYS}\`].[logGroupName]" --output text 2>&1)
if grep -E -q 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${LIST_OF_NON_365_RETENTION_LOG_GROUPS}"; then
textInfo "${regx}: Access Denied trying to describe log groups" "${regx}"
continue
fi
if [[ ${LIST_OF_NON_365_RETENTION_LOG_GROUPS} ]]; then
for log in ${LIST_OF_NON_365_RETENTION_LOG_GROUPS}; do
textFail "${regx}: ${log} Log Group does not have at least 365 days retention period!" "${regx}" "${log}"
done
fi
REGION_NO_LOG_GROUP=$("${AWSCLI}" logs describe-log-groups ${PROFILE_OPT} --region "${regx}" --output text)
if grep -E -q 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${REGION_NO_LOG_GROUP}"; then
textInfo "${regx}: Access Denied trying to describe log groups" "${regx}"
continue
fi
if [[ ${REGION_NO_LOG_GROUP} ]]; then
:
else
textInfo "$regx does not have a Log Group!" "$regx"
textInfo "${regx} does not have a Log Group!" "${regx}"
fi
done
}
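
To remediate a log group failing extra7162, retention of at least 365 days can be set per group; a minimal sketch with a placeholder log group name:

# retention-in-days only accepts fixed values (e.g. 365, 400, 545...); 365 satisfies the check.
aws logs put-retention-policy --log-group-name /example/app --retention-in-days 365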

View File

@@ -33,29 +33,30 @@ CHECK_REMEDIATION_extra7164="Associate KMS Key with Cloudwatch log group."
CHECK_DOC_extra7164="https://docs.aws.amazon.com/cli/latest/reference/logs/associate-kms-key.html"
CHECK_CAF_EPIC_extra7164="Data Protection"
extra7164(){
# "Check if Cloudwatch log groups are associated with AWS KMS"
# "Check if Cloudwatch log groups are associated with AWS KMS"
for regx in $REGIONS; do
LIST_OF_LOGGROUPS=$($AWSCLI logs describe-log-groups $PROFILE_OPT --region $regx --output json 2>&1 )
if [[ $(echo "$LIST_OF_LOGGROUPS" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
LIST_OF_LOGGROUPS=$($AWSCLI logs describe-log-groups $PROFILE_OPT --region $regx --query 'logGroups[]' 2>&1 )
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${LIST_OF_LOGGROUPS}"
then
textInfo "$regx: Access Denied trying to describe log groups" "$regx"
continue
fi
if [[ $LIST_OF_LOGGROUPS ]]; then
LIST_OF_LOGGROUPS_WITHOUT_KMS=$(echo "${LIST_OF_LOGGROUPS}" | jq '.logGroups[]' | jq '. | select( has("kmsKeyId") == false )' | jq -r '.logGroupName')
LIST_OF_LOGGROUPS_WITH_KMS=$(echo "${LIST_OF_LOGGROUPS}" | jq '.logGroups[]' | jq '. | select( has("kmsKeyId") == true )' | jq -r '.logGroupName')
if [[ $LIST_OF_LOGGROUPS_WITHOUT_KMS ]]; then
for loggroup in $LIST_OF_LOGGROUPS_WITHOUT_KMS; do
textFail "$regx: ${loggroup} does not have AWS KMS keys associated." "$regx" "${loggroup}"
done
fi
if [[ $LIST_OF_LOGGROUPS_WITH_KMS ]]; then
for loggroup in $LIST_OF_LOGGROUPS_WITH_KMS; do
textPass "$regx: ${loggroup} does have AWS KMS keys associated." "$regx" "${loggroup}"
done
fi
else
textPass "$regx: No Cloudwatch log groups found." "$regx"
if [[ "${LIST_OF_LOGGROUPS}" != '[]' ]]
then
for LOGGROUP in $(jq -c '.[]' <<< "${LIST_OF_LOGGROUPS}"); do
LOGGROUP_NAME=$(jq -r '.logGroupName' <<< "${LOGGROUP}")
if [[ $(jq '. | select( has("kmsKeyId") == false )' <<< "${LOGGROUP}") ]]
then
textFail "$regx: ${LOGGROUP_NAME} does not have AWS KMS keys associated." "$regx" "${LOGGROUP_NAME}"
else
textPass "$regx: ${LOGGROUP_NAME} does have AWS KMS keys associated." "$regx" "${LOGGROUP_NAME}"
fi
done
else
textPass "$regx: No Cloudwatch log groups found." "$regx" "No log groups"
fi
done
}
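
The remediation referenced in CHECK_DOC_extra7164 associates a KMS key with the log group; a minimal sketch with placeholder names (the key policy must also grant the CloudWatch Logs service principal access to the key):

aws logs associate-kms-key \
  --log-group-name /example/app \
  --kms-key-id arn:aws:kms:eu-west-1:123456789012:key/11111111-2222-3333-4444-555555555555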

View File

@@ -25,11 +25,11 @@ CHECK_DOC_extra7166='https://docs.aws.amazon.com/waf/latest/developerguide/confi
CHECK_CAF_EPIC_extra7166='Infrastructure security'
extra7166() {
if [[ "$($AWSCLI $PROFILE_OPT shield get-subscription-state --output text)" == "ACTIVE" ]]; then
CALLER_IDENTITY=$($AWSCLI sts get-caller-identity $PROFILE_OPT --query Arn)
PARTITION=$(echo $CALLER_IDENTITY | cut -d: -f2)
ACCOUNT_ID=$(echo $CALLER_IDENTITY | cut -d: -f5)
for regx in $REGIONS; do
if [[ "$($AWSCLI $PROFILE_OPT shield get-subscription-state --output text)" == "ACTIVE" ]]; then
CALLER_IDENTITY=$($AWSCLI sts get-caller-identity $PROFILE_OPT --query Arn)
PARTITION=$(echo $CALLER_IDENTITY | cut -d: -f2)
ACCOUNT_ID=$(echo $CALLER_IDENTITY | cut -d: -f5)
LIST_OF_ELASTIC_IPS_WITH_ASSOCIATIONS=$($AWSCLI ec2 describe-addresses $PROFILE_OPT --region $regx --query 'Addresses[?AssociationId].AllocationId' --output text)
if [[ $LIST_OF_ELASTIC_IPS_WITH_ASSOCIATIONS ]]; then
for elastic_ip in $LIST_OF_ELASTIC_IPS_WITH_ASSOCIATIONS; do
@@ -41,10 +41,10 @@ extra7166() {
fi
done
else
textInfo "$regx: no elastic IP addresses with assocations found" "$regx"
textInfo "$regx: No elastic IP addresses with assocations found" "$regx"
fi
done
else
textInfo "No AWS Shield Advanced subscription found. Skipping check."
fi
else
textInfo "$regx: No AWS Shield Advanced subscription found. Skipping check" "$regx"
fi
done
}

View File

@@ -41,6 +41,6 @@ extra7167() {
textInfo "$REGION: no Cloudfront distributions found" "$REGION"
fi
else
textInfo "No AWS Shield Advanced subscription found. Skipping check."
textInfo "$REGION: no AWS Shield Advanced subscription found. Skipping check." "$REGION"
fi
}

View File

@@ -44,6 +44,6 @@ extra7168() {
textInfo "$REGION: no Route53 hosted zones found" "$REGION"
fi
else
textInfo "No AWS Shield Advanced subscription found. Skipping check."
textInfo "$REGION: no AWS Shield Advanced subscription found. Skipping check." "$REGION"
fi
}

View File

@@ -41,6 +41,6 @@ extra7169() {
textInfo "$REGION: no global accelerators found" "$REGION"
fi
else
textInfo "No AWS Shield Advanced subscription found. Skipping check."
textInfo "$REGION: no AWS Shield Advanced subscription found. Skipping check." "$REGION"
fi
}

View File

@@ -25,8 +25,8 @@ CHECK_DOC_extra7170='https://docs.aws.amazon.com/waf/latest/developerguide/confi
CHECK_CAF_EPIC_extra7170='Infrastructure security'
extra7170() {
if [[ "$($AWSCLI $PROFILE_OPT shield get-subscription-state --output text)" == "ACTIVE" ]]; then
for regx in $REGIONS; do
if [[ "$($AWSCLI $PROFILE_OPT shield get-subscription-state --output text)" == "ACTIVE" ]]; then
LIST_OF_APPLICATION_LOAD_BALANCERS=$($AWSCLI elbv2 describe-load-balancers $PROFILE_OPT --region $regx --query 'LoadBalancers[?Type == `application` && Scheme == `internet-facing`].[LoadBalancerName,LoadBalancerArn]' --output text)
if [[ $LIST_OF_APPLICATION_LOAD_BALANCERS ]]; then
while read -r alb; do
@@ -39,10 +39,10 @@ extra7170() {
fi
done <<<"$LIST_OF_APPLICATION_LOAD_BALANCERS"
else
textInfo "$regx: no application load balancers found" "$regx"
textInfo "$regx: No application load balancers found" "$regx"
fi
done
else
textInfo "No AWS Shield Advanced subscription found. Skipping check."
fi
else
textInfo "$regx: No AWS Shield Advanced subscription found. Skipping check." "$regx"
fi
done
}

View File

@@ -25,11 +25,11 @@ CHECK_DOC_extra7171='https://docs.aws.amazon.com/waf/latest/developerguide/confi
CHECK_CAF_EPIC_extra7171='Infrastructure security'
extra7171() {
if [[ "$($AWSCLI $PROFILE_OPT shield get-subscription-state --output text)" == "ACTIVE" ]]; then
CALLER_IDENTITY=$($AWSCLI sts get-caller-identity $PROFILE_OPT --query Arn)
PARTITION=$(echo $CALLER_IDENTITY | cut -d: -f2)
ACCOUNT_ID=$(echo $CALLER_IDENTITY | cut -d: -f5)
for regx in $REGIONS; do
if [[ "$($AWSCLI $PROFILE_OPT shield get-subscription-state --output text)" == "ACTIVE" ]]; then
CALLER_IDENTITY=$($AWSCLI sts get-caller-identity $PROFILE_OPT --query Arn)
PARTITION=$(echo $CALLER_IDENTITY | cut -d: -f2)
ACCOUNT_ID=$(echo $CALLER_IDENTITY | cut -d: -f5)
LIST_OF_CLASSIC_LOAD_BALANCERS=$($AWSCLI elb describe-load-balancers $PROFILE_OPT --region $regx --query 'LoadBalancerDescriptions[?Scheme == `internet-facing`].[LoadBalancerName]' --output text |grep -v '^None$')
if [[ $LIST_OF_CLASSIC_LOAD_BALANCERS ]]; then
for elb in $LIST_OF_CLASSIC_LOAD_BALANCERS; do
@@ -41,10 +41,10 @@ extra7171() {
fi
done
else
textInfo "$regx: no classic load balancers found" "$regx"
textInfo "$regx: No classic load balancers found" "$regx"
fi
done
else
textInfo "No AWS Shield Advanced subscription found. Skipping check."
fi
else
textInfo "$regx: No AWS Shield Advanced subscription found. Skipping check." "$regx"
fi
done
}

View File

@@ -12,12 +12,12 @@
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7173="7.173"
CHECK_TITLE_extra7173="[check7173] Security Groups created by EC2 Launch Wizard"
CHECK_TITLE_extra7173="[extra7173] Security Groups created by EC2 Launch Wizard"
CHECK_SCORED_extra7173="NOT_SCORED"
CHECK_CIS_LEVEL_extra7173="EXTRA"
CHECK_SEVERITY_extra7173="Medium"
CHECK_ASFF_RESOURCE_TYPE_extra7173="AwsEc2SecurityGroup"
CHECK_ALTERNATE_extra7173="extra7173"
CHECK_ALTERNATE_check7173="extra7173"
CHECK_SERVICENAME_extra7173="ec2"
CHECK_RISK_extra7173="Security Groups Created on the AWS Console using the EC2 wizard may allow port 22 from 0.0.0.0/0"
CHECK_REMEDIATION_extra7173="Apply Zero Trust approach. Implement a process to scan and remediate security groups created by the EC2 Wizard. Recommended best practices is to use an authorized security group."

52
checks/check_extra7181 Normal file
View File

@@ -0,0 +1,52 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7181="7.181"
CHECK_TITLE_extra7181="[extra7181] Directory Service monitoring with CloudWatch logs"
CHECK_SCORED_extra7181="NOT_SCORED"
CHECK_CIS_LEVEL_extra7181="EXTRA"
CHECK_SEVERITY_extra7181="Medium"
CHECK_ASFF_RESOURCE_TYPE_extra7181="AwsDirectoryService"
CHECK_ALTERNATE_extra7181="extra7181"
CHECK_SERVICENAME_extra7181="ds"
CHECK_RISK_extra7181="As a best practice, monitor your organization to ensure that changes are logged. This helps you to ensure that any unexpected change can be investigated and unwanted changes can be rolled back."
CHECK_REMEDIATION_extra7181="It is recommended that the export of logs be enabled"
CHECK_DOC_extra7181='https://docs.aws.amazon.com/directoryservice/latest/admin-guide/incident-response.html'
CHECK_CAF_EPIC_extra7181="Infrastructure Security"
extra7181(){
for regx in $REGIONS; do
DIRECTORY_SERVICE_IDS=$("${AWSCLI}" ds describe-directories $PROFILE_OPT --region "${regx}" --query 'DirectoryDescriptions[*].DirectoryId[]' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${DIRECTORY_SERVICE_IDS}"; then
textInfo "${regx}: Access Denied trying to describe directories" "${regx}"
continue
fi
if [[ ${DIRECTORY_SERVICE_IDS} ]]; then
for DIRECTORY_ID in ${DIRECTORY_SERVICE_IDS}; do
DIRECTORY_SERVICE_MONITORING=$("${AWSCLI}" ds list-log-subscriptions ${PROFILE_OPT} --region "${regx}" --directory-id "${DIRECTORY_ID}" --query 'LogSubscriptions' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${DIRECTORY_SERVICE_MONITORING}"; then
textInfo "${regx}: Access Denied trying to list Directory Service log subscriptions" "${regx}"
continue
fi
if [[ "${DIRECTORY_SERVICE_MONITORING}" ]]; then
textPass "${regx}: Directory Service ${DIRECTORY_ID} have log forwarding to CloudWatch enabled" "${regx}" "${DIRECTORY_ID}"
else
textFail "${regx}: Directory Service ${DIRECTORY_ID} does not have log forwarding to CloudWatch enabled" "${regx}" "${DIRECTORY_ID}"
fi
done
else
textInfo "${regx}: No Directory Service found" "${regx}"
fi
done
}
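
To pass extra7181 a directory needs a log subscription forwarding to CloudWatch Logs; a minimal remediation sketch (directory ID and log group name are placeholders):

aws ds create-log-subscription \
  --directory-id d-1234567890 \
  --log-group-name /aws/directoryservice/d-1234567890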

52
checks/check_extra7182 Normal file
View File

@@ -0,0 +1,52 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7182="7.182"
CHECK_TITLE_extra7182="[extra7182] Directory Service SNS Notifications"
CHECK_SCORED_extra7182="NOT_SCORED"
CHECK_CIS_LEVEL_extra7182="EXTRA"
CHECK_SEVERITY_extra7182="Medium"
CHECK_ASFF_RESOURCE_TYPE_extra7182="AwsDirectoryService"
CHECK_ALTERNATE_check7182="extra7182"
CHECK_SERVICENAME_extra7182="ds"
CHECK_RISK_extra7182="As a best practice, monitor the status of Directory Service. This helps avoid delayed responses to Directory Service issues"
CHECK_REMEDIATION_extra7182="It is recommended to set up SNS messaging to send email or text messages when the status of your directory changes"
CHECK_DOC_extra7182="https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_enable_notifications.html"
CHECK_CAF_EPIC_extra7182="Infrastructure Security"
extra7182(){
for regx in $REGIONS; do
DIRECTORY_SERVICE_IDS=$("${AWSCLI}" ds describe-directories ${PROFILE_OPT} --region "${regx}" --query 'DirectoryDescriptions[*].DirectoryId[]' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${DIRECTORY_SERVICE_IDS}"; then
textInfo "${regx}: Access Denied trying to describe directories" "${regx}"
continue
fi
if [[ ${DIRECTORY_SERVICE_IDS} ]]; then
for DIRECTORY_ID in ${DIRECTORY_SERVICE_IDS}; do
DIRECTORY_SERVICE_MONITORING=$("${AWSCLI}" ds describe-event-topics ${PROFILE_OPT} --region "${regx}" --directory-id "${DIRECTORY_ID}" --query 'EventTopics' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${DIRECTORY_SERVICE_MONITORING}"; then
textInfo "${regx}: Access Denied trying to describe Directory Service event topics" "${regx}"
continue
fi
if [[ "${DIRECTORY_SERVICE_MONITORING}" ]]; then
textPass "${regx}: Directory Service ${DIRECTORY_ID} have SNS messaging enabled" "${regx}" "${DIRECTORY_ID}"
else
textFail "${regx}: Directory Service ${DIRECTORY_ID} does not have SNS messaging enabled" "${regx}" "${DIRECTORY_ID}"
fi
done
else
textInfo "${regx}: No Directory Service found" "${regx}"
fi
done
}
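
extra7182 passes when describe-event-topics returns at least one topic; a minimal sketch of wiring one up (topic name and directory ID are placeholders):

aws sns create-topic --name ds-status-changes
aws ds register-event-topic --directory-id d-1234567890 --topic-name ds-status-changes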

71
checks/check_extra7183 Normal file
View File

@@ -0,0 +1,71 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7183="7.183"
CHECK_TITLE_extra7183="[extra7183] Directory Service LDAP Certificates expiration"
CHECK_SCORED_extra7183="NOT_SCORED"
CHECK_CIS_LEVEL_extra7183="EXTRA"
CHECK_SEVERITY_extra7183="Medium"
CHECK_ASFF_RESOURCE_TYPE_extra7183="AwsDirectoryService"
CHECK_ALTERNATE_check7183="extra7183"
CHECK_SERVICENAME_extra7183="ds"
CHECK_RISK_extra7183="Expired certificates can impact service availability."
CHECK_REMEDIATION_extra7183="Monitor certificate expiration and take automated action to alert the responsible team so the certificate can be replaced or removed."
CHECK_DOC_extra7183="https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_ldap.html"
CHECK_CAF_EPIC_extra7183="Data Protection"
extra7183(){
local DAYS_TO_EXPIRE_THRESHOLD=90
for regx in $REGIONS; do
DIRECTORY_SERVICE_IDS=$("${AWSCLI}" ds describe-directories ${PROFILE_OPT} --region "${regx}" --query 'DirectoryDescriptions[*].DirectoryId[]' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${DIRECTORY_SERVICE_IDS}"; then
textInfo "${regx}: Access Denied trying to describe directories" "${regx}"
continue
fi
if [[ ${DIRECTORY_SERVICE_IDS} ]]; then
for DIRECTORY_ID in ${DIRECTORY_SERVICE_IDS}; do
CERT_DATA=$("${AWSCLI}" ds list-certificates ${PROFILE_OPT} --region "${regx}" --directory-id "${DIRECTORY_ID}" --query 'CertificatesInfo[*].[CertificateId,ExpiryDateTime]' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${CERT_DATA}"; then
textInfo "${regx}: Access Denied trying to list certificates" "${regx}"
continue
fi
if grep -q -E 'UnsupportedOperationException' <<< "${CERT_DATA}"; then
textInfo "${regx}: Error calling the ListCertificates operation: LDAPS operations are not supported for this Directory Type (directory id: ${DIRECTORY_ID})" "${regx}"
continue
fi
if [[ ${CERT_DATA} ]]; then
echo "${CERT_DATA}" | while read -r CERTIFICATE_ID NOTAFTER; do
EXPIRES_DATE=$(timestamp_to_date "${NOTAFTER}")
if [[ ${EXPIRES_DATE} == "" ]]
then
textInfo "${regx}: LDAP Certificate ${CERTIFICATE_ID} has an incorrect timestamp format: ${NOTAFTER}" "${regx}" "${CERTIFICATE_ID}"
else
COUNTER_DAYS=$(how_many_days_from_today "${EXPIRES_DATE}")
if [[ "${COUNTER_DAYS}" -le "${DAYS_TO_EXPIRE_THRESHOLD}" ]]; then
textFail "${regx}: LDAP Certificate ${CERTIFICATE_ID} configured at ${DIRECTORY_ID} is about to expire in ${COUNTER_DAYS} days!" "${regx}" "${CERTIFICATE_ID}"
else
textPass "${regx}: LDAP Certificate ${CERTIFICATE_ID} configured at ${DIRECTORY_ID} expires in ${COUNTER_DAYS} days" "${regx}" "${CERTIFICATE_ID}"
fi
fi
done
else
textFail "${regx}: Directory Service ${DIRECTORY_ID} does not have a LDAP Certificate configured" "${regx}" "${DIRECTORY_ID}"
fi
done
else
textInfo "${regx}: No Directory Service found" "${regx}"
fi
done
}
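
timestamp_to_date and how_many_days_from_today are Prowler helper functions defined elsewhere in the codebase; on a system with GNU date, the same day-count arithmetic can be sketched as:

EXPIRES_DATE="2025-01-31"  # hypothetical certificate expiry date
COUNTER_DAYS=$(( ( $(date -d "$EXPIRES_DATE" +%s) - $(date +%s) ) / 86400 ))
echo "$COUNTER_DAYS"  # compared against DAYS_TO_EXPIRE_THRESHOLD=90 above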

67
checks/check_extra7184 Normal file
View File

@@ -0,0 +1,67 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7184="7.184"
CHECK_TITLE_extra7184="[extra7184] Directory Service Manual Snapshot Limit"
CHECK_SCORED_extra7184="NOT_SCORED"
CHECK_CIS_LEVEL_extra7184="EXTRA"
CHECK_SEVERITY_extra7184="Low"
CHECK_ASFF_RESOURCE_TYPE_extra7184="AwsDirectoryService"
CHECK_ALTERNATE_check7184="extra7184"
CHECK_SERVICENAME_extra7184="ds"
CHECK_RISK_extra7184="Reaching the snapshot limit can bring unwanted results. The maximum number of manual snapshots is a hard limit"
CHECK_REMEDIATION_extra7184="Monitor manual snapshots limit to ensure capacity when you need it."
CHECK_DOC_extra7184="https://docs.aws.amazon.com/general/latest/gr/ds_region.html"
CHECK_CAF_EPIC_extra7184="Infrastructure Security"
extra7184(){
local THRESHOLD="2"
for regx in ${REGIONS}; do
DIRECTORY_SERVICE_IDS=$("${AWSCLI}" ds describe-directories ${PROFILE_OPT} --region "${regx}" --query 'DirectoryDescriptions[*].DirectoryId[]' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${DIRECTORY_SERVICE_IDS}"; then
textInfo "${regx}: Access Denied trying to describe directories" "${regx}"
continue
fi
if [[ ${DIRECTORY_SERVICE_IDS} ]]; then
for DIRECTORY_ID in ${DIRECTORY_SERVICE_IDS}; do
LIMIT_DATA=$("${AWSCLI}" ds get-snapshot-limits ${PROFILE_OPT} --region "${regx}" --directory-id "${DIRECTORY_ID}" --query 'SnapshotLimits' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${LIMIT_DATA}"; then
textInfo "${regx}: Access Denied trying to get Directiory Service snapshot limits" "${regx}"
continue
fi
if grep -q -E 'ClientException' <<< "${LIMIT_DATA}"; then
textInfo "${regx}: Error calling the GetSnapshotLimits operation: Snapshot limits can be fetched only for VPC or Microsoft AD directories (directory id: ${DIRECTORY_ID})" "${regx}"
continue
fi
echo "${LIMIT_DATA}" | while read -r CURRENT_SNAPSHOTS_COUNT SNAPSHOTS_LIMIT SNAPSHOTS_LIMIT_REACHED; do
if [[ ${SNAPSHOTS_LIMIT_REACHED} == "true" ]]
then
textFail "${regx}: Directory Service ${DIRECTORY_ID} reached ${SNAPSHOTS_LIMIT} Snapshots Limit" "${regx}" "${DIRECTORY_ID}"
else
LIMIT_REMAIN=$(("${SNAPSHOTS_LIMIT}" - "${CURRENT_SNAPSHOTS_COUNT}"))
if [[ "${LIMIT_REMAIN}" -le "${THRESHOLD}" ]]; then
textFail "${regx}: Directory Service ${DIRECTORY_ID} is about to reach ${SNAPSHOTS_LIMIT} snapshots which is the limit" "${regx}" "${DIRECTORY_ID}"
else
textPass "${regx}: Directory Service ${DIRECTORY_ID} is using ${CURRENT_SNAPSHOTS_COUNT} out of ${SNAPSHOTS_LIMIT} from the Snapshot Limit" "${regx}" "{$DIRECTORY_ID}"
fi
fi
done
done
else
textInfo "${regx}: No Directory Service found" "${regx}"
fi
done
}

85
checks/check_extra7185 Normal file
View File

@@ -0,0 +1,85 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7185="7.185"
CHECK_TITLE_extra7185="[extra7185] Ensure no Customer Managed IAM policies allow actions that may lead into Privilege Escalation"
CHECK_SCORED_extra7185="NOT_SCORED"
CHECK_CIS_LEVEL_extra7185="EXTRA"
CHECK_SEVERITY_extra7185="High"
CHECK_ASFF_RESOURCE_TYPE_extra7185="AwsIamPolicy"
CHECK_ALTERNATE_check7185="extra7185"
CHECK_SERVICENAME_extra7185="iam"
CHECK_RISK_extra7185='Users with some IAM permissions are allowed to elevate their privileges up to administrator rights.'
CHECK_REMEDIATION_extra7185='Grant usage permission on a per-resource basis and applying least privilege principle.'
CHECK_DOC_extra7185='https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateAccessKey.html'
CHECK_CAF_EPIC_extra7185='IAM'
# Does the tool analyze both users and roles, or just one or the other? --> Everything using AttachmentCount.
# Does the tool take a principal-centric or policy-centric approach? --> Policy-centric approach.
# Does the tool handle resource constraints? --> We don't check if the policy affects all resources or not, we check everything.
# Does the tool consider the permissions of service roles? --> Just checks policies.
# Does the tool handle transitive privesc paths (i.e., attack chains)? --> Not yet.
# Does the tool handle the DENY effect as expected? --> Yes, it checks DENY's statements with Action and NotAction.
# Does the tool handle NotAction as expected? --> Yes
# Does the tool handle Condition constraints? --> Not yet.
# Does the tool handle service control policy (SCP) restrictions? --> No, SCP are within Organizations AWS API.
extra7185() {
local PRIVILEGE_ESCALATION_IAM_ACTIONS="iam:AttachGroupPolicy|iam:SetDefaultPolicyVersion2|iam:AddUserToGroup|iam:AttachRolePolicy|iam:AttachUserPolicy|iam:CreateAccessKey|iam:CreatePolicyVersion|iam:CreateLoginProfile|iam:PassRole|iam:PutGroupPolicy|iam:PutRolePolicy|iam:PutUserPolicy|iam:SetDefaultPolicyVersion|iam:UpdateAssumeRolePolicy|iam:UpdateLoginProfile|sts:AssumeRole|ec2:RunInstances|lambda:CreateEventSourceMapping|lambda:CreateFunction|lambda:InvokeFunction|lambda:UpdateFunctionCode|dynamodb:CreateTable|dynamodb:PutItem|glue:CreateDevEndpoint|glue:GetDevEndpoint|glue:GetDevEndpoints|glue:UpdateDevEndpoint|cloudformation:CreateStack|cloudformation:DescribeStacks|datapipeline:CreatePipeline|datapipeline:PutPipelineDefinition|datapipeline:ActivatePipeline"
# Use --scope Local to list only Customer Managed Policies
# Query 'Policies[?AttachmentCount > `0`]' to check if this policy is in use, so attached to any user, group or role
LIST_CUSTOM_POLICIES=$(${AWSCLI} iam list-policies ${PROFILE_OPT} \
--scope Local \
--query 'Policies[*].[Arn,DefaultVersionId]' \
--output text)
# Check errors
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${LIST_CUSTOM_POLICIES}"; then
textInfo "${REGION}: Access Denied trying to list IAM policies" "${REGION}"
else
if [[ $LIST_CUSTOM_POLICIES ]]; then
while read -r POLICY_ARN POLICY_DEFAULT_VERSION; do
POLICY_PRIVILEGED_ACTIONS=$($AWSCLI iam get-policy-version ${PROFILE_OPT} \
--policy-arn "${POLICY_ARN}" \
--version-id "${POLICY_DEFAULT_VERSION}" \
--query "PolicyVersion.Document.Statement[]" \
--output json)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${POLICY_PRIVILEGED_ACTIONS}"; then
textInfo "${REGION}: Access Denied trying to get policy version" "${REGION}"
continue
fi
ALLOWED_ACTIONS=$(jq -r '.[] | select(."Effect" == "Allow") | .Action // empty' <<< "${POLICY_PRIVILEGED_ACTIONS}" | sed 's/\[//;s/\]//;s/,/ /;s/ //g;/^$/d')
DENIED_ACTIONS=$(jq -r '.[] | select(."Effect" == "Deny") | .Action // empty' <<< "${POLICY_PRIVILEGED_ACTIONS}" | sed 's/\[//;s/\]//;s/,/ /;s/ //g;/^$/d')
DENIED_NOT_ACTIONS=$(jq -r '.[] | select(."Effect" == "Deny") | .NotAction // empty' <<< "${POLICY_PRIVILEGED_ACTIONS}" | sed 's/\[//;s/\]//;s/,/ /;s/ //g;/^$/d')
# First, we need to perform a left join with ALLOWED_ACTIONS and DENIED_ACTIONS
LEFT_ACTIONS=$(diff <(echo "${ALLOWED_ACTIONS}") <(echo "${DENIED_ACTIONS}") | grep "^<" | sed 's/< //;s/"//g')
# Then, we need to find the DENIED_NOT_ACTIONS in LEFT_ACTIONS
PRIVILEGED_ACTIONS=$(comm -1 -2 <(sort <<< "${DENIED_NOT_ACTIONS}") <(sort <<< "${LEFT_ACTIONS}"))
# Finally, check if there is a privilege escalation action within this policy
POLICY_PRIVILEGE_ESCALATION_ACTIONS=$(grep -o -E "${PRIVILEGE_ESCALATION_IAM_ACTIONS}" <<< "${PRIVILEGED_ACTIONS}")
if [[ -n "${POLICY_PRIVILEGE_ESCALATION_ACTIONS}" ]]; then
textFail "${REGION}: Customer Managed IAM Policy ${POLICY_ARN} allows for privilege escalation using the following actions: ${POLICY_PRIVILEGE_ESCALATION_ACTIONS//$'\n'/ }" "${REGION}" "${POLICY_NAME}"
else
textPass "${REGION}: Customer Managed IAM Policy ${POLICY_ARN} not allows for privilege escalation" "${REGION}" "${POLICY_NAME}"
fi
done<<<"${LIST_CUSTOM_POLICIES}"
else
textInfo "${REGION}: No Customer Managed IAM policies found" "${REGION}"
fi
fi
}
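
The Allow/Deny set arithmetic above is line-based; a toy run with hypothetical action lists shows how a Deny strips an action before the privesc pattern match:

ALLOWED_ACTIONS=$'iam:PassRole\nec2:RunInstances\ns3:GetObject'
DENIED_ACTIONS=$'s3:GetObject'
# Lines prefixed "<" exist only in the allowed list, i.e. survive the Deny.
LEFT_ACTIONS=$(diff <(echo "${ALLOWED_ACTIONS}") <(echo "${DENIED_ACTIONS}") | grep '^<' | sed 's/< //')
grep -o -E 'iam:PassRole|ec2:RunInstances' <<< "${LEFT_ACTIONS}"  # both remain flagged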

42
checks/check_extra7186 Normal file
View File

@@ -0,0 +1,42 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7186="7.186"
CHECK_TITLE_extra7186="[extra7186] Check S3 Account Level Public Access Block"
CHECK_SCORED_extra7186="NOT_SCORED"
CHECK_CIS_LEVEL_extra7186="EXTRA"
CHECK_SEVERITY_extra7186="High"
CHECK_ASFF_RESOURCE_TYPE_extra7186="AwsS3Bucket"
CHECK_ALTERNATE_check7186="extra7186"
CHECK_SERVICENAME_extra7186="s3"
CHECK_RISK_extra7186='Public access policies may be applied to sensitive data buckets'
CHECK_REMEDIATION_extra7186='You can enable Public Access Block at the account level to prevent the exposure of your data stored in S3'
CHECK_DOC_extra7186='https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html'
CHECK_CAF_EPIC_extra7186='Data Protection'
extra7186(){
S3_PUBLIC_ACCESS_BLOCK=$("${AWSCLI}" ${PROFILE_OPT} s3control get-public-access-block \
--account-id "${ACCOUNT_NUM}" \
--region "${REGION}" \
--query 'PublicAccessBlockConfiguration.[IgnorePublicAcls,RestrictPublicBuckets]' \
--output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${S3_PUBLIC_ACCESS_BLOCK}"; then
textInfo "${REGION}: Access Denied trying to recover AWS account ID" "${REGION}"
exit
fi
if grep -q -E 'False|NoSuchPublicAccessBlockConfiguration' <<< "${S3_PUBLIC_ACCESS_BLOCK}"; then
textFail "${REGION}: Block Public Access is not configured for the account ${ACCOUNT_NUM}" "${REGION}" "${ACCOUNT_NUM}"
else
textPass "${REGION}: Block Public Access is configured for the account ${ACCOUNT_NUM}" "${REGION}" "${ACCOUNT_NUM}"
fi
}
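
The account-level remediation for extra7186 is a single s3control call; a minimal sketch with a placeholder account ID:

aws s3control put-public-access-block \
  --account-id 123456789012 \
  --public-access-block-configuration \
      BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true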

53
checks/check_extra7187 Normal file
View File

@@ -0,0 +1,53 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7187="7.187"
CHECK_TITLE_extra7187="[extra7187] Ensure that your Amazon WorkSpaces storage volumes are encrypted in order to meet security and compliance requirements"
CHECK_SCORED_extra7187="NOT_SCORED"
CHECK_CIS_LEVEL_extra7187="EXTRA"
CHECK_SEVERITY_extra7187="High"
CHECK_ASFF_RESOURCE_TYPE_extra7187="AwsWorkspaces"
CHECK_ALTERNATE_check7187="extra7187"
CHECK_SERVICENAME_extra7187="workspaces"
CHECK_RISK_extra7187='If the value listed in the Volume Encryption column is Disabled the selected AWS WorkSpaces instance volumes (root and user volumes) are not encrypted. Therefore your data-at-rest is not protected from unauthorized access and does not meet the compliance requirements regarding data encryption.'
CHECK_REMEDIATION_extra7187='WorkSpaces is integrated with the AWS Key Management Service (AWS KMS). This enables you to encrypt storage volumes of WorkSpaces using AWS KMS Key. When you launch a WorkSpace you can encrypt the root volume (for Microsoft Windows - the C drive; for Linux - /) and the user volume (for Windows - the D drive; for Linux - /home). Doing so ensures that the data stored at rest - disk I/O to the volume - and snapshots created from the volumes are all encrypted'
CHECK_DOC_extra7187='https://docs.aws.amazon.com/workspaces/latest/adminguide/encrypt-workspaces.html'
CHECK_CAF_EPIC_extra7187='Infrastructure Security'
extra7187(){
for regx in $REGIONS; do
RT_VOL_UNENCRYPTED_WORKSPACES_ID_LIST=$($AWSCLI workspaces describe-workspaces --query "Workspaces[?RootVolumeEncryptionEnabled!=\`true\`].WorkspaceId" ${PROFILE_OPT} --region "${regx}" --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|Could not connect to the endpoint URL|AuthorizationError' <<< "$RT_VOL_UNENCRYPTED_WORKSPACES_ID_LIST"; then
textInfo "$regx: Access Denied trying to describe workspaces" "$regx"
continue
fi
USERVOL_UNENCRYPTED_WORKSPACES_ID_LIST=$($AWSCLI workspaces describe-workspaces --query "Workspaces[?UserVolumeEncryptionEnabled!=\`true\`].WorkspaceId" ${PROFILE_OPT} --region "${regx}" --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|Could not connect to the endpoint URL|AuthorizationError' <<< "$USERVOL_UNENCRYPTED_WORKSPACES_ID_LIST"; then
textInfo "$regx: Access Denied trying to describe workspaces" "$regx"
continue
fi
if [[ $RT_VOL_UNENCRYPTED_WORKSPACES_ID_LIST ]];then
for RTVL in $RT_VOL_UNENCRYPTED_WORKSPACES_ID_LIST;do
textFail "$regx: Found WorkSpaces: $RTVL with root volume unencrypted" "$regx" "$RTVL"
done
else
textPass "$regx: No Workspaces with unencrypted root volume found" "$regx" "$RTVL"
fi
if [[ $USERVOL_UNENCRYPTED_WORKSPACES_ID_LIST ]];then
for UVL in $USERVOL_UNENCRYPTED_WORKSPACES_ID_LIST;do
textFail "$regx: Found WorkSpaces: $UVL with user volume unencrypted" "$regx" "$UVL"
done
else
textPass "$regx: No Workspaces with unencrypted user volume found" "$regx" "$UVL"
fi
done
}

60
checks/check_extra7188 Normal file
View File

@@ -0,0 +1,60 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7188="7.188"
CHECK_TITLE_extra7188="[extra7188] Ensure Radius server in DS is using the recommended security protocol"
CHECK_SCORED_extra7188="NOT_SCORED"
CHECK_CIS_LEVEL_extra7188="EXTRA"
CHECK_SEVERITY_extra7188="Medium"
CHECK_ASFF_TYPE_extra7188="Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
CHECK_ASFF_RESOURCE_TYPE_extra7188="AwsDirectoryService"
CHECK_ALTERNATE_check7188="extra7188"
CHECK_SERVICENAME_extra7188="ds"
CHECK_RISK_extra7188="As a best practice, you might need to configure the authentication protocol between the Microsoft AD DCs and the RADIUS/MFA server. Supported protocols are PAP, CHAP, MS-CHAPv1, and MS-CHAPv2. MS-CHAPv2 is recommended because it provides the strongest security of these options."
CHECK_REMEDIATION_extra7188="MS-CHAPv2 provides the strongest security of the options supported, and is therefore recommended"
CHECK_DOC_extra7188='https://aws.amazon.com/blogs/security/how-to-enable-multi-factor-authentication-for-amazon-workspaces-and-amazon-quicksight-by-using-microsoft-ad-and-on-premises-credentials/'
CHECK_CAF_EPIC_extra7188="Infrastructure Security"
extra7188(){
for regx in $REGIONS; do
LIST_OF_DIRECTORIES=$("${AWSCLI}" ds describe-directories $PROFILE_OPT --region "${regx}" --query 'DirectoryDescriptions[*]' --output json 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL' <<< "${LIST_OF_DIRECTORIES}"; then
textInfo "${regx}: Access Denied trying to describe directories" "${regx}"
continue
fi
if [[ $LIST_OF_DIRECTORIES && $LIST_OF_DIRECTORIES != '[]' ]]; then
LIST_OF_DIRECTORIES_WITHOUT_RADIUS=$(echo "${LIST_OF_DIRECTORIES}" | jq '.[] | select(.RadiusSettings == null) | {DirectoryId}' | jq -r '.DirectoryId')
LIST_OF_DIRECTORIES_WITH_RADIUS=$(echo "${LIST_OF_DIRECTORIES}" | jq '.[] | select(.RadiusSettings)')
LIST_OF_DIRECTORIES_WITH_RADIUS_RECOMMENDED_SECURITY_PROTOCOL=$(echo "${LIST_OF_DIRECTORIES_WITH_RADIUS}" | jq 'select(.RadiusSettings.AuthenticationProtocol=="MS-CHAPv2") | {DirectoryId}' | jq -r '.DirectoryId')
LIST_OF_DIRECTORIES_WITHOUT_RADIUS_RECOMMENDED_SECURITY_PROTOCOL=$(echo "${LIST_OF_DIRECTORIES_WITH_RADIUS}" | jq 'select(.RadiusSettings.AuthenticationProtocol!="MS-CHAPv2") | {DirectoryId}' | jq -r '.DirectoryId')
if [[ $LIST_OF_DIRECTORIES_WITHOUT_RADIUS_RECOMMENDED_SECURITY_PROTOCOL ]]; then
for directory in $LIST_OF_DIRECTORIES_WITHOUT_RADIUS_RECOMMENDED_SECURITY_PROTOCOL; do
textFail "$regx: Radius server of directory: ${directory} does not have recommended security protocol" "$regx" "${directory}"
done
fi
if [[ $LIST_OF_DIRECTORIES_WITH_RADIUS_RECOMMENDED_SECURITY_PROTOCOL ]]; then
for directory in $LIST_OF_DIRECTORIES_WITH_RADIUS_RECOMMENDED_SECURITY_PROTOCOL; do
textPass "$regx: Radius server of directory: ${directory} has recommended security protocol" "$regx" "${directory}"
done
fi
if [[ $LIST_OF_DIRECTORIES_WITHOUT_RADIUS ]]; then
for directory in $LIST_OF_DIRECTORIES_WITHOUT_RADIUS; do
textPass "${regx}: Directory ${directory} has not a Radius server" "${regx}" "${directory}"
done
fi
else
textPass "${regx}: No Directory Service directories found" "${regx}"
fi
done
}

60
checks/check_extra7189 Normal file
View File

@@ -0,0 +1,60 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7189="7.189"
CHECK_TITLE_extra7189="[extra7189] Ensure Multi-Factor Authentication (MFA) using Radius Server is enabled in DS"
CHECK_SCORED_extra7189="NOT_SCORED"
CHECK_CIS_LEVEL_extra7189="EXTRA"
CHECK_SEVERITY_extra7189="Medium"
CHECK_ASFF_TYPE_extra7189="Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
CHECK_ASFF_RESOURCE_TYPE_extra7189="AwsDirectoryService"
CHECK_ALTERNATE_check7189="extra7189"
CHECK_SERVICENAME_extra7189="ds"
CHECK_RISK_extra7189="Multi-Factor Authentication (MFA) adds an extra layer of authentication assurance beyond traditional username and password."
CHECK_REMEDIATION_extra7189="Enabling MFA provides increased security to a user name and password as it requires the user to possess a solution that displays a time-sensitive authentication code."
CHECK_DOC_extra7189='https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_mfa.html'
CHECK_CAF_EPIC_extra7189="Infrastructure Security"
extra7189(){
for regx in $REGIONS; do
LIST_OF_DIRECTORIES=$("${AWSCLI}" ds describe-directories $PROFILE_OPT --region "${regx}" --query 'DirectoryDescriptions[*]' --output json 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL' <<< "${LIST_OF_DIRECTORIES}"; then
textInfo "${regx}: Access Denied trying to describe directories" "${regx}"
continue
fi
if [[ $LIST_OF_DIRECTORIES && $LIST_OF_DIRECTORIES != '[]' ]]; then
LIST_OF_DIRECTORIES_WITHOUT_RADIUS=$(echo "${LIST_OF_DIRECTORIES}" | jq '.[] | select(.RadiusSettings == null) | {DirectoryId}' | jq -r '.DirectoryId')
LIST_OF_DIRECTORIES_WITH_RADIUS=$(echo "${LIST_OF_DIRECTORIES}" | jq '.[] | select(.RadiusSettings)')
LIST_OF_DIRECTORIES_WITH_RADIUS_MFA_COMPLETED=$(echo "${LIST_OF_DIRECTORIES_WITH_RADIUS}" | jq 'select(.RadiusStatus=="Completed") | {DirectoryId}' | jq -r '.DirectoryId')
LIST_OF_DIRECTORIES_WITHOUT_RADIUS_MFA_COMPLETED=$(echo "${LIST_OF_DIRECTORIES_WITH_RADIUS}" | jq 'select(.RadiusStatus!="Completed") | {DirectoryId}' | jq -r '.DirectoryId')
if [[ $LIST_OF_DIRECTORIES_WITHOUT_RADIUS_MFA_COMPLETED ]]; then
for directory in $LIST_OF_DIRECTORIES_WITHOUT_RADIUS_MFA_COMPLETED; do
textFail "$regx: Directory: ${directory} does not have Radius MFA enabled successfully" "$regx" "${directory}"
done
fi
if [[ $LIST_OF_DIRECTORIES_WITH_RADIUS_MFA_COMPLETED ]]; then
for directory in $LIST_OF_DIRECTORIES_WITH_RADIUS_MFA_COMPLETED; do
textPass "$regx: Directory: ${directory} has Radius MFA enabled" "$regx" "${directory}"
done
fi
if [[ $LIST_OF_DIRECTORIES_WITHOUT_RADIUS ]]; then
for directory in $LIST_OF_DIRECTORIES_WITHOUT_RADIUS; do
textPass "${regx}: Directory ${directory} does not have Radius Server configured" "${regx}" "${directory}"
done
fi
else
textPass "${regx}: No Directory Service directories found" "${regx}"
fi
done
}
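
For reference, a minimal sketch of how the jq filters above partition directories, run against a made-up describe-directories payload (the DirectoryId values and fields are illustrative):

SAMPLE='[{"DirectoryId":"d-111111","RadiusSettings":{"AuthenticationProtocol":"PAP"},"RadiusStatus":"Completed"},{"DirectoryId":"d-222222"}]'
# Directories with no Radius server at all -> d-222222
jq -r '.[] | select(.RadiusSettings == null) | .DirectoryId' <<< "${SAMPLE}"
# Directories with Radius configured and registration completed -> d-111111
jq -r '.[] | select(.RadiusSettings) | select(.RadiusStatus=="Completed") | .DirectoryId' <<< "${SAMPLE}"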

checks/check_extra7190 Normal file

@@ -0,0 +1,45 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7190="7.190"
CHECK_TITLE_extra7190="[extra7190] Ensure user maximum session duration is no longer than 10 hours."
CHECK_SCORED_extra7190="NOT_SCORED"
CHECK_CIS_LEVEL_extra7190="EXTRA"
CHECK_SEVERITY_extra7190="Medium"
CHECK_ASFF_TYPE_extra7190="Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
CHECK_ASFF_RESOURCE_TYPE_extra7190="AmazonAppStream"
CHECK_ALTERNATE_check7190="extra7190"
CHECK_SERVICENAME_extra7190="appstream"
CHECK_RISK_extra7190="Having a session duration lasting longer than 10 hours should not be necessary and if running for any malicious reasons provides a greater time for usage than should be allowed."
CHECK_REMEDIATION_extra7190="Change the Maximum session duration is set to 600 minutes or less for the AppStream Fleet"
CHECK_DOC_extra7190='https://docs.aws.amazon.com/appstream2/latest/developerguide/set-up-stacks-fleets.html'
CHECK_CAF_EPIC_extra7190="Infrastructure Security"
extra7190(){
for regx in $REGIONS; do
LIST_OF_FLEETS_WITH_MAX_SESSION_DURATION_ABOVE_RECOMMENDED=$("${AWSCLI}" appstream describe-fleets $PROFILE_OPT --region "${regx}" --query 'Fleets[?MaxUserDurationInSeconds>`36000`].Arn' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL|Connect timeout on endpoint URL' <<< "${LIST_OF_FLEETS_WITH_MAX_SESSION_DURATION_ABOVE_RECOMMENDED}"; then
textInfo "${regx}: Access Denied trying to describe appstream fleet(s)" "${regx}"
continue
fi
if [[ $LIST_OF_FLEETS_WITH_MAX_SESSION_DURATION_ABOVE_RECOMMENDED && $LIST_OF_FLEETS_WITH_MAX_SESSION_DURATION_ABOVE_RECOMMENDED != '[]' ]]; then
for Arn in $LIST_OF_FLEETS_WITH_MAX_SESSION_DURATION_ABOVE_RECOMMENDED; do
textFail "$regx: Fleet: ${Arn} has the maximum session duration configured for longer than 10 hours duration." "$regx" "${Arn}"
done
else
textPass "${regx}: No AppStream Fleets having a maximum session duration lasting longer than 10 hours found." "${regx}"
fi
done
}
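
As a remediation sketch (not part of the check): 10 hours is 10*60*60 = 36000 seconds, and the cap can be lowered with update-fleet. The fleet name is a placeholder; the flag assumes a current AWS CLI.

# Cap user sessions at 10 hours (36000 seconds) for a hypothetical fleet.
aws appstream update-fleet --name "MyFleet" --max-user-duration-in-seconds 36000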

checks/check_extra7191 Normal file

@@ -0,0 +1,45 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7191="7.191"
CHECK_TITLE_extra7191="[extra7191] Ensure session disconnect timeout is set to 5 minutes or less."
CHECK_SCORED_extra7191="NOT_SCORED"
CHECK_CIS_LEVEL_extra7191="EXTRA"
CHECK_SEVERITY_extra7191="Medium"
CHECK_ASFF_TYPE_extra7191="Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
CHECK_ASFF_RESOURCE_TYPE_extra7191="AmazonAppStream"
CHECK_ALTERNATE_check7191="extra7191"
CHECK_SERVICENAME_extra7191="appstream"
CHECK_RISK_extra7191="Disconnect timeout in minutes, is the amount of of time that a streaming session remains active after users disconnect."
CHECK_REMEDIATION_extra7191="Change the Disconnect timeout to 5 minutes or less for the AppStream Fleet"
CHECK_DOC_extra7191='https://docs.aws.amazon.com/appstream2/latest/developerguide/set-up-stacks-fleets.html'
CHECK_CAF_EPIC_extra7191="Infrastructure Security"
extra7191(){
for regx in $REGIONS; do
LIST_OF_FLEETS_WITH_SESSION_DISCONNECT_DURATION_ABOVE_RECOMMENDED=$("${AWSCLI}" appstream describe-fleets $PROFILE_OPT --region "${regx}" --query 'Fleets[?DisconnectTimeoutInSeconds>`300`].Arn' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL|Connect timeout on endpoint URL' <<< "${LIST_OF_FLEETS_WITH_SESSION_DISCONNECT_DURATION_ABOVE_RECOMMENDED}"; then
textInfo "${regx}: Access Denied trying to describe appstream fleet(s)" "${regx}"
continue
fi
if [[ $LIST_OF_FLEETS_WITH_SESSION_DISCONNECT_DURATION_ABOVE_RECOMMENDED && $LIST_OF_FLEETS_WITH_SESSION_DISCONNECT_DURATION_ABOVE_RECOMMENDED != '[]' ]]; then
for Arn in $LIST_OF_FLEETS_WITH_SESSION_DISCONNECT_DURATION_ABOVE_RECOMMENDED; do
textFail "$regx: Fleet: ${Arn} has the session disconnect timeout is set to more than 5 minutes." "$regx" "${Arn}"
done
else
textPass "${regx}: No AppStream Fleets having the session disconnect timeout set to more than 5 minutes found." "${regx}"
fi
done
}

checks/check_extra7192 Normal file

@@ -0,0 +1,45 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7192="7.192"
CHECK_TITLE_extra7192="[extra7192] Ensure session idle disconnect timeout is set to 10 minutes or less."
CHECK_SCORED_extra7192="NOT_SCORED"
CHECK_CIS_LEVEL_extra7192="EXTRA"
CHECK_SEVERITY_extra7192="Medium"
CHECK_ASFF_TYPE_extra7192="Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
CHECK_ASFF_RESOURCE_TYPE_extra7192="AmazonAppStream"
CHECK_ALTERNATE_check7192="extra7192"
CHECK_SERVICENAME_extra7192="appstream"
CHECK_RISK_extra7192="Idle disconnect timeout in minutes is the amount of time that users can be inactive before they are disconnected from their streaming session and the Disconnect timeout in minutes time begins."
CHECK_REMEDIATION_extra7192="Change the session idle timeout to 10 minutes or less for the AppStream Fleet."
CHECK_DOC_extra7192='https://docs.aws.amazon.com/appstream2/latest/developerguide/set-up-stacks-fleets.html'
CHECK_CAF_EPIC_extra7192="Infrastructure Security"
extra7192(){
for regx in $REGIONS; do
LIST_OF_FLEETS_WITH_SESSION_IDLE_DISCONNECT_DURATION_ABOVE_RECOMMENDED=$("${AWSCLI}" appstream describe-fleets $PROFILE_OPT --region "${regx}" --query 'Fleets[?IdleDisconnectTimeoutInSeconds>`600`].Arn' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL|Connect timeout on endpoint URL' <<< "${LIST_OF_FLEETS_WITH_SESSION_IDLE_DISCONNECT_DURATION_ABOVE_RECOMMENDED}"; then
textInfo "${regx}: Access Denied trying to describe appstream fleet(s)" "${regx}"
continue
fi
if [[ $LIST_OF_FLEETS_WITH_SESSION_IDLE_DISCONNECT_DURATION_ABOVE_RECOMMENDED && $LIST_OF_FLEETS_WITH_SESSION_IDLE_DISCONNECT_DURATION_ABOVE_RECOMMENDED != '[]' ]]; then
for Arn in $LIST_OF_FLEETS_WITH_SESSION_IDLE_DISCONNECT_DURATION_ABOVE_RECOMMENDED; do
textFail "$regx: Fleet: ${Arn} has the session idle disconnect timeout is set to more than 10 minutes." "$regx" "${Arn}"
done
else
textPass "${regx}: No AppStream Fleets having the session idle disconnect timeout set to more than 10 minutes found." "${regx}"
fi
done
}
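
The same update-fleet call can lower both timeouts flagged by extra7191 and extra7192; a hedged sketch with a placeholder fleet name (5 minutes = 300 seconds, 10 minutes = 600 seconds):

aws appstream update-fleet --name "MyFleet" \
  --disconnect-timeout-in-seconds 300 \
  --idle-disconnect-timeout-in-seconds 600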

checks/check_extra7193 Normal file

@@ -0,0 +1,45 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra7193="7.193"
CHECK_TITLE_extra7193="[extra7193] Ensure default Internet Access from your Amazon AppStream fleet streaming instances should remain unchecked."
CHECK_SCORED_extra7193="NOT_SCORED"
CHECK_CIS_LEVEL_extra7193="EXTRA"
CHECK_SEVERITY_extra7193="Medium"
CHECK_ASFF_TYPE_extra7193="Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
CHECK_ASFF_RESOURCE_TYPE_extra7193="AmazonAppStream"
CHECK_ALTERNATE_check7193="extra7193"
CHECK_SERVICENAME_extra7193="appstream"
CHECK_RISK_extra7193="Default Internet Access from your fleet streaming instances should be controlled using a NAT gateway in the VPC."
CHECK_REMEDIATION_extra7193="Uncheck the default internet access for the AppStream Fleet."
CHECK_DOC_extra7193='https://docs.aws.amazon.com/appstream2/latest/developerguide/set-up-stacks-fleets.html'
CHECK_CAF_EPIC_extra7193="Infrastructure Security"
extra7193(){
for regx in $REGIONS; do
LIST_OF_FLEETS_WITH_DEFAULT_INTERNET_ACCESS_ENABLED=$("${AWSCLI}" appstream describe-fleets $PROFILE_OPT --region "${regx}" --query 'Fleets[?EnableDefaultInternetAccess==`true`].Arn' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL|Connect timeout on endpoint URL' <<< "${LIST_OF_FLEETS_WITH_DEFAULT_INTERNET_ACCESS_ENABLED}"; then
textInfo "${regx}: Access Denied trying to describe appstream fleet(s)" "${regx}"
continue
fi
if [[ $LIST_OF_FLEETS_WITH_DEFAULT_INTERNET_ACCESS_ENABLED && $LIST_OF_FLEETS_WITH_DEFAULT_INTERNET_ACCESS_ENABLED != '[]' ]]; then
for Arn in $LIST_OF_FLEETS_WITH_DEFAULT_INTERNET_ACCESS_ENABLED; do
textFail "$regx: Fleet: ${Arn} has default internet access enabled." "$regx" "${Arn}"
done
else
textPass "${regx}: No AppStream Fleets have default internet access enabled." "${regx}"
fi
done
}

checks/check_extra7194 Normal file

@@ -0,0 +1,61 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2018) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
# Remediation:
#
# https://docs.aws.amazon.com/AmazonECR/latest/userguide/lp_creation.html
#
# aws ecr put-lifecycle-policy \
# --repository-name repository-name \
# --lifecycle-policy-text file://policy.json
CHECK_ID_extra7194="7.194"
CHECK_TITLE_extra7194="[extra7194] Check if ECR repositories have lifecycle policies enabled"
CHECK_SCORED_extra7194="NOT_SCORED"
CHECK_CIS_LEVEL_extra7194="EXTRA"
CHECK_SEVERITY_extra7194="Low"
CHECK_ALTERNATE_check776="extra7194"
CHECK_SERVICENAME_extra7194="ecr"
CHECK_ASFF_RESOURCE_TYPE_extra7194="AwsEcrRepository"
CHECK_RISK_extra7194='Amazon ECR repositories run the risk of retaining huge volumes of images, increasing unnecessary cost.'
CHECK_REMEDIATION_extra7194='Open the Amazon ECR console. Create an ECR lifecycle policy.'
CHECK_DOC_extra7194='https://docs.aws.amazon.com/AmazonECR/latest/userguide/LifecyclePolicies.html'
CHECK_CAF_EPIC_extra7194=''
extra7194(){
for region in ${REGIONS}; do
# List ECR repositories
LIST_ECR_REPOS=$(${AWSCLI} ecr describe-repositories ${PROFILE_OPT} --region "${region}" --query "repositories[*].[repositoryName]" --output text 2>&1)
# Handle authorization errors
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL' <<< "${LIST_ECR_REPOS}"; then
textInfo "${region}: Access Denied trying to describe ECR repositories" "${region}"
continue
fi
if [[ -n "${LIST_ECR_REPOS}" ]]; then
for repo in ${LIST_ECR_REPOS}; do
# Check if a lifecycle policy exists
LIFECYCLE_POLICY=$($AWSCLI ecr get-lifecycle-policy ${PROFILE_OPT} --region "${region}" --repository-name "${repo}" --query "repositoryName" --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL' <<< "${LIFECYCLE_POLICY}"; then
textInfo "${region}: Access Denied trying to get lifecycle policy from repository: ${repo}" "${region}"
continue
elif grep -q -E 'LifecyclePolicyNotFoundException' <<< "$LIFECYCLE_POLICY"; then
textFail "${region}: ECR repository ${repo} has no lifecycle policy" "${region}" "${repo}"
else
textPass "${region}: ECR repository ${repo} has a lifecycle policy" "${region}" "${repo}"
fi
done
else
textPass "${region}: No ECR repositories found" "${region}"
fi
done
}
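
A minimal policy.json for the put-lifecycle-policy command shown in the header comment could look like the following; the rule values and repository name are illustrative, not a recommendation:

# Write an example lifecycle policy and attach it to a hypothetical repository.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images older than 14 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": { "type": "expire" }
    }
  ]
}
EOF
aws ecr put-lifecycle-policy --repository-name "my-repo" --lifecycle-policy-text file://policy.json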

checks/check_extra7195 Normal file

@@ -0,0 +1,105 @@
#!/usr/bin/env bash
# Prowler - the handy cloud security tool (copyright 2019) by Toni de la Fuente
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
# Remediation:
#
# URLs to the relevant/official documentation:
# https://docs.aws.amazon.com/codeartifact/latest/ug/package-origin-controls.html
# https://zego.engineering/dependency-confusion-in-aws-codeartifact-86b9ff68963d
# https://aws.amazon.com/blogs/devops/tighten-your-package-security-with-codeartifact-package-origin-control-toolkit/
#
#
# Commands or steps to fix it, if available, e.g.:
# aws codeartifact put-package-origin-configuration \
# --package "MyPackage" \
# --namespace "MyNamespace" \ #You don't need namespace for npm or pypi
# --domain "MyDomain" \
# --repository "MyRepository" \
# --domain-owner "MyOwnerAccount" \
# --format "MyFormat" \ # npm/pypi/maven
# --restrictions 'publish=ALLOW,upstream=BLOCK'
CHECK_ID_extra7195="7.195"
CHECK_TITLE_extra7195="[check7195] Ensure CodeArtifact internal packages do not allow external public source publishing."
CHECK_SCORED_extra7195="NOT_SCORED"
CHECK_CIS_LEVEL_extra7195="EXTRA"
CHECK_SEVERITY_extra7195="Critical"
CHECK_ASFF_RESOURCE_TYPE_extra7195="Other"
CHECK_ALTERNATE_check7195="extra7195"
CHECK_SERVICENAME_extra7195="codeartifact"
CHECK_RISK_extra7195="Allowing package versions of a package to be added both by direct publishing and ingesting from public repositories makes you vulnerable to a dependency substitution attack."
CHECK_REMEDIATION_extra7195="Configure package origin controls on a package in a repository to limit how versions of that package can be added to the repository."
CHECK_DOC_extra7195="https://docs.aws.amazon.com/codeartifact/latest/ug/package-origin-controls.html"
CHECK_CAF_EPIC_extra7195=""
extra7195(){
# Checks Code Artifact packages for Dependency Confusion
# Looking for codeartifact repositories in all regions
for regx in ${REGIONS}; do
LIST_OF_REPOSITORIES=$("${AWSCLI}" codeartifact list-repositories ${PROFILE_OPT} --region "${regx}" --query 'repositories[*].[name,domainName,domainOwner]' --output text 2>&1)
if [[ $(echo "${LIST_OF_REPOSITORIES}" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL|ExpiredToken') ]]; then
textInfo "${regx}: Access Denied trying to list repositories" "${regx}"
continue
fi
if [[ "${LIST_OF_REPOSITORIES}" != "" && "${LIST_OF_REPOSITORIES}" != "none" ]]; then
while read -r REPOSITORY DOMAIN ACCOUNT; do
# Iterate over repositories to get packages
# Found repository scanning packages
LIST_OF_PACKAGES=$(aws codeartifact list-packages --repository "$REPOSITORY" --domain "$DOMAIN" --domain-owner "$ACCOUNT" ${PROFILE_OPT} --region "${regx}" --query 'packages[*].[package, namespace, format, originConfiguration.restrictions.upstream]' --output text 2>&1)
if [[ $(echo "${LIST_OF_PACKAGES}" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL|ExpiredToken') ]]; then
textInfo "${regx}: Access Denied trying to list packages for repository: ${REPOSITORY}" "${regx}" "${REPOSITORY}"
continue
fi
if [[ "${LIST_OF_PACKAGES}" != "" && "${LIST_OF_PACKAGES}" != "none" ]]; then
while read -r PACKAGE NAMESPACE FORMAT UPSTREAM; do
# Get the latest version of the package; we assume that if the latest version is internal, the package is internal
# textInfo "Found package: $(if [[ "$NAMESPACE" != "" && "$NAMESPACE" != "None" ]]; then echo "${NAMESPACE}:"; fi)${PACKAGE}"
LATEST=$(aws codeartifact list-package-versions --package "$PACKAGE" $(if [[ "$NAMESPACE" != "" && "$NAMESPACE" != "None" ]]; then echo "--namespace $NAMESPACE"; fi) --domain "$DOMAIN" --repository "$REPOSITORY" --domain-owner "$ACCOUNT" --format "$FORMAT" ${PROFILE_OPT} --region "${regx}" --sort-by PUBLISHED_TIME --no-paginate --query 'versions[0].version' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL|ExpiredToken' <<< "${LATEST}"; then
textInfo "${regx}: Access Denied trying to get latest version for packages: $(if [[ "$NAMESPACE" != "" && "$NAMESPACE" != "None" ]]; then echo "${NAMESPACE}:"; fi)${PACKAGE}" "${regx}"
continue
fi
if grep -q -E 'ResourceNotFoundException' <<< "${LATEST}"; then
textInfo "${regx}: Package not found for package: $(if [[ "$NAMESPACE" != "" && "$NAMESPACE" != "None" ]]; then echo "${NAMESPACE}:"; fi)${PACKAGE}" "${regx}"
continue
fi
LATEST=$(head -n 1 <<< $LATEST)
# textInfo "Latest version: ${LATEST}"
# Get the origin type for the latest version
ORIGIN_TYPE=$(aws codeartifact describe-package-version --package "$PACKAGE" $(if [[ "$NAMESPACE" != "" && "$NAMESPACE" != "None" ]]; then echo "--namespace $NAMESPACE"; fi) --domain "$DOMAIN" --repository "$REPOSITORY" --domain-owner "$ACCOUNT" --format "$FORMAT" --package-version "$LATEST" ${PROFILE_OPT} --region "${regx}" --query 'packageVersion.origin.originType' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError|Could not connect to the endpoint URL|ExpiredToken' <<< "${ORIGIN_TYPE}"; then
textInfo "${regx}: Access Denied trying to get origin type of package $(if [[ "$NAMESPACE" != "" && "$NAMESPACE" != "None" ]]; then echo "${NAMESPACE}:"; fi)${PACKAGE}:${LATEST}" "${regx}" "${PACKAGE}"
continue
fi
if grep -q -E 'INTERNAL|UNKNOWN' <<< "${ORIGIN_TYPE}"; then
# The package is internal
if [[ "$UPSTREAM" == "ALLOW" ]]; then
# The package is not configured to block upstream fail check
textFail "${regx}: Internal package $(if [[ "$NAMESPACE" != "" && "$NAMESPACE" != "None" ]]; then echo "${NAMESPACE}:"; fi)${PACKAGE} is vulnerable to dependency confusion in repository ${REPOSITORY}" "${regx}" "${PACKAGE}"
else
textPass "${regx}: Internal package $(if [[ "$NAMESPACE" != "" && "$NAMESPACE" != "None" ]]; then echo "${NAMESPACE}:"; fi)${PACKAGE} is NOT vulnerable to dependency confusion in repository ${REPOSITORY}" "${regx}" "${PACKAGE}"
fi
fi
done <<< "${LIST_OF_PACKAGES}"
else
textInfo "${regx}: No packages found in ${REPOSITORY}" "${regx}" "${REPOSITORY}"
fi
done <<< "${LIST_OF_REPOSITORIES}"
else
textPass "${regx}: No repositories found" "${regx}"
fi
done
}
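
To remediate a failing package, the origin-control command from the header comment is applied per package; a sketch with placeholder names (omit --namespace for npm and pypi):

# Keep direct publishing allowed but block ingestion from upstream public sources.
aws codeartifact put-package-origin-configuration \
  --domain "MyDomain" --domain-owner "111122223333" \
  --repository "MyRepository" --format npm --package "MyPackage" \
  --restrictions publish=ALLOW,upstream=BLOCK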


@@ -33,13 +33,18 @@ extra72(){
textInfo "$regx: Access Denied trying to describe snapshot" "$regx"
continue
fi
for snapshot in $LIST_OF_EBS_SNAPSHOTS; do
SNAPSHOT_IS_PUBLIC=$($AWSCLI ec2 describe-snapshot-attribute $PROFILE_OPT --region $regx --output text --snapshot-id $snapshot --attribute createVolumePermission --query "CreateVolumePermissions[?Group=='all']")
if [[ $SNAPSHOT_IS_PUBLIC ]];then
textFail "$regx: $snapshot is currently Public!" "$regx" "$snapshot"
else
textPass "$regx: $snapshot is not Public" "$regx" "$snapshot"
fi
done
if [[ ${LIST_OF_EBS_SNAPSHOTS} ]]
then
for snapshot in $LIST_OF_EBS_SNAPSHOTS; do
SNAPSHOT_IS_PUBLIC=$($AWSCLI ec2 describe-snapshot-attribute $PROFILE_OPT --region $regx --output text --snapshot-id $snapshot --attribute createVolumePermission --query "CreateVolumePermissions[?Group=='all']")
if [[ $SNAPSHOT_IS_PUBLIC ]];then
textFail "$regx: $snapshot is currently Public!" "$regx" "$snapshot"
else
textPass "$regx: $snapshot is not Public" "$regx" "$snapshot"
fi
done
else
textPass "$regx: There is no EBS Snapshots" "$regx" "No EBS Snapshots"
fi
done
}


@@ -27,17 +27,26 @@ CHECK_CAF_EPIC_extra728='Data Protection'
extra728(){
for regx in $REGIONS; do
LIST_SQS=$($AWSCLI sqs list-queues $PROFILE_OPT --region $regx --query QueueUrls --output text 2>&1|grep -v ^None )
if [[ $(echo "$LIST_SQS" | grep -E 'AccessDenied|UnauthorizedOperation') ]]; then
LIST_SQS=$("$AWSCLI" sqs list-queues $PROFILE_OPT --region "$regx" --query QueueUrls --output text 2>&1|grep -v ^None )
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${LIST_SQS}"; then
textInfo "$regx: Access Denied trying to list queues" "$regx"
continue
fi
if [[ $LIST_SQS ]]; then
for queue in $LIST_SQS; do
# check if the queue attributes include KmsMasterKeyId or SqsManagedSseEnabled, i.e. SSE is enabled
SSE_ENABLED_QUEUE=$($AWSCLI sqs get-queue-attributes --queue-url $queue $PROFILE_OPT --region $regx --attribute-names All --query Attributes.KmsMasterKeyId --output text|grep -v ^None)
if [[ $SSE_ENABLED_QUEUE ]]; then
textPass "$regx: SQS queue $queue is using Server Side Encryption" "$regx" "$queue"
SSE_KMS_ENABLED_QUEUE=$("$AWSCLI" sqs get-queue-attributes --queue-url "$queue" $PROFILE_OPT --region "$regx" --attribute-names All --query Attributes.KmsMasterKeyId --output text)
SSE_SQS_ENABLED_QUEUE=$("$AWSCLI" sqs get-queue-attributes --queue-url "$queue" $PROFILE_OPT --region "$regx" --attribute-names All --query Attributes.SqsManagedSseEnabled --output text)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${SSE_SQS_ENABLED_QUEUE}"; then
textInfo "$regx: Access Denied trying to list queues" "$regx"
continue
elif grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${SSE_KMS_ENABLED_QUEUE}"; then
textInfo "$regx: Access Denied trying to list queues" "$regx"
continue
elif [[ "$SSE_KMS_ENABLED_QUEUE" != 'None' ]]; then
textPass "$regx: SQS queue $queue is using KMS Server Side Encryption" "$regx" "$queue"
elif [[ "$SSE_SQS_ENABLED_QUEUE" == 'true' ]]; then
textPass "$regx: SQS queue $queue is using SQS Server Side Encryption" "$regx" "$queue"
else
textFail "$regx: SQS queue $queue is not using Server Side Encryption" "$regx" "$queue"
fi
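
The two attributes queried above distinguish the encryption modes: KmsMasterKeyId is set for KMS-based SSE, while SqsManagedSseEnabled reports SQS-owned SSE. A quick way to inspect a single queue directly (the queue URL is a placeholder):

aws sqs get-queue-attributes \
  --queue-url "https://sqs.eu-west-1.amazonaws.com/111122223333/my-queue" \
  --attribute-names KmsMasterKeyId SqsManagedSseEnabled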


@@ -28,21 +28,25 @@ CHECK_CAF_EPIC_extra729='Data Protection'
extra729(){
# "Ensure there are no EBS Volumes unencrypted "
for regx in $REGIONS; do
LIST_OF_EBS_NON_ENC_VOLUMES=$($AWSCLI ec2 describe-volumes $PROFILE_OPT --region $regx --query 'Volumes[?Encrypted==`false`].VolumeId' --output text 2>&1)
if [[ $(echo "$LIST_OF_EBS_NON_ENC_VOLUMES" | grep -E 'AccessDenied|UnauthorizedOperation') ]]; then
textInfo "$regx: Access Denied trying to describe volumes" "$regx"
continue
fi
if [[ $LIST_OF_EBS_NON_ENC_VOLUMES ]];then
for volume in $LIST_OF_EBS_NON_ENC_VOLUMES; do
textFail "$regx: $volume is not encrypted!" "$regx" "$volume"
done
fi
LIST_OF_EBS_ENC_VOLUMES=$($AWSCLI ec2 describe-volumes $PROFILE_OPT --region $regx --query 'Volumes[?Encrypted==`true`].VolumeId' --output text)
if [[ $LIST_OF_EBS_ENC_VOLUMES ]];then
for volume in $LIST_OF_EBS_ENC_VOLUMES; do
textPass "$regx: $volume is encrypted" "$regx" "$volume"
done
fi
LIST_OF_EBS_NON_ENC_VOLUMES=$($AWSCLI ec2 describe-volumes $PROFILE_OPT --region $regx --query 'Volumes[?Encrypted==`false`].VolumeId' --output text 2>&1)
if [[ $(echo "$LIST_OF_EBS_NON_ENC_VOLUMES" | grep -E 'AccessDenied|UnauthorizedOperation') ]]; then
textInfo "$regx: Access Denied trying to describe volumes" "$regx"
continue
fi
if [[ $LIST_OF_EBS_NON_ENC_VOLUMES ]];then
for volume in $LIST_OF_EBS_NON_ENC_VOLUMES; do
textFail "$regx: $volume is not encrypted!" "$regx" "$volume"
done
fi
LIST_OF_EBS_ENC_VOLUMES=$($AWSCLI ec2 describe-volumes $PROFILE_OPT --region $regx --query 'Volumes[?Encrypted==`true`].VolumeId' --output text)
if [[ $LIST_OF_EBS_ENC_VOLUMES ]];then
for volume in $LIST_OF_EBS_ENC_VOLUMES; do
textPass "$regx: $volume is encrypted" "$regx" "$volume"
done
fi
if [[ ! "${LIST_OF_EBS_NON_ENC_VOLUMES}" ]] && [[ ! "${LIST_OF_EBS_ENC_VOLUMES}" ]]
then
textPass "$regx: There are no ebs volumes" "$regx" "No ebs volumes"
fi
done
}


@@ -36,6 +36,6 @@ extra732(){
fi
done
else
textInfo "$REGION: No CloudFront distributions found"
textInfo "$REGION: No CloudFront distributions found" "$REGION" "$ACCOUNT_NUM"
fi
}


@@ -34,13 +34,18 @@ extra74(){
textInfo "$regx: Access Denied trying to describe security groups" "$regx"
continue
fi
for SG_ID in $LIST_OF_SECURITYGROUPS; do
SG_NO_INGRESS_FILTER=$($AWSCLI ec2 describe-network-interfaces $PROFILE_OPT --region $regx --filters "Name=group-id,Values=$SG_ID" --query "length(NetworkInterfaces)" --output text)
if [[ $SG_NO_INGRESS_FILTER -ne 0 ]];then
textFail "$regx: $SG_ID has no ingress filtering and it is being used!" "$regx" "$SG_ID"
else
textInfo "$regx: $SG_ID has no ingress filtering but it is not being used" "$regx" "$SG_ID"
fi
done
if [[ ${LIST_OF_SECURITYGROUPS} ]]
then
for SG_ID in $LIST_OF_SECURITYGROUPS; do
SG_NO_INGRESS_FILTER=$($AWSCLI ec2 describe-network-interfaces $PROFILE_OPT --region $regx --filters "Name=group-id,Values=$SG_ID" --query "length(NetworkInterfaces)" --output text)
if [[ $SG_NO_INGRESS_FILTER -ne 0 ]];then
textFail "$regx: $SG_ID has no ingress filtering and it is being used!" "$regx" "$SG_ID"
else
textInfo "$regx: $SG_ID has no ingress filtering but it is not being used" "$regx" "$SG_ID"
fi
done
else
textPass "$regx: There is no EC2 Security Groups" "$regx" "No EBS Snapshots"
fi
done
}


@@ -26,71 +26,39 @@ CHECK_CAF_EPIC_extra740='Data Protection'
extra740(){
# This does NOT use max-items, which would limit the number of items
# considered. It considers all snapshots, but only reports at most
# max-items passing and max-items failing.
for regx in ${REGIONS}; do
UNENCRYPTED_SNAPSHOTS=$(${AWSCLI} ec2 describe-snapshots ${PROFILE_OPT} \
--region ${regx} --owner-ids ${ACCOUNT_NUM} --output text \
--query 'Snapshots[?Encrypted==`false`]|[*].{Id:SnapshotId}' 2>&1 \
--query 'Snapshots[?Encrypted==`false`]|[*].{Id:SnapshotId}' --max-items $MAXITEMS 2>&1 \
| grep -v None )
if [[ $(echo "$UNENCRYPTED_SNAPSHOTS" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe snapshots" "$regx"
continue
fi
fi
ENCRYPTED_SNAPSHOTS=$(${AWSCLI} ec2 describe-snapshots ${PROFILE_OPT} \
--region ${regx} --owner-ids ${ACCOUNT_NUM} --output text \
--query 'Snapshots[?Encrypted==`true`]|[*].{Id:SnapshotId}' 2>&1 \
--query 'Snapshots[?Encrypted==`true`]|[*].{Id:SnapshotId}' --max-items $MAXITEMS 2>&1 \
| grep -v None )
if [[ $(echo "$ENCRYPTED_SNAPSHOTS" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe snapshots" "$regx"
continue
fi
typeset -i unencrypted
typeset -i encrypted
unencrypted=0
encrypted=0
fi
if [[ ${UNENCRYPTED_SNAPSHOTS} ]]; then
for snapshot in ${UNENCRYPTED_SNAPSHOTS}; do
unencrypted=${unencrypted}+1
if [ "${unencrypted}" -le "${MAXITEMS}" ]; then
textFail "${regx}: ${snapshot} is not encrypted!" "${regx}" "${snapshot}"
fi
textFail "${regx}: ${snapshot} is not encrypted!" "${regx}" "${snapshot}"
done
fi
if [[ ${ENCRYPTED_SNAPSHOTS} ]]; then
for snapshot in ${ENCRYPTED_SNAPSHOTS}; do
encrypted=${encrypted}+1
if [ "${encrypted}" -le "${MAXITEMS}" ]; then
textPass "${regx}: ${snapshot} is encrypted." "${regx}" "${snapshot}"
fi
textPass "${regx}: ${snapshot} is encrypted." "${regx}" "${snapshot}"
done
fi
if [[ "${encrypted}" = "0" ]] && [[ "${unencrypted}" = "0" ]] ; then
textInfo "${regx}: No EBS volume snapshots" "${regx}"
else
typeset -i total
total=${encrypted}+${unencrypted}
if [[ "${unencrypted}" -ge "${MAXITEMS}" ]]; then
textFail "${unencrypted} unencrypted snapshots out of ${total} snapshots found. Only the first ${MAXITEMS} unencrypted snapshots are reported!"
fi
if [[ "${encrypted}" -ge "${MAXITEMS}" ]]; then
textPass "${encrypted} encrypted snapshots out of ${total} snapshots found. Only the first ${MAXITEMS} encrypted snapshots are reported."
fi
# Bit of 'bc' magic to print something like 10.42% or 0.85% or similar. 'bc' has a
# bug where it will never print leading zeros. So 0.5 is output as ".5". This has a
# little extra clause to print a 0 if 0 < x < 1.
ratio=$(echo "scale=2; p=(100*${encrypted}/(${encrypted}+${unencrypted})); if(p<1 && p>0) print 0;print p, \"%\";" | bc 2>/dev/null)
exit=$?
# maybe 'bc' doesn't exist, or it exits with an error
if [[ "${exit}" = "0" ]]
then
textInfo "${regx}: ${ratio} encrypted EBS volumes (${encrypted} out of ${total})" "${regx}"
else
textInfo "${regx}: ${unencrypted} unencrypted EBS volume snapshots out of ${total} total snapshots" "${regx}"
fi
if [[ -z ${ENCRYPTED_SNAPSHOTS} ]] && [[ -z ${UNENCRYPTED_SNAPSHOTS} ]] ; then
textInfo "${regx}: No EBS volume snapshots." "${regx}"
fi
done
}


@@ -18,23 +18,29 @@ CHECK_SEVERITY_extra745="Medium"
CHECK_ASFF_RESOURCE_TYPE_extra745="AwsApiGatewayRestApi"
CHECK_ALTERNATE_check745="extra745"
CHECK_SERVICENAME_extra745="apigateway"
CHECK_RISK_extra745='If accessible from internet without restrictions opens up attack / abuse surface for any malicious user.'
CHECK_REMEDIATION_extra745='Verify that any public Api Gateway is protected and audited. Detective controls for common risks should be implemented.'
CHECK_RISK_extra745='If accessible from the internet, the API opens up an attack/abuse surface for any malicious user.'
CHECK_REMEDIATION_extra745='Make Api Gateway private if a public endpoint is not necessary. Detective controls for common risks should be implemented.'
CHECK_DOC_extra745='https://d1.awsstatic.com/whitepapers/api-gateway-security.pdf?svrd_sip6'
CHECK_CAF_EPIC_extra745='Infrastructure Security'
extra745(){
for regx in $REGIONS; do
LIST_OF_REST_APIS=$($AWSCLI $PROFILE_OPT --region $regx apigateway get-rest-apis --query 'items[*].id' --output text 2>&1)
if [[ $(echo "$LIST_OF_REST_APIS" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
LIST_OF_REST_APIS=$("$AWSCLI" $PROFILE_OPT --region "$regx" apigateway get-rest-apis --query 'items[*].id' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "$LIST_OF_REST_APIS"; then
textInfo "$regx: Access Denied trying to get rest APIs" "$regx"
continue
fi
if [[ $LIST_OF_REST_APIS ]];then
for api in $LIST_OF_REST_APIS; do
API_GW_NAME=$($AWSCLI apigateway get-rest-apis $PROFILE_OPT --region $regx --query "items[?id==\`$api\`].name" --output text)
ENDPOINT_CONFIG_TYPE=$($AWSCLI $PROFILE_OPT --region $regx apigateway get-rest-api --rest-api-id $api --query endpointConfiguration.types --output text)
if [[ $ENDPOINT_CONFIG_TYPE ]]; then
API_GW_NAME=$("$AWSCLI" apigateway get-rest-apis $PROFILE_OPT --region $regx --query "items[?id==\`$api\`].name" --output text)
ENDPOINT_CONFIG_TYPE=$("$AWSCLI" $PROFILE_OPT --region "$regx" apigateway get-rest-api --rest-api-id "$api" --query endpointConfiguration.types --output text)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "$API_GW_NAME"; then
textInfo "$regx: Access Denied trying to get rest APIs" "$regx"
continue
elif grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "$ENDPOINT_CONFIG_TYPE"; then
textInfo "$regx: Access Denied trying to get rest APIs" "$regx"
continue
elif [[ $ENDPOINT_CONFIG_TYPE ]]; then
case $ENDPOINT_CONFIG_TYPE in
PRIVATE )
textPass "$regx: API Gateway $API_GW_NAME ID $api is set as $ENDPOINT_CONFIG_TYPE" "$regx" "$API_GW_NAME"


@@ -26,11 +26,11 @@ CHECK_CAF_EPIC_extra749='Infrastructure Security'
extra749(){
for regx in $REGIONS; do
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || ((FromPort<=`1521` && ToPort>=`1521`)||(FromPort<=`2483` && ToPort>=`2483`))) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`))]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || ((FromPort<=`1521` && ToPort>=`1521`)||(FromPort<=`2483` && ToPort>=`2483`))) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`)) && (IpProtocol==`tcp`)]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
if [[ $(echo "$SG_LIST" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe security groups" "$regx"
continue
fi
fi
if [[ $SG_LIST ]];then
for SG in $SG_LIST;do
textFail "$regx: Found Security Group: $SG open to 0.0.0.0/0 for Oracle ports" "$regx" "$SG"


@@ -26,11 +26,11 @@ CHECK_CAF_EPIC_extra750='Infrastructure Security'
extra750(){
for regx in $REGIONS; do
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort<=`3306` && ToPort>=`3306`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`))]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort<=`3306` && ToPort>=`3306`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`)) && (IpProtocol==`tcp`)]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
if [[ $(echo "$SG_LIST" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe security groups" "$regx"
continue
fi
fi
if [[ $SG_LIST ]];then
for SG in $SG_LIST;do
textFail "$regx: Found Security Group: $SG open to 0.0.0.0/0 for MySQL port" "$regx" "$SG"


@@ -26,11 +26,11 @@ CHECK_CAF_EPIC_extra751='Infrastructure Security'
extra751(){
for regx in $REGIONS; do
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort<=`5432` && ToPort>=`5432`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`))]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort<=`5432` && ToPort>=`5432`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`)) && (IpProtocol==`tcp`)]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
if [[ $(echo "$SG_LIST" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe security groups" "$regx"
continue
fi
fi
if [[ $SG_LIST ]];then
for SG in $SG_LIST;do
textFail "$regx: Found Security Group: $SG open to 0.0.0.0/0 for Postgres port" "$regx" "$SG"


@@ -26,11 +26,11 @@ CHECK_CAF_EPIC_extra752='Infrastructure Security'
extra752(){
for regx in $REGIONS; do
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort<=`6379` && ToPort>=`6379`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`))]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || (FromPort<=`6379` && ToPort>=`6379`)) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`)) && (IpProtocol==`tcp`)]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
if [[ $(echo "$SG_LIST" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe security groups" "$regx"
continue
fi
fi
if [[ $SG_LIST ]];then
for SG in $SG_LIST;do
textFail "$regx: Found Security Group: $SG open to 0.0.0.0/0 for Redis port" "$regx" "$SG"


@@ -26,11 +26,11 @@ CHECK_CAF_EPIC_extra753='Infrastructure Security'
extra753(){
for regx in $REGIONS; do
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || ((FromPort<=`27017` && ToPort>=`27017`) || (FromPort<=`27018` && ToPort>=`27018`))) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`))]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || ((FromPort<=`27017` && ToPort>=`27017`) || (FromPort<=`27018` && ToPort>=`27018`))) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`)) && (IpProtocol==`tcp`)]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
if [[ $(echo "$SG_LIST" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe security groups" "$regx"
continue
fi
fi
if [[ $SG_LIST ]];then
for SG in $SG_LIST;do
textFail "$regx: Found Security Group: $SG open to 0.0.0.0/0 for MongoDB ports" "$regx" "$SG"


@@ -26,11 +26,11 @@ CHECK_CAF_EPIC_extra754='Infrastructure Security'
extra754(){
for regx in $REGIONS; do
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || ((FromPort<=`7199` && ToPort>=`7199`) || (FromPort<=`9160` && ToPort>=`9160`)|| (FromPort<=`8888` && ToPort>=`8888`))) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`))]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
SG_LIST=$($AWSCLI ec2 describe-security-groups --query 'SecurityGroups[?length(IpPermissions[?((FromPort==null && ToPort==null) || ((FromPort<=`7199` && ToPort>=`7199`) || (FromPort<=`9160` && ToPort>=`9160`)|| (FromPort<=`8888` && ToPort>=`8888`))) && (contains(IpRanges[].CidrIp, `0.0.0.0/0`) || contains(Ipv6Ranges[].CidrIpv6, `::/0`)) && (IpProtocol==`tcp`)]) > `0`].{GroupId:GroupId}' $PROFILE_OPT --region $regx --output text 2>&1)
if [[ $(echo "$SG_LIST" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe security groups" "$regx"
continue
fi
fi
if [[ $SG_LIST ]];then
for SG in $SG_LIST;do
textFail "$regx: Found Security Group: $SG open to 0.0.0.0/0 for Cassandra ports" "$regx" "$SG"


@@ -30,7 +30,7 @@ extra755(){
if [[ $(echo "$SG_LIST" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
textInfo "$regx: Access Denied trying to describe security groups" "$regx"
continue
fi
fi
if [[ $SG_LIST ]];then
for SG in $SG_LIST;do
textFail "$regx: Found Security Group: $SG open to 0.0.0.0/0 for Memcached port" "$regx" "$SG"


@@ -27,24 +27,22 @@ extra762(){
# regex to match OBSOLETE runtimes in string functionName%runtime
# https://docs.aws.amazon.com/lambda/latest/dg/runtime-support-policy.html
OBSOLETE='%(nodejs4.3|nodejs4.3-edge|nodejs6.10|nodejs8.10|dotnetcore1.0|dotnetcore2.0)'
OBSOLETE='nodejs4.3|nodejs4.3-edge|nodejs6.10|nodejs8.10|dotnetcore1.0|dotnetcore2.0|dotnetcore2.1|python3.6|python2.7|ruby2.5|nodejs10.x|nodejs'
for regx in $REGIONS; do
LIST_OF_FUNCTIONS=$($AWSCLI lambda list-functions $PROFILE_OPT --region $regx --output text --query 'Functions[*].{R:Runtime,N:FunctionName}' 2>&1| tr "\t" "%" )
if [[ $(echo "$LIST_OF_FUNCTIONS" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
LIST_OF_FUNCTIONS=$("$AWSCLI" lambda list-functions $PROFILE_OPT --region "$regx" --query 'Functions[*].[Runtime,FunctionName]' --output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "$LIST_OF_FUNCTIONS"; then
textInfo "$regx: Access Denied trying to list functions" "$regx"
continue
fi
if [[ $LIST_OF_FUNCTIONS ]]; then
for lambdafunction in $LIST_OF_FUNCTIONS;do
fname=$(echo "$lambdafunction" | cut -d'%' -f1)
runtime=$(echo "$lambdafunction" | cut -d'%' -f2)
if echo "$lambdafunction" | grep -Eq $OBSOLETE ; then
textFail "$regx: Obsolete runtime: ${runtime} used by: ${fname}" "$regx" "${fname}"
while read -r FUNCTION_RUNTIME FUNCTION_NAME; do
if grep -wEq "${OBSOLETE}" <<< "${FUNCTION_RUNTIME}" ; then
textFail "$regx: Obsolete runtime: ${FUNCTION_RUNTIME} used by: ${FUNCTION_NAME}" "$regx" "${FUNCTION_NAME}"
else
textPass "$regx: Supported runtime: ${runtime} used by: ${fname}" "$regx" "${fname}"
textPass "$regx: Supported runtime: ${FUNCTION_RUNTIME} used by: ${FUNCTION_NAME}" "$regx" "${FUNCTION_NAME}"
fi
done
done<<<"${LIST_OF_FUNCTIONS}"
else
textInfo "$regx: No Lambda functions found" "$regx"
fi
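
The -w flag is what keeps the bare nodejs entry in OBSOLETE from matching newer runtimes: the match must be delimited by word boundaries, and there is no boundary between "nodejs" and "16". A quick sanity check:

grep -wEq 'nodejs' <<< "nodejs" && echo "flagged"          # legacy runtime matches
grep -wEq 'nodejs' <<< "nodejs16.x" || echo "not flagged"  # no word boundary inside "nodejs16"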


@@ -41,7 +41,7 @@ extra763(){
BUCKET_VERSIONING_ENABLED=$("${AWSCLI}" s3api get-bucket-versioning --bucket "${bucket}" ${PROFILE_OPT} --region "${BUCKET_REGION}" --query Status --output text 2>&1)
if grep -q 'AccessDenied' <<< "${BUCKET_VERSIONING_ENABLED}"; then
textInfo "${BUCKET_REGION}: Access Denied Trying to Get Bucket Versioning for $bucket"
textInfo "${BUCKET_REGION}: Access Denied Trying to Get Bucket Versioning for $bucket" "${REGION}"
continue
fi
if grep -q "^Enabled$" <<< "${BUCKET_VERSIONING_ENABLED}"; then


@@ -24,17 +24,36 @@ CHECK_DOC_extra767='https://docs.aws.amazon.com/AmazonCloudFront/latest/Develope
CHECK_CAF_EPIC_extra767='Data Protection'
extra767(){
LIST_OF_DISTRIBUTIONS=$($AWSCLI cloudfront list-distributions --query 'DistributionList.Items[*].Id' $PROFILE_OPT --output text|grep -v ^None)
LIST_OF_DISTRIBUTIONS=$("${AWSCLI}" cloudfront list-distributions --query 'DistributionList.Items[*].Id' ${PROFILE_OPT} --output text | grep -v ^None)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${LIST_OF_DISTRIBUTIONS}"; then
textInfo "${REGION}: Access Denied Trying to list CloudFront distributions" "${REGION}"
exit
fi
if [[ $LIST_OF_DISTRIBUTIONS ]];then
for dist in $LIST_OF_DISTRIBUTIONS; do
CHECK_FLE=$($AWSCLI cloudfront get-distribution --id $dist --query Distribution.DistributionConfig.DefaultCacheBehavior.FieldLevelEncryptionId $PROFILE_OPT --output text)
if [[ $CHECK_FLE ]]; then
textPass "CloudFront distribution $dist has Field Level Encryption enabled" "$regx" "$dist"
for distribution in $LIST_OF_DISTRIBUTIONS; do
CHECK_ALLOWED_METHODS=$("${AWSCLI}" cloudfront get-distribution --id "${distribution}" --query Distribution.DistributionConfig.DefaultCacheBehavior.AllowedMethods.Items ${PROFILE_OPT} --output text)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${CHECK_ALLOWED_METHODS}"; then
textInfo "${REGION}: Access Denied Trying to get CloudFront distributions" "${REGION}"
continue
fi
if [[ "$CHECK_ALLOWED_METHODS" =~ "POST" ]]; then
CHECK_FIELD_LEVEL_ENCRYPTION=$("${AWSCLI}" cloudfront get-distribution --id "${distribution}" --query Distribution.DistributionConfig.DefaultCacheBehavior.FieldLevelEncryptionId ${PROFILE_OPT} --output text)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${CHECK_FIELD_LEVEL_ENCRYPTION}"; then
textInfo "${REGION}: Access Denied Trying to get CloudFront distributions" "${REGION}"
continue
fi
if [[ $CHECK_FIELD_LEVEL_ENCRYPTION ]]; then
textPass "CloudFront distribution ${distribution} has Field Level Encryption enabled" "${REGION}" "${distribution}"
else
textFail "CloudFront distribution ${distribution} has Field Level Encryption disabled!" "${REGION}" "${distribution}"
fi
else
textFail "CloudFront distribution $dist has Field Level Encryption disabled!" "$regx" "$dist"
textPass "CloudFront distribution ${distribution} does not support POST requests"
fi
done
else
textInfo "No CloudFront distributions found" "$regx"
textPass "No CloudFront distributions found" "${REGION}"
fi
}


@@ -30,11 +30,11 @@ extra77(){
for regx in $REGIONS; do
LIST_ECR_REPOS=$($AWSCLI ecr describe-repositories $PROFILE_OPT --region $regx --query "repositories[*].[repositoryName]" --output text 2>&1)
if [[ $(echo "$LIST_ECR_REPOS" | grep AccessDenied) ]]; then
textInfo "$regx: Access Denied Trying to describe ECR repositories" "$regx" "$repo"
textInfo "$regx: Access Denied Trying to describe ECR repositories" "$regx"
continue
fi
if [[ $(echo "$LIST_ECR_REPOS" | grep SubscriptionRequiredException) ]]; then
textInfo "$regx: Subscription Required Exception trying to describe ECR repositories" "$regx" "$repo"
textInfo "$regx: Subscription Required Exception trying to describe ECR repositories" "$regx"
continue
fi
if [[ ! -z "$LIST_ECR_REPOS" ]]; then
@@ -55,14 +55,14 @@ extra77(){
# check if the policy has Principal as *
CHECK_ECR_REPO_ALLUSERS_POLICY=$(cat $TEMP_POLICY_FILE | jq '.Statement[]|select(.Effect=="Allow" and (((.Principal|type == "object") and .Principal.AWS == "*") or ((.Principal|type == "string") and .Principal == "*")))')
if [[ $CHECK_ECR_REPO_ALLUSERS_POLICY ]]; then
textFail "$regx: $repo policy \"may\" allow Anonymous users to perform actions (Principal: \"*\")" "$regx"
textFail "$regx: $repo policy \"may\" allow Anonymous users to perform actions (Principal: \"*\")" "$regx" "$repo"
else
textPass "$regx: $repo is not open" "$regx" "$repo"
fi
rm -f $TEMP_POLICY_FILE
done
else
textInfo "$regx: No ECR repositories found" "$regx" "$repo"
textInfo "$regx: No ECR repositories found" "$regx"
fi
done
}
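
The jq filter above covers both shapes a wildcard principal can take; run against a hypothetical policy statement it selects the offending statement:

POLICY='{"Statement":[{"Effect":"Allow","Principal":"*","Action":"ecr:*"}]}'
# Prints the statement because Principal is the string "*"; an object form
# like {"AWS":"*"} would be caught by the other branch of the select.
jq '.Statement[]|select(.Effect=="Allow" and (((.Principal|type == "object") and .Principal.AWS == "*") or ((.Principal|type == "string") and .Principal == "*")))' <<< "${POLICY}"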


@@ -33,8 +33,8 @@ extra771(){
for bucket in ${LIST_OF_BUCKETS};do
# Recover Bucket region
BUCKET_REGION=$("${AWSCLI}" ${PROFILE_OPT} s3api get-bucket-location --bucket "${bucket}" --query LocationConstraint --output text)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${BUCKET_POLICY_STATEMENTS}"; then
textInfo "${REGION}: Access Denied trying to get bucket policy for ${bucket}" "${REGION}"
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${BUCKET_REGION}"; then
textInfo "${REGION}: Access Denied trying to get bucket location for ${bucket}" "${REGION}"
fi
# If None use default region
if [[ "${BUCKET_REGION}" == "None" ]]; then
@@ -43,11 +43,11 @@ extra771(){
# Recover Bucket policy statements
BUCKET_POLICY_STATEMENTS=$("${AWSCLI}" s3api ${PROFILE_OPT} get-bucket-policy --region "${BUCKET_REGION}" --bucket "${bucket}" --output json --query Policy 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${BUCKET_POLICY_STATEMENTS}"; then
textInfo "${REGION}: Access Denied trying to get bucket policy for ${bucket}" "${REGION}"
textInfo "${BUCKET_REGION}: Access Denied trying to get bucket policy for ${bucket}" "${BUCKET_REGION}"
continue
fi
if grep -q -E 'NoSuchBucketPolicy'<<< "${BUCKET_POLICY_STATEMENTS}"; then
textInfo "${REGION}: Bucket policy does not exist for bucket ${bucket}" "${REGION}"
textInfo "${BUCKET_REGION}: Bucket policy does not exist for bucket ${bucket}" "${BUCKET_REGION}"
else
BUCKET_POLICY_BAD_STATEMENTS=$(jq --compact-output --arg arn "arn:${AWS_PARTITION}:s3:::$bucket" 'fromjson | .Statement[]|select(
.Effect=="Allow" and
@@ -66,9 +66,9 @@ extra771(){
# Make sure the JSON comma character will not break CSV output. Replace "," with the word "[comma]"
BUCKET_POLICY_BAD_STATEMENTS="${BUCKET_POLICY_BAD_STATEMENTS//,/[comma]}"
if [[ "${BUCKET_POLICY_BAD_STATEMENTS}" != "" ]]; then
textFail "${REGION}: Bucket ${bucket} allows public write: ${BUCKET_POLICY_BAD_STATEMENTS}" "${REGION}" "${bucket}"
textFail "${BUCKET_REGION}: Bucket ${bucket} allows public write: ${BUCKET_POLICY_BAD_STATEMENTS}" "${BUCKET_REGION}" "${bucket}"
else
textPass "${REGION}: Bucket ${bucket} has S3 bucket policy which does not allow public write access" "${REGION}" "${bucket}"
textPass "${BUCKET_REGION}: Bucket ${bucket} has S3 bucket policy which does not allow public write access" "${BUCKET_REGION}" "${bucket}"
fi
fi
done


@@ -31,12 +31,12 @@ extra773(){
for dist in $LIST_OF_DISTRIBUTIONS; do
WEB_ACL_ID=$($AWSCLI cloudfront get-distribution $PROFILE_OPT --id "$dist" --query 'Distribution.DistributionConfig.WebACLId' --output text)
if [[ $WEB_ACL_ID ]]; then
textPass "CloudFront distribution $dist is using AWS WAF web ACL $WEB_ACL_ID" "us-east-1" "$dist"
textPass "$REGION: CloudFront distribution $dist is using AWS WAF web ACL $WEB_ACL_ID" "$REGION" "$dist"
else
textFail "CloudFront distribution $dist is not using AWS WAF web ACL" "us-east-1" "$dist"
textFail "$REGION: CloudFront distribution $dist is not using AWS WAF web ACL" "$REGION" "$dist"
fi
done
else
textInfo "No CloudFront distributions found"
textInfo "$REGION: No CloudFront distributions found" "$REGION"
fi
}


@@ -19,7 +19,7 @@ CHECK_SEVERITY_extra778="Medium"
CHECK_ASFF_RESOURCE_TYPE_extra778="AwsEc2SecurityGroup"
CHECK_ALTERNATE_check778="extra778"
CHECK_SERVICENAME_extra778="ec2"
CHECK_RISK_extra778='If Security groups are not properly configured the attack surface is increased. '
CHECK_RISK_extra778='If Security groups are not properly configured the attack surface is increased.'
CHECK_REMEDIATION_extra778='Use a Zero Trust approach. Narrow ingress traffic as much as possible. Consider north-south as well as east-west traffic.'
CHECK_DOC_extra778='https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html'
CHECK_CAF_EPIC_extra778='Infrastructure Security'
@@ -54,6 +54,7 @@ extra778(){
continue
fi
PASS="true"
for CIDR_IP in ${CIDR_IP_LIST}; do
if [[ ! ${CIDR_IP} =~ ${RFC1918_REGEX} ]]; then
CIDR=$(echo ${CIDR_IP} | cut -d"/" -f2 | xargs)
@@ -61,11 +62,14 @@ extra778(){
# Edge case "0.0.0.0/0" for RDP and SSH are checked already by check41 and check42
if [[ ${CIDR} -lt ${CIDR_THRESHOLD} && ${CIDR} -gt 0 ]]; then
textFail "${REGION}: ${SECURITY_GROUP} has potential wide-open non-RFC1918 address ${CIDR_IP} in ${DIRECTION} rule" "${REGION}" "${SECURITY_GROUP}"
else
textPass "${REGION}: ${SECURITY_GROUP} has no potential wide-open non-RFC1918 address" "${REGION}" "${SECURITY_GROUP}"
PASS="false"
fi
fi
done
if [[ ${PASS} == "true" ]]
then
textPass "${REGION}: ${SECURITY_GROUP} has no potential ${DIRECTION} rule with a wide-open non-RFC1918 address" "${REGION}" "${SECURITY_GROUP}"
fi
}
for regx in ${REGIONS}; do


@@ -11,40 +11,45 @@
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
CHECK_ID_extra780="7.80"
CHECK_TITLE_extra780="[extra780] Check if Amazon Elasticsearch Service (ES) domains has Amazon Cognito authentication for Kibana enabled"
CHECK_TITLE_extra780="[extra780] Check if Amazon OpenSearch Service domains (formerly known as Elasticsearch or ES) has either Amazon Cognito authentication or SAML authentication for Kibana enabled"
CHECK_SCORED_extra780="NOT_SCORED"
CHECK_CIS_LEVEL_extra780="EXTRA"
CHECK_SEVERITY_extra780="High"
CHECK_ASFF_RESOURCE_TYPE_extra780="AwsElasticsearchDomain"
CHECK_ALTERNATE_check780="extra780"
CHECK_SERVICENAME_extra780="es"
CHECK_RISK_extra780='Amazon Elasticsearch Service supports Amazon Cognito for Kibana authentication. '
CHECK_REMEDIATION_extra780='If you do not configure Amazon Cognito authentication; you can still protect Kibana using an IP-based access policy and a proxy server; HTTP basic authentication; or SAML.'
CHECK_RISK_extra780='Enable Amazon Cognito authentication or SAML authentication for Kibana, supported by Amazon OpenSearch Service (formerly known as Elasticsearch).'
CHECK_REMEDIATION_extra780='If you do not configure Amazon Cognito or SAML authentication; you can still protect Kibana using an IP-based access policy and a proxy server; or HTTP basic authentication.'
CHECK_DOC_extra780='https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-ac.html'
CHECK_CAF_EPIC_extra780='IAM'
extra780(){
for regx in ${REGIONS}; do
LIST_OF_DOMAINS=$("${AWSCLI}" es list-domain-names ${PROFILE_OPT} --region "${regx}" --query 'DomainNames[].DomainName' --output text 2>&1)
if [[ $(echo "${LIST_OF_DOMAINS}" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
if grep -E -q 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${LIST_OF_DOMAINS}"; then
textInfo "${regx}: Access Denied trying to list domain names" "${regx}"
continue
fi
if [[ "${LIST_OF_DOMAINS}" ]]; then
for domain in ${LIST_OF_DOMAINS}; do
CHECK_IF_COGNITO_ENABLED=$("${AWSCLI}" es describe-elasticsearch-domain --domain-name "${domain}" ${PROFILE_OPT} --region "${regx}" --query 'DomainStatus.CognitoOptions.Enabled' --output text 2>&1)
if [[ $(echo "${CHECK_IF_COGNITO_ENABLED}" | grep -E 'AccessDenied|UnauthorizedOperation|AuthorizationError') ]]; then
CHECK_IF_COGNITO_OR_SAML_ENABLED=$("${AWSCLI}" es describe-elasticsearch-domain --domain-name "${domain}" ${PROFILE_OPT} --region "${regx}" --query 'DomainStatus.[CognitoOptions.Enabled,AdvancedSecurityOptions.SAMLOptions.Enabled]' --output text 2>&1)
if grep -E -q 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${CHECK_IF_COGNITO_OR_SAML_ENABLED}"; then
textInfo "${regx}: Access Denied trying to get ES domain ${domain}" "${regx}"
continue
fi
if [[ $(tr '[:upper:]' '[:lower:]' <<< "${CHECK_IF_COGNITO_ENABLED}") == "true" ]]; then
read -r cognito_enabled saml_enabled <<< "$(tr '[:upper:]' '[:lower:]' <<< "${CHECK_IF_COGNITO_OR_SAML_ENABLED}")"
if [[ $cognito_enabled == "true" ]]; then
textPass "${regx}: Amazon ES domain ${domain} has Amazon Cognito authentication for Kibana enabled" "${regx}" "${domain}"
else
textFail "${regx}: Amazon ES domain ${domain} does not have Amazon Cognito authentication for Kibana enabled" "${regx}" "${domain}"
if [[ $saml_enabled == "true" ]]; then
textPass "${regx}: Amazon ES domain ${domain} has SAML authentication for Kibana enabled" "${regx}" "${domain}"
else
textFail "${regx}: Amazon ES domain ${domain} has neither Amazon Cognito authentication nor SAML authentication for Kibana enabled" "${regx}" "${domain}"
fi
fi
done
else
textInfo "${regx}: No Amazon ES domain found" "${regx}"
textPass "${regx}: No Amazon ES domain found" "${regx}"
fi
done
}
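
The read -r line splits the two-column text output into the two flags; a minimal reproduction with canned output in place of the aws call:

# Simulates '--query DomainStatus.[CognitoOptions.Enabled,AdvancedSecurityOptions.SAMLOptions.Enabled] --output text'
read -r cognito_enabled saml_enabled <<< "$(tr '[:upper:]' '[:lower:]' <<< "True False")"
echo "cognito=${cognito_enabled} saml=${saml_enabled}"   # -> cognito=true saml=false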


@@ -32,36 +32,41 @@ extra789(){
${PROFILE_OPT} \
--query "ServiceDetails[?Owner=='${ACCOUNT_NUM}'].ServiceId" \
--region ${regx} \
--output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${ENDPOINT_SERVICES_IDS}"; then
textInfo "$regx: Access Denied trying to describe VPC endpoint services" "$regx"
continue
fi
if [[ ${ENDPOINT_SERVICES_IDS} ]]
then
for ENDPOINT_SERVICE_ID in ${ENDPOINT_SERVICES_IDS}; do
ENDPOINT_CONNECTION_LIST=$(${AWSCLI} ec2 describe-vpc-endpoint-connections \
${PROFILE_OPT} \
--query "VpcEndpointConnections[?VpcEndpointState=='available'].VpcEndpointOwner" \
--region ${regx} \
--output text | xargs
)
for ENDPOINT_CONNECTION in ${ENDPOINT_CONNECTION_LIST}; do
for ACCOUNT_ID in ${TRUSTED_ACCOUNT_IDS}; do
if [[ "${ACCOUNT_ID}" == "${ENDPOINT_CONNECTION}" ]]; then
textPass "${regx}: Found trusted account in VPC endpoint service connection ${ENDPOINT_CONNECTION}" "${regx}" "${ENDPOINT_CONNECTION}"
# Algorithm:
# Remove all trusted ACCOUNT_IDs from ENDPOINT_CONNECTION_LIST.
# As a result, the ENDPOINT_CONNECTION_LIST finally contains only unknown/untrusted account ids.
ENDPOINT_CONNECTION_LIST=("${ENDPOINT_CONNECTION_LIST[@]/$ENDPOINT_CONNECTION}") # remove hit from allowlist
fi
done
done
for UNTRUSTED_CONNECTION in ${ENDPOINT_CONNECTION_LIST}; do
textFail "${regx}: Found untrusted account in VPC endpoint service connection ${UNTRUSTED_CONNECTION}" "${regx}" "${UNTRUSTED_CONNECTION}"
done
done
else
textPass "${regx}: There are no VPC endpoints" "${regx}"
fi
done
}
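
The "Algorithm" comment above is the core of the trust-boundary logic: strip every trusted account id out of the connection list, and whatever survives is untrusted. A minimal sketch of that subtraction idiom with made-up account ids (note it is plain pattern substitution, as in the check itself, so hits are removed by string match rather than by list position):

CONNECTIONS="111111111111 222222222222 333333333333"
TRUSTED_ACCOUNT_IDS="222222222222"
for ACCOUNT_ID in ${TRUSTED_ACCOUNT_IDS}; do
  # delete every occurrence of the trusted id from the list
  CONNECTIONS="${CONNECTIONS//${ACCOUNT_ID}/}"
done
for UNTRUSTED in ${CONNECTIONS}; do
  echo "untrusted: ${UNTRUSTED}"   # prints 111111111111 and 333333333333
done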


@@ -32,39 +32,43 @@ extra790(){
${PROFILE_OPT} \
--query "ServiceDetails[?Owner=='${ACCOUNT_NUM}'].ServiceId" \
--region ${regx} \
--output text 2>&1)
if grep -q -E 'AccessDenied|UnauthorizedOperation|AuthorizationError' <<< "${ENDPOINT_SERVICES_IDS}"; then
textInfo "$regx: Access Denied trying to describe VPC endpoint services" "$regx"
continue
fi
if [[ ${ENDPOINT_SERVICES_IDS} ]]
then
for ENDPOINT_SERVICE_ID in ${ENDPOINT_SERVICES_IDS}; do
ENDPOINT_PERMISSIONS_LIST=$(${AWSCLI} ec2 describe-vpc-endpoint-service-permissions \
${PROFILE_OPT} \
--service-id ${ENDPOINT_SERVICE_ID} \
--query "AllowedPrincipals[*].Principal" \
--region ${regx} \
--output text | xargs
)
for ENDPOINT_PERMISSION in ${ENDPOINT_PERMISSIONS_LIST}; do
# Take only account id from ENDPOINT_PERMISSION: arn:aws:iam::965406151242:root
ENDPOINT_PERMISSION_ACCOUNT_ID=$(echo ${ENDPOINT_PERMISSION} | cut -d':' -f5 | xargs)
for ACCOUNT_ID in ${TRUSTED_ACCOUNT_IDS}; do
if [[ "${ACCOUNT_ID}" == "${ENDPOINT_PERMISSION_ACCOUNT_ID}" ]]; then
textPass "${regx}: Found trusted account in VPC endpoint service permission ${ENDPOINT_PERMISSION}" "${regx}"
# Algorithm:
# Remove all trusted ACCOUNT_IDs from ENDPOINT_PERMISSIONS_LIST.
# As a result, the ENDPOINT_PERMISSIONS_LIST finally contains only unknown/untrusted account ids.
ENDPOINT_PERMISSIONS_LIST=("${ENDPOINT_PERMISSIONS_LIST[@]/$ENDPOINT_PERMISSION}")
fi
done
done
for UNTRUSTED_PERMISSION in ${ENDPOINT_PERMISSIONS_LIST}; do
textFail "${regx}: Found untrusted account in VPC endpoint service permission ${UNTRUSTED_PERMISSION}" "${regx}" "${UNTRUSTED_PERMISSION}"
done
done
else
textPass "${regx}: There is no VPC endpoints services" "${regx}"
fi
done
}
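
The account id extraction above relies on ARN structure: allowed principals come back as ARNs whose colon-separated fifth field is the account number. A standalone sketch with a made-up ARN:

PRINCIPAL="arn:aws:iam::123456789012:root"
# fields: 1=arn 2=aws 3=iam 4=(empty region) 5=account id 6=resource
ACCOUNT_ID=$(echo "${PRINCIPAL}" | cut -d':' -f5 | xargs)
echo "${ACCOUNT_ID}"   # -> 123456789012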


@@ -29,12 +29,12 @@ extra791(){
for dist in $LIST_OF_DISTRIBUTIONS; do
CHECK_ORIGINSSLPROTOCOL_STATUS=$($AWSCLI cloudfront get-distribution --id $dist --query Distribution.DistributionConfig.Origins.Items[].CustomOriginConfig.OriginSslProtocols.Items $PROFILE_OPT --output text)
if [[ $CHECK_ORIGINSSLPROTOCOL_STATUS == *"SSLv2"* ]] || [[ $CHECK_ORIGINSSLPROTOCOL_STATUS == *"SSLv3"* ]]; then
textFail "CloudFront distribution $dist is using a deprecated SSL protocol!" "$regx" "$dist"
textFail "$REGION: CloudFront distribution $dist is using a deprecated SSL protocol!" "$REGION" "$dist"
else
textPass "CloudFront distribution $dist is not using a deprecated SSL protocol" "$regx" "$dist"
textPass "$REGION: CloudFront distribution $dist is not using a deprecated SSL protocol" "$REGION" "$dist"
fi
done
else
textInfo "No CloudFront distributions found" "$regx"
textInfo "$REGION: No CloudFront distributions found" "$REGION"
fi
}
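
For a one-off look at a single distribution, the same probe can be run by hand (a sketch, assuming a hypothetical distribution id E2EXAMPLE123; any SSLv2 or SSLv3 entry in the origin's protocol list is what the check flags):

PROTOCOLS=$(aws cloudfront get-distribution --id E2EXAMPLE123 \
  --query 'Distribution.DistributionConfig.Origins.Items[].CustomOriginConfig.OriginSslProtocols.Items' \
  --output text)
case "${PROTOCOLS}" in
  *SSLv2*|*SSLv3*) echo "deprecated SSL protocol in use" ;;
  *) echo "no deprecated SSL protocols" ;;
esac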


@@ -14,7 +14,6 @@
GROUP_ID[10]='hipaa'
GROUP_NUMBER[10]='10.0'
GROUP_TITLE[10]='HIPAA Compliance - ONLY AS REFERENCE - [hipaa] ****************'
GROUP_RUN_BY_DEFAULT[10]='N' # run it when execute_all is called
GROUP_CHECKS[10]='check12,check113,check23,check26,check27,check29,extra718,extra725,extra72,extra75,extra717,extra729,extra734,check38,extra73,extra740,extra735,check112,check13,check15,check16,check17,check18,check19,check21,check24,check28,check31,check310,check311,check312,check313,check314,check32,check33,check34,check35,check36,check37,check39,extra792'
# Resources:


@@ -14,9 +14,7 @@
GROUP_ID[11]='secrets'
GROUP_NUMBER[11]='11.0'
GROUP_TITLE[11]='Look for keys secrets or passwords around resources - [secrets]'
GROUP_RUN_BY_DEFAULT[11]='N' # but it runs when execute_all is called (default)
GROUP_CHECKS[11]='extra741,extra742,extra759,extra760,extra768,extra775,extra7141'
# requires https://github.com/Yelp/detect-secrets
# `pip install detect-secrets`
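# For reference, a minimal detect-secrets run looks like the following (a sketch
# with a hypothetical target path; how this group wires the tool in may differ):
#   pip install detect-secrets
#   detect-secrets scan /path/to/code > findings.json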


@@ -14,6 +14,4 @@
GROUP_ID[12]='apigateway'
GROUP_NUMBER[12]='12.0'
GROUP_TITLE[12]='API Gateway security checks - [apigateway] ********************'
GROUP_RUN_BY_DEFAULT[12]='N' # run it when execute_all is called
GROUP_CHECKS[12]='extra722,extra743,extra744,extra745,extra746'


@@ -14,5 +14,4 @@
GROUP_ID[13]='rds'
GROUP_NUMBER[13]='13.0'
GROUP_TITLE[13]='RDS security checks - [rds] ***********************************'
GROUP_RUN_BY_DEFAULT[13]='N' # run it when execute_all is called
GROUP_CHECKS[13]='extra78,extra723,extra735,extra739,extra747,extra7113,extra7131,extra7132,extra7133'


@@ -14,5 +14,4 @@
GROUP_ID[14]='elasticsearch'
GROUP_NUMBER[14]='14.0'
GROUP_TITLE[14]='Elasticsearch related security checks - [elasticsearch] *******'
GROUP_RUN_BY_DEFAULT[14]='N' # run it when execute_all is called
GROUP_CHECKS[14]='extra715,extra716,extra779,extra780,extra781,extra782,extra783,extra784,extra785,extra787,extra788,extra7101'


@@ -14,7 +14,6 @@
GROUP_ID[15]='pci'
GROUP_NUMBER[15]='15.0'
GROUP_TITLE[15]='PCI-DSS v3.2.1 Readiness - ONLY AS REFERENCE - [pci] **********'
GROUP_RUN_BY_DEFAULT[15]='N' # run it when execute_all is called
GROUP_CHECKS[15]='check11,check12,check13,check14,check15,check16,check17,check18,check19,check110,check112,check113,check114,check116,check21,check23,check25,check26,check27,check28,check29,check314,check36,check38,check43,extra711,extra713,extra717,extra718,extra72,extra729,extra735,extra738,extra740,extra744,extra748,extra75,extra750,extra751,extra753,extra754,extra755,extra773,extra78,extra780,extra781,extra782,extra783,extra784,extra785,extra787,extra788,extra798'
# Resources:


@@ -14,7 +14,6 @@
GROUP_ID[16]='trustboundaries'
GROUP_NUMBER[16]='16.0'
GROUP_TITLE[16]='Find cross-account trust boundaries - [trustboundaries] *******'
GROUP_RUN_BY_DEFAULT[16]='N' # run it when execute_all is called
GROUP_CHECKS[16]='extra789,extra790'
# Single account environment: No action required. The AWS account number will be automatically added by the checks.
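# Multi-account environment (hypothetical ids, shown only as a sketch; the
# variable name is the one consumed by the extra789/extra790 checks above):
#   TRUSTED_ACCOUNT_IDS="111111111111 222222222222"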


@@ -14,7 +14,6 @@
GROUP_ID[17]='internet-exposed'
GROUP_NUMBER[17]='17.0'
GROUP_TITLE[17]='Find resources exposed to the internet - [internet-exposed] ***'
GROUP_RUN_BY_DEFAULT[17]='N' # run it when execute_all is called
GROUP_CHECKS[17]='check41,check42,check45,check46,extra72,extra73,extra74,extra76,extra77,extra78,extra79,extra710,extra711,extra716,extra723,extra727,extra731,extra736,extra738,extra745,extra748,extra749,extra750,extra751,extra752,extra753,extra754,extra755,extra770,extra771,extra778,extra779,extra787,extra788,extra795,extra796,extra798,extra7102,extra7134,extra7135,extra7136,extra7137,extra7138'
# 4.1 [check41] Ensure no security groups allow ingress from 0.0.0.0/0 or ::/0 to port 22 (Scored) [group4, cislevel1, cislevel2]


@@ -14,7 +14,6 @@
GROUP_ID[18]='iso27001'
GROUP_NUMBER[18]='18.0'
GROUP_TITLE[18]='ISO 27001:2013 Readiness - ONLY AS REFERENCE - [iso27001] *****'
GROUP_RUN_BY_DEFAULT[18]='N' # run it when execute_all is called
GROUP_CHECKS[18]='check11,check110,check111,check112,check113,check114,check115,check116,check119,check12,check122,check13,check14,check15,check16,check17,check18,check19,check21,check22,check23,check24,check25,check26,check27,check28,check29,check31,check310,check311,check312,check313,check314,check32,check33,check34,check35,check36,check37,check38,check39,check41,check42,check43,check44,extra71,extra710,extra7100,extra711,extra7113,extra7123,extra7125,extra7126,extra7128,extra7129,extra713,extra714,extra7130,extra718,extra719,extra72,extra720,extra721,extra722,extra723,extra724,extra725,extra726,extra727,extra728,extra729,extra73,extra731,extra735,extra739,extra74,extra741,extra747,extra748,extra75,extra757,extra758,extra759,extra76,extra760,extra761,extra762,extra763,extra764,extra765,extra767,extra768,extra769,extra77,extra771,extra772,extra774,extra776,extra777,extra778,extra78,extra789,extra79,extra790,extra792,extra793,extra794,extra795,extra796,extra798'
# # Category Objective ID Objective Name Prowler check ID Check Summary
@@ -103,7 +102,7 @@ GROUP_CHECKS[18]='check11,check110,check111,check112,check113,check114,check115,
# 83 A.12 Operations Security A.12.4 Logging and Monitoring check39 Ensure a log metric filter and alarm exist for AWS Config configuration changes
# 84 A.12 Operations Security A.12.4 Logging and Monitoring check39 Check if CloudFront distributions have logging enabled
# 85 A.12 Operations Security A.12.4 Logging and Monitoring extra719 Check if Route53 public hosted zones are logging queries to CloudWatch Logs
# 86 A.12 Operations Security A.12.4 Logging and Monitoring extra720 Check if Lambda functions invoke API operations are being recorded by CloudTrail
# 87 A.12 Operations Security A.12.4 Logging and Monitoring extra722 Check if API Gateway has logging enabled
# 88 A.12 Operations Security A.12.4 Logging and Monitoring check38 Ensure a log metric filter and alarm exist for S3 bucket policy changes
# 89 A.12 Operations Security A.12.4 Logging and Monitoring check37 Ensure a log metric filter and alarm exist for disabling or scheduled deletion of customer created CMKs
@@ -136,7 +135,7 @@ GROUP_CHECKS[18]='check11,check110,check111,check112,check113,check114,check115,
#116 A.12 Operations Security A.12.6 Technical Vulnerability Management check23 Ensure the S3 bucket CloudTrail logs to is not publicly accessible
#117 A.12 Operations Security A.12.6 Technical Vulnerability Management extra713 Check if GuardDuty is enabled
#118 A.12 Operations Security A.12.6 Technical Vulnerability Management extra726 Check Trusted Advisor for errors and warnings
#119 A.12 Operations Security A.12.6 Technical Vulnerability Management extra776 Check if ECR image scan found vulnerabilities in the newest image version
#120 A.13 Communications Security A.13.1 Network Security Management check43 Ensure the default security group of every VPC restricts all traffic
#121 A.13 Communications Security A.13.1 Network Security Management check42 Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389
#122 A.13 Communications Security A.13.1 Network Security Management check41 Ensure no security groups allow ingress from 0.0.0.0/0 to port 22


@@ -11,5 +11,4 @@
GROUP_ID[19]='eks-cis'
GROUP_NUMBER[19]='19.0'
GROUP_TITLE[19]='CIS EKS Benchmark - [eks-cis] *********************************'
GROUP_RUN_BY_DEFAULT[19]='N' # run it when execute_all is called
GROUP_CHECKS[19]='extra765,extra794,extra795,extra796,extra797'


@@ -11,5 +11,4 @@
GROUP_ID[1]='group1'
GROUP_NUMBER[1]='1.0'
GROUP_TITLE[1]='Identity and Access Management - CIS only - [group1] ***********'
GROUP_RUN_BY_DEFAULT[1]='Y' # run it when execute_all is called
GROUP_CHECKS[1]='check11,check12,check13,check14,check15,check16,check17,check18,check19,check110,check111,check112,check113,check114,check115,check116,check117,check118,check119,check120,check121,check122'


@@ -14,7 +14,6 @@
GROUP_ID[20]='ffiec'
GROUP_NUMBER[20]='20.0'
GROUP_TITLE[20]='FFIEC Cybersecurity Readiness - ONLY AS REFERENCE - [ffiec] ***'
GROUP_RUN_BY_DEFAULT[20]='N' # run it when execute_all is called
GROUP_CHECKS[20]='check11,check12,check13,check14,check16,check18,check19,check21,check23,check25,check29,check31,check32,check33,check34,check35,check36,check37,check38,check39,check41,check42,check43,check110,check112,check113,check116,check310,check311,check312,check313,check314,extra72,extra76,extra78,extra711,extra723,extra729,extra731,extra734,extra735,extra763,extra792'
# References:


@@ -14,7 +14,6 @@
GROUP_ID[21]='soc2'
GROUP_NUMBER[21]='21.0'
GROUP_TITLE[21]='SOC2 Readiness - ONLY AS REFERENCE - [soc2] *******************'
GROUP_RUN_BY_DEFAULT[21]='N' # run it when execute_all is called
GROUP_CHECKS[21]='check110,check111,check113,check12,check122,check13,check15,check16,check17,check18,check19,check21,check31,check310,check32,check33,check34,check35,check36,check37,check38,check39,check41,check42,check43,extra711,extra72,extra723,extra729,extra731,extra734,extra735,extra739,extra76,extra78,extra792'
# References:


@@ -14,6 +14,4 @@
GROUP_ID[22]='sagemaker'
GROUP_NUMBER[22]='22.0'
GROUP_TITLE[22]='Amazon SageMaker related security checks - [sagemaker] ********'
GROUP_RUN_BY_DEFAULT[22]='N' # run it when execute_all is called
GROUP_CHECKS[22]='extra7103,extra7104,extra7111,extra7112,extra7105,extra7106,extra7107,extra7108,extra7109,extra7110'
