* feat(check): iam-policy-allows-privilege-escalation
* feat(metadata): Enrich check metadata
Co-authored-by: Toni de la Fuente <toni@blyx.com>
* test(pre-commit): Include security checks
* test(pre-commit): Include dependencies
* test(aws-provider): First unit tests
* test(arn-parsing): Include first tests
* chore(providers): Remove old comments
* chore(csv): first version csv output
* chore(pytest): added pytest dependency
* chore(outputs): organizations demo
* chore(compliance): Added new dataclass for each compliance framework
* fix(test org values): deleted test values in orgs instantiation
* fix(csv): formatted to match output format
* fix(csv output): Reformulation of check report and minor changes
* fix(minor issues): Fix various issues coming from PR comments
* fix(csv): Renamed csv output data model
* fix(output dir): create default if not present
* fix(typo): remove s
* fix(oldcode)
* fix(typo)
* fix(output): Only send to csv when -M is passed
Co-authored-by: sergargar <sergio@verica.io>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
* fix(audit info): Common data structure for current audit
* fix(iam): iam session audit fixed
* feat(aws_session): Include else block
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
* feat(checks): Select checks to run
* feat(checks): Include tests
* feat(checks): Exclude checks with -e
* fix(checks): Include missing path
* fix(checks): Include comments
* chore(assuming role): assume role logic and exceptions demo
* chore(exceptions): Exception handling
* fix(get_caller_identity): Deleted duplicate get_caller_identity and add info entries
* chore(creds renewal): Added support for credential renewal
* chore(assume options): Added condition for -I/-T options
* fix(typo/comments): Deleted f in logger config and comments
* chore(session_duration): limits for -T option
* fix(log messages): Changed -A/-R log messages
* fix(critical error): Errors in input options are critical
* fix(ClientError): IAM service ClientError exception support
* feat(actions): Upload Prowler latest to dockerhub
* feat(upload-container): Action to Public Registries
* feat(upload-container): Include env secrets
* feat(actions): Include Docker linters
* feat(linters): include pre-commit
* fix(names)
* feat(add_prowler_pro_banner): include Prowler Pro banner in README
Context
Include Prowler Pro banner in README.md
Description
Add Prowler Pro banner in README.md to give visibility to the Enterprise version of Prowler.
License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
* Update README.md
* check empty array in SECURITYGROUPS object
The logic only checks whether the object is null; it should check whether the array inside the object is empty.
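A minimal sketch of the corrected condition (commands and names illustrative, not the check's exact code):
```bash
# An object like {"SecurityGroups": []} is non-null, so a null test passes
# incorrectly; test the array's length instead.
SECURITYGROUPS=$(aws ec2 describe-security-groups --output json)
if [[ $(echo "$SECURITYGROUPS" | jq '.SecurityGroups | length') -eq 0 ]]; then
  echo "no security groups found"
fi
```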
* Replace new conditional with the old one
* Update check_extra75
Co-authored-by: Sergio Garcia <38561120+sergargar@users.noreply.github.com>
+ use primary repository rather than fork.
+ use default branch.
+ fixed missing-character typos.
+ remove blank end-of-line spaces.
@singergs: thanks for adding this code and the video.
* new check
* added check to group
* fixed name
* added testpass logic
* Fixed a few issues
* Fixed more issues
* Updated to add extended information
* Added new line at end of file
* Fixed Spelling
* fix(title): Update title name
* refactor(style): Minor changes
Co-authored-by: Andrea Di Fabio <adifabio@amazon.com>
* regions separated by a comma delimiter
* Update README.md
Co-authored-by: Toni de la Fuente <toni@blyx.com>
* Update README.md
Co-authored-by: David Childs <d.childs@elsevier.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Toni de la Fuente <toni@blyx.com>
* Add support for organizations accounts metadata part 1
* Add support for organizations accounts metadata part 2
* Add gathering account metadata from org
* chore(prowler): get accounts metadata
Use assume_role, backing up the current assumed credentials, to assume the management account role, then restore the old credentials afterwards
* fix(orgs metadata): deleted assume_role_orgs
* refactor(organization_metadata)
Reformulate to extract AWS Organizations metadata
* doc(org_metadata): include required -R in usage
* docs(org-metadata): Update README
Co-authored-by: n4ch04 <nachor1992@gmail.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
* fix(extra715): Change lower case from bash variable expansion to tr command
* fix: Change from bash variable expansion to tr command
* Change the way to handle lower case
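For reference, a sketch of the substitution (variable name illustrative):
```bash
# bash-only form, requires bash 4+ (breaks on macOS's default bash 3):
lower="${input,,}"
# portable form using tr:
lower=$(echo "$input" | tr '[:upper:]' '[:lower:]')
```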
* fix: not to flag as finding for account without cloudfront distributions
* fix: output empty for None from cloudfront list-distributions
* fix: extra7167 Advanced Shield and CloudFront bug parsing None output without distributions
Co-authored-by: moo.xin.foo <moo.xin.foo@accenture.com>
* Fix error handling and policy output
* Fix jq filter when Action is an array
Fix jq select condition to handle Action as string or as array.
Add error handling.
On failure, print the policies as just one line.
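A sketch of the string-or-array handling (policy file and action value are illustrative):
```bash
# Match an Action whether the policy statement wrote it as a string or an array.
jq '.Statement[] | select(
      ((.Action | type) == "string" and .Action == "s3:*") or
      ((.Action | type) == "array" and (.Action | index("s3:*")))
    )' policy.json
```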
* Double quote variables to prevent globbing and word splitting
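The quoting fix, sketched with an illustrative variable:
```bash
aws s3 ls s3://$BUCKET_NAME    # unquoted: subject to globbing and word splitting
aws s3 ls "s3://$BUCKET_NAME"  # quoted: passed to the command as a single word
```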
* Replace comma character from json with the word comma
* Fix CLI query and add error handling
Check extra781, extra782, extra783, extra784 and extra785
* Fix CLI query, add error handling, combine AWS CLI calls when possible
Checks related to Opensearch/ElasticSearch.
* Fix CLI query, add error handling, combine AWS CLI calls when possible
Checks related to Opensearch/ElasticSearch.
* fix(check41/42): Added tcp protocol filter to query
* Include {} in vars
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
* Fix AccessDenied issue when getting a document
Add a check to handle access denied when getting a document from SSM.
Add the missing action permission to allow ssm:GetDocument.
* Double quote variables to prevent globbing and word splitting
* added check7172 for s3 bucket acls
* Added more errors to error handling and an access check for s3
* Removed extra api call
Co-authored-by: Jeff Maley <jeff.maley@symmetry-systems.com>
Since 2.7.0 this template failed:
```
An error occurred (AccessDeniedException) when calling the GetSubscriptionState operation: User: arn:aws:sts::863046042023:assumed-role/prowler-codebuild-role/AWSCodeBuild-2c3151c9-7c5d-4618-94e5-0234bddce775 is not authorized to perform: shield:GetSubscriptionState on resource: arn:aws:shield::863046042023:subscription/* because no identity-based policy allows the shield:GetSubscriptionState action
INFO! No AWS Shield Advanced subscription found. Skipping check.
7.167 [extra7167] Check if Cloudfront distributions are protected by AWS Shield Advanced - shield [Medium]
```
I aligned it with https://github.com/prowler-cloud/prowler/blob/master/iam/prowler-additions-policy.json#L19 .
* fix(include/outputs): Whitelist logic reformulated to exactly match input
* fix(include/outputs): Changed name of iterative variable that browses whitelisted values
* fix(include/outputs): Deleted leftover echo and put include variables in brackets
* Fix: when prowler exits with a non-zero status, the remainder of the block is not executed
* Fix: do not trigger exit code 3 on failed checks, so that the remainder of the block is executed
* Extra7161 EFS encryption at rest check
* Added check_extra7162 which checks if Log groups have 365 days retention
* fixed code to handle all regions and formatted output
* changed check title, resource type and service name as well as making the code more dynamic
* Extra7161 EFS encryption at rest check
* New check_extra7163 Secrets Manager key rotation enabled
* New check7160 Enabled AutomaticVersionUpgrade on RedShift Cluster
* Update ProwlerRole.yaml to have same permissions as util/org-multi-account/ProwlerRole.yaml
* Fix link to quicksight dashboard
* Install detect-secrets (e.g. for check_extra742)
* Updating check_extra7163 with requested changes
* fix(assumed-role): Check if -T and -A options are set
* docs(Readme): `-T` option is not mandatory
* fix(assume-role): Handle AWS STS CLI errors
* Update group25_FTR
When trying to run group 25 (Amazon FTR related security checks), nothing happens. Looking at the code, there is a misconfiguration in two params: GROUP_RUN_BY_DEFAULT[9] and GROUP_CHECKS[9]. Updating the values to 25 fixed the issue.
* Update README.md
Fix broken link caused by capital letters in the group file name (group25_FTR)
* Fix #938: assume_role being invoked multiple times
* Label 2.7.0-1December2021 for tests
* Fixed an error that appeared when the number of findings was very high.
* Adjusted the batch to only do 50 at a time, as 100 caused capacity issues. Also added a check for an edge case: if the number of updated findings was a multiple of the batch size, it would throw an error for attempting to import 0 findings.
* Added line to delete the temp folder after everything is done.
* New check 7164 Check if Cloudwatch log groups are protected by AWS KMS @maisenhe
* updated CHECK_RISK
* Added checks extra7160,extra7161,extra7162,extra7163 to group Extras
* Added issue templates
* New check 7165 DynamoDB: DAX encrypted at rest @Daniel-Peladeau
* Fix #963: check 792 forces json in ELB queries
* Fix #957: check 763 had the us-east-1 region hardcoded
* Fix #962: check 7147 ALTERNATE NAME
* Fix #940: handle error when functions cannot be listed
* Added new checks 7164 and 7165 to group extras
* Added invalid check or group id to the error message #962
* Fix Broken Link
* Add docker volume example to README.md
* Updated Dockerfile to use amazonlinux container
* Updated Dockerfile with AWS cli v2
* Added upgrade to the RUN
* Added cache purge to Dockerfile
* Backup AWS Credentials before AssumeRole and Restore them before CopyToS3
* exporting the ENV variables
* fixed bracket
* Improved documentation for install process
* fix checks with comma issues
* Added -D option to copy to S3 with the initial AWS credentials
* Cosmetic variable name change
* Added $PROFILE_OPT to CopyToS3 commands
* remove commas
* removed file as it is not needed
* Improved help usage options -h
* Fixed CIS LEVEL on 7163 through 7165
* When performing a restoreInitialAWSCredentials, unset the credentials ENV variables if they were never set
* New check 7166 Elastic IP addresses with associations are protected by AWS Shield Advanced
* New check 7167 Cloudfront distributions are protected by AWS Shield Advanced
* New check 7168 Route53 hosted zones are protected by AWS Shield Advanced
* New check 7169 Global accelerators are protected by AWS Shield Advanced
* New check 7170 Application load balancers are protected by AWS Shield Advanced
* New check 7171 Classic load balancers are protected by AWS Shield Advanced
* Include example for global resources
* Add AWS Shield Advanced protection checks corrections
* Added Shield actions GetSubscriptionState and DescribeProtection
* docs(templates): Improve bug template with more info (#982)
* Removed echoes after role chaining fix
* Changed Route53 checks7152 and 7153 to INFO when no domains found
* Changed Route53 checks 7152 and 7153 title to clarify
* Added passed security groups in output to check 778
* Added passed security groups and updated title to check 777
* Added FAIL as error handling when SCP prevents queries to regions
* Label version 2.7.0-6January2022
* Updated .dockerignore with .github/
* Fix: issue #758 and #984
* Fix: issue #741 CloudFront and real-time logs
* Fix issue #971: set all as INFO instead of FAIL when there is no access to the resource
* Fix: issue #986
* Add additional action permissions for Glue and Shield Advanced checks @lazize
* Add extra shield action permission
Allows the shield:GetSubscriptionState action
* Add permission actions
Make sure all files where permission actions are necessary will have the same actions
* Fix: Credential chaining from environment variables @lazize #996f
If a profile is not defined, restore the original credentials from environment variables,
if they exist, before assume-role
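A minimal sketch of the backup/restore pattern (function and variable names are assumptions, not Prowler's exact code):
```bash
backup_initial_aws_credentials() {
  INITIAL_ACCESS_KEY="${AWS_ACCESS_KEY_ID:-}"
  INITIAL_SECRET_KEY="${AWS_SECRET_ACCESS_KEY:-}"
  INITIAL_SESSION_TOKEN="${AWS_SESSION_TOKEN:-}"
}
restore_initial_aws_credentials() {
  if [[ -n "$INITIAL_ACCESS_KEY" ]]; then
    export AWS_ACCESS_KEY_ID="$INITIAL_ACCESS_KEY"
    export AWS_SECRET_ACCESS_KEY="$INITIAL_SECRET_KEY"
    export AWS_SESSION_TOKEN="$INITIAL_SESSION_TOKEN"
  else
    # they were never set, so leave the environment clean
    unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
  fi
}
```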
* Label version 2.7.0-24January2022
Co-authored-by: Lee Myers <ichilegend@gmail.com>
Co-authored-by: Chinedu Obiakara <obiakac@amazon.com>
Co-authored-by: Daniel Peladeau <dcpeladeau@gmail.com>
Co-authored-by: Jonathan Lozano <jonloza@amazon.com>
Co-authored-by: Daniel Lorch <dlorch@gmail.com>
Co-authored-by: Pepe Fagoaga <jose.fagoaga@smartprotection.com>
Co-authored-by: Israel <6672089+lopmoris@users.noreply.github.com>
Co-authored-by: root <halfluke@gmail.com>
Co-authored-by: nikirby <nikirby@amazon.com>
Co-authored-by: Joel Maisenhelder <maisenhe@gmail.com>
Co-authored-by: RT <35173068+rtcms@users.noreply.github.com>
Co-authored-by: Andrea Di Fabio <39841198+sectoramen@users.noreply.github.com>
Co-authored-by: Joseph de CLERCK <clerckj@amazon.fr>
Co-authored-by: Michael Dickinson <45626543+michael-dickinson-sainsburys@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Leonardo Azize Martins <lazize@users.noreply.github.com>
Key id is in position 6 in aws cli version 2.2.5, but in position 4 in aws cli 1.x
Use --query to select only the data necessary and output it in a consistent format
Instead of looking for a fixed error string, it uses error codes from the aws cli
The previous condition was not catching this error message:
An error occurred (ExpiredToken) when calling the GetCallerIdentity operation: The security token included in the request is expired
Also forced the output of the command to json; some tests were failing because the output was sent as text
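For example (a sketch; the exact command differs per check):
```bash
# Positional text parsing breaks across CLI versions, so select the field
# explicitly and force a consistent output format:
aws iam list-access-keys --user-name "$user" \
  --query 'AccessKeyMetadata[].AccessKeyId' --output json
```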
* feat(aws-securitygroups): include new control to test ingress from 0.0.0.0/0 or ::/0 to FTP ports 20 or 21
* feat(aws-securitygroups): include extra control 7134 in extra group
* feat(aws-securitygroups): include new control to test ingress from 0.0.0.0/0 or ::/0 to Kafka port 9092
* feat(aws-securitygroups): include new control to test ingress from 0.0.0.0/0 or ::/0 to Telnet port 23
* feat(aws-securitygroups): include new control to test ingress from 0.0.0.0/0 or ::/0 to Microsoft SQL Server ports 1433 or 1434
* feat(aws-securitygroups): include extra controls 7135, 7136 and 7137 in extra and internet-exposed groups
The `ELBSecurityPolicy-FS-1-2-Res-2020-10` policy is the most
restrictive TLS v1.2 only SSL/TLS security policy available, and is a
subset of the already accepted `ELBSecurityPolicy-FS-1-2-Res-2019-08`
policy - this commit adds `ELBSecurityPolicy-FS-1-2-Res-2020-10` to
the list of acceptable "secure" security policies.
`ELBSecurityPolicy-FS-1-2-Res-2020-10` has a very limited set of
ciphers, is TLS v1.2 only and supports Forward Secrecy.
Current SSL Labs tests gives it an "A" rating for another source of
confirmation.
As per this bug report:
https://github.com/toniblyx/prowler/issues/693
Add detection for FreeBSD releases, which should be similar to Darwin
in that they use GNU coreutils for date and base64.
Following the merge of #651, prowler now calls the GetFindings API when using Security Hub integration - this action needs to be added to the required policy
Custom checks in the folder are not being sourced: `./prowler -c extra800 -x custom` results in an empty EXTERNAL_CHECKS_PATH variable due to a missing colon.
The fix was tested on both OSX and the toniblyx/prowler:latest Docker image.
Force the batch-import-findings AWS CLI call to be directed at the region the currently reporting resource is located in, as Security Hub enforces this requirement
When checking that Security Hub is enabled, check for all regions that are in scope, e.g. all regions, unless '-f <region>' is used
Fixes #618
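A sketch of the region-pinned call (file name illustrative):
```bash
# Security Hub only accepts findings for resources in its own region,
# so direct the import at the resource's region:
aws securityhub batch-import-findings \
  --region "$resource_region" \
  --findings file://finding.json
```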
Write output files (CSV, JSON, etc.) to an `output` directory that is relative to prowler itself, no matter where prowler is invoked from.
Simplify Dockerfile by specifying a WORKDIR
Replace ADD command with the more recommended COPY command
Update README to cover how to run in Docker and access saved reports
Add a .dockerignore file to ignore .git and output directories
This partially addresses #570 - previously, within Docker, Prowler was attempting to write
reports to the root `/` directory in the container, which it did not have permission to do.
Instead, reports are now written to a path relative to Prowler
Continue to show (unwhitelisted) failed checks as failures in JUnit output, but rather than exclude failing whitelisted checks from JUnit, mark them as skipped
Fixes #590
Create `include/check_creds_last_used` and move all logic for checking last usages of passwords and access keys there
Modify check13 and extra774 to call new function, specifying time-range of last 90 days and last 30 days respectively
Modify messages in check14 and check121 so that all mentions of 'access keys' are consistent
Fixes #496
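The shape of the refactor, sketched (signatures are assumptions):
```bash
# include/check_creds_last_used holds the shared logic; callers pass the window.
check13()  { check_creds_last_used 90; }   # credentials unused for 90 days
extra774() { check_creds_last_used 30; }   # stricter 30-day window
```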
Change `-l` flag to print a unique list of every single check (assuming none are orphaned outside of all groups)
Allow `-g <group_id>` to be specified in combination with `-l`, to only print checks that are referenced by the specified group
When listing all checks with `-l` only, print out all groups that reference each check
Fixes: #545
Rearrange output functions so they support outputting text alongside other formats, if specified
Add a convenience function for checking if JUnit output is enabled
Move monochrome setting into loop so it better supports multiple formats
Update README
If the -J flag is passed, generate JUnit XML reports for each check, in-line with how Java tools generate JUnit reports.
Check section numbers equate to 'root packages', checks are second-level packages, each check equates to a testsuite (mirroring Java where each test class is a testsuite) and each pass/fail of a check equates to a testcase
Time the execution of each check and include this in the report
Include properties (Prowler version, check level etc.) in-line with standard JUnit files
XML escape all strings for safety
Detect if a user has GNU coreutils installed on Mac OS X, but not as their default, switching to using gdate for date commands if so, as it has more features, including getting dates in milliseconds
Add prowler-output, junit-reports and VSCode files to .gitignore
Update README to include JUnit info, address markdownlint warnings
Remove unused arguments to jq in generateJsonAsffOutput
Fixes #537
Remove the second entry in any comma-separated check IDs from each check, formatting
the check ID with leading zeros in `include/outputs` if the `-n` flag is active
Replace the use of `sort -u` to remove duplicate checks (which has the side-effect of reordering checks alphabetically when one or more are excluded) with awk, which preserves the check order
Adjust indentation and formatting to be more consistent with the rest of the file
Fixes #492
According to the benchmark, only users with a console password should be considered for this check,
therefore filter out any users who do not have a console password
Fixes #513
As some users may have installed GNU coreutils on Mac OS X, e.g. `brew install coreutils`, it's possible that
the `date` command uses the GNU version, instead of the standard BSD version.
- Detect if GNU coreutils is installed on Mac and if it is, use the GNU variants of date functions
- Reduce some of the duplication in the file, which resolves a bug where the cygwin version of `how_many_days_from_today()`
had the operands switched around, leading to a positive result instead of negative
- Add test_tcp_connectivity function for cygwin (uses the GNU variant)
Fixes #534
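A sketch of the detection logic (the real implementation may differ):
```bash
if date --version >/dev/null 2>&1; then
  DATE_CMD="date"    # GNU date is already the default
elif command -v gdate >/dev/null 2>&1; then
  DATE_CMD="gdate"   # GNU coreutils installed on macOS, but not the default
else
  DATE_CMD="date"    # BSD date
fi
"$DATE_CMD" +%s      # use $DATE_CMD for all date calls from here on
```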
A user who has never logged into the console, or has not logged in since Oct 2014, will present as 'no_information' in the
'password_last_used' column of the credential report. Handle this scenario and output a failed message if it has been
more than MAX_DAYS days since the user was created, or an info message if it is less than MAX_DAYS
Fixes #501
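Sketch of the handling (helper name hypothetical):
```bash
if [[ "$password_last_used" == "no_information" ]]; then
  days_since_creation=$(days_since "$user_creation_date")  # hypothetical helper
  if (( days_since_creation > MAX_DAYS )); then
    echo "FAIL! $user never logged in and was created more than $MAX_DAYS days ago"
  else
    echo "INFO! $user never logged in; created less than $MAX_DAYS days ago"
  fi
fi
```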
- Move Security Hub related code to a dedicated include/securityhub_integration file
- Check that Security Hub is enabled in the target region before beginning checks when -S is specified
- Add error handling to the batch-import-findings call
- Add CHECK_ASFF_TYPE variables to all CIS checks to override the default
- Add support for CHECK_ASFF_RESOURCE_TYPE variables which override the default 'AwsAccount' value for the resource a finding relates to.
- Add CHECK_ASFF_RESOURCE_TYPE variables to all checks where there is a suitable value in the schema
- Remove json-asff output for info messages as they are not appropriate for possible submission to Security Hub
- Update the README to cover Security Hub integration
- Add an IAM policy JSON document that provides the necessary BatchImportFindings permission for Security Hub
- Remove trailing whitespace and periods in pass/fail messages to be consistent with the majority of messages, to prevent future tidy-up from changing the finding IDs
Buckets that log to one or more trails are logged as `PASS!` for each trail they are associated with.
Buckets that aren't associated with any trails are logged as `FAIL!` once.
```
...
PASS! : S3 bucket bucket-one has Object-level logging enabled in trails: arn:aws:cloudtrail:eu-west-2:123456789012:trail/central-trail
PASS! : S3 bucket bucket-two has Object-level logging enabled in trails: arn:aws:cloudtrail:eu-west-2:9876543210989:trail/trail-two
PASS! : S3 bucket bucket-two has Object-level logging enabled in trails: arn:aws:cloudtrail:eu-west-2:123456789012:trail/central-trail
PASS! : S3 bucket bucket-three has Object-level logging enabled in trails: arn:aws:cloudtrail:eu-west-2:123456789012:trail/central-trail
...
```
This change should also address #387
json-asff mode outputs JSON, similar to the standard 'json' mode with one check per line, but in AWS Security Finding Format - used by AWS Security Hub
Currently uses a generic Type, Resources and ProductArn value, but sets the Id to a unique value that includes the details of the message, in order to separate out checks that run against multiple resources and output one result per resource per check. This ensures that findings can be updated, should the resource move in or out of compliance
securityhub mode generates the ASFF JSON and then passes it to an 'aws securityhub batch-import-findings' call, once per resource per check. Output to the screen is similar to the standard mode, but prints whether or not the finding was submitted successfully
Fixes #524
- Add ISLOGGING_STATUS, INCLUDEMANAGEMENTEVENTS_STATUS, READWRITETYPE_STATUS to check
- Remove ` --no-include-shadow-trails ` from CLI
2.1 Ensure CloudTrail is enabled in all regions (Scored):
Via CLI:
1. `aws cloudtrail describe-trails`
Ensure `IsMultiRegionTrail` is set to true
2. `aws cloudtrail get-trail-status --name <trailname shown in describe-trails>`
Ensure `IsLogging` is set to true
3. `aws cloudtrail get-event-selectors --trail-name <trailname shown in describe-trails>`
Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to
`true` and `ReadWriteType` set to `All`
Associate the VPC Flow Log with the VPC it is for to ensure an accurate check. If there are multiple VPCs in a region and only some have VPC flow logs, the current check will pass all VPCs, even those without flow logs.
The check 7.60 [extra760] Find secrets in Lambda functions code (Not Scored) (Not part of CIS benchmark)
errors by default with the following:
An error occurred (AccessDeniedException) when calling the GetFunction operation: User: user/prowler is not authorized to perform: lambda:GetFunction on resource: arn:aws:lambda:eu-west-2:347708466071:function:ApiSimpleDelayDDMonitor
Adding this permission to the policy allows the check to run successfully.
Add missing bracket to prevent:
```
jq: error: syntax error, unexpected INVALID_CHARACTER, expecting $end (Unix shell quoting issues?) at <top-level>, line 1:
.Statement[]|select(((.Principal|type == "object") and .Principal.AWS == "*") or ((.Principal|type == "string") and
.Principal == "*")) and .Action=="s3:*" and (.Resource|type == "array") and (.Resource|map({(.):0})[]|has($arn)) and
(.Resource|map({(.):0})[]|has($arn+"/*")) and .Condition.Bool."aws:SecureTransport" == "false")
```
(line breaks added to reduce commit width)
Misc prowler fixes
Add GetEbsEncryptionByDefault wherever Prowler policies are mentioned
Update Extra718 check to be aware of access denied responses
Update Extra726 check to be more verbose for non-failure items
Update Extra73 check to be aware of access denied responses
Update Extra734 check to be aware of access denied responses and parse policies with jq for better accuracy
Update Extra742 check for verbiage
Update Extra756 check for verbiage and parameter order
Update Extra761 check for failure scenarios (requires most recent awscli and addition to Prowler IAM policy)
Added Extra763 check to verify that object versioning is enabled on S3 buckets
Added Extra764 check to verify that S3 buckets enforce a secure transport policy
If run in a terminal, jq prints JSON with color.
I suggest color should either be disabled always or via some other flag (more complicated).
jq flag: -M monochrome (don't colorize JSON)
Added new option "-E", which executes all tests except a comma-separated list of specified checks (i.e. check21,check31). Any invalid check name is discarded, and if only one argument is passed and it is invalid, Prowler will execute all checks.
To save space, the option builds the list of total checks minus the provided list, overwrites CHECK_ID with the final list, and the program continues as if the user had entered the "-c" option with that final list of checks.
Added support for option "-c" to specify one or multiple specific checks to be performed. To specify multiple tests include them using a comma delimiter (i.e. check21,check22).
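For example:
```bash
./prowler -c check21,check22   # run only these checks
./prowler -E check21,check31   # run everything except these checks
```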
When executing Prowler using a specific profile (in my case to assume a role), check_extra730 returns:
"An error occurred (AccessDeniedException) when calling the DescribeCertificate operation: User: [ASSUMED_ROLE_ARN] is not authorized to perform: acm:DescribeCertificate on resource: [RESOURCE_ARN]"
This is because line 28 did not contain the following parameters: "$PROFILE_OPT --region $regx".
New checks, documentation and fixes:
Added extra739 ELB logging check and fixed typos
Added extra740 EBS snapshots are encrypted, plus HIPAA info
Added info about GDPR and HIPAA
Improved Prowler description
Fixed issue #268
AWS_PROFILE is a default AWSCLI environment variable configuring the profile to use. Prowler should accept it as well and not set the default profile.
More information on AWSCLI environment variables can be found in the docs: https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html
Fixes #249.
The text output of `aws guardduty get-detector` has changed with awscli release 1.16.25, leading to GuardDuty detectors being misreported as suspended.
The previous check didn't accept a lower password expiration time. Updated to accept less than or equal to 90 days. Also edited the printed statement to include the configured value.
According to the AWS documentation for the CloudWatch Logs permissions reference [1], the IAM policy to permit or deny CloudWatch Logs actions uses the `logs:` prefix rather than `cloudwatchlogs:`. This commit updates the policy additions JSON file as well as the README to reflect this change.
I confirmed this having assumed an appropriate role in an AWS account, then executing the AWS CLI command `aws logs describe-log-groups`; with the `cloudwatchlogs:` prefix an AccessDeniedException was returned to the client.
[1] https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html
The CIS benchmarks state that only customer managed CMKs should be checked, so
exclude all AWS managed CMKs, not just the one for ACM.
Also fix up some formatting and dead code.
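A sketch of the exclusion (loop body elided):
```bash
# AWS managed keys report KeyManager == "AWS"; only customer managed
# CMKs are in scope for the benchmark.
for key in $(aws kms list-keys --query 'Keys[].KeyId' --output text); do
  manager=$(aws kms describe-key --key-id "$key" \
    --query 'KeyMetadata.KeyManager' --output text)
  [[ "$manager" == "CUSTOMER" ]] || continue
  # ... rotation check for customer managed keys only
done
```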
Improved check41 and check42 to ensure no inbound rule exists that has:
- port 22 and a source of 0.0.0.0/0
- a port in a range (i.e. 0-1024) and a source of 0.0.0.0/0
- a port value of all and a source of 0.0.0.0/0
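An illustrative query for the first case (SSH open to the world):
```bash
aws ec2 describe-security-groups \
  --filters Name=ip-permission.from-port,Values=22 \
            Name=ip-permission.to-port,Values=22 \
            Name=ip-permission.cidr,Values='0.0.0.0/0' \
  --query 'SecurityGroups[].GroupId' --output text
```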
Updated 'check114' to ensure hardware MFA is enabled for the root account by:
1) Querying for 'SerialNumber' in the Virtual MFA Devices list
2) 'SerialNumber' is an ARN for a Virtual MFA Device and a device number for a Hardware MFA Device, so grep for an ARN containing 'root-account-mfa-device' in the expression
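A sketch of the test:
```bash
# A virtual MFA device on root shows an ARN SerialNumber containing
# 'root-account-mfa-device'; a hardware device has a plain device number.
if aws iam list-virtual-mfa-devices --assignment-status Assigned \
     --query 'VirtualMFADevices[].SerialNumber' --output text \
     | grep -q 'root-account-mfa-device'; then
  echo "FAIL! root account uses a virtual MFA device, not a hardware one"
fi
```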
3. Environment you have, like single account, multi-account, organizations, multi or single subscription, etc.
4. See error
  validations:
    required: true
- type: textarea
  id: expected
  attributes:
    label: Expected behavior
    description: A clear and concise description of what you expected to happen.
  validations:
    required: true
- type: textarea
  id: actual
  attributes:
    label: Actual Result with Screenshots or Logs
    description: If applicable, add screenshots to help explain your problem. Also, you can add logs (anonymize them first!). Here is a command that may help to share a log `prowler <your arguments> --log-level DEBUG --log-file $(date +%F)_debug.log`, then attach the log file here.
  validations:
    required: true
- type: dropdown
  id: type
  attributes:
    label: How did you install Prowler?
    options:
      - Cloning the repository from github.com (git clone)
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name:"CodeQL"
on:
push:
branches:["master",prowler-2, prowler-3.0-dev ]
pull_request:
# The branches below must be a subset of the branches above
branches:["master"]
schedule:
- cron:'00 12 * * *'
jobs:
analyze:
name:Analyze
runs-on:ubuntu-latest
permissions:
actions:read
contents:read
security-events:write
strategy:
fail-fast:false
matrix:
language:['python']
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name:Checkout repository
uses:actions/checkout@v3
# Initializes the CodeQL tools for scanning.
- name:Initialize CodeQL
uses:github/codeql-action/init@v2
with:
languages:${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# Details on CodeQL's query packs refer to : https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
- [Third Party Integrations](#third-party-integrations)
## Description
`Prowler` is an Open Source security tool to perform AWS, GCP and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.
Tool based on AWS-CLI commands for AWS account security assessment and hardening, following guidelines of the [CIS Amazon Web Services Foundations Benchmark 1.1 ](https://benchmarks.cisecurity.org/tools2/amazon/CIS_Amazon_Web_Services_Foundations_Benchmark_v1.1.0.pdf)
It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, FedRAMP, PCI-DSS, GDPR, HIPAA, FFIEC, SOC2, GXP, AWS Well-Architected Framework Security Pillar, AWS Foundational Technical Review (FTR), ENS (Spanish National Security Scheme) and your custom security frameworks.
Prowler has been written in Python using the [AWS SDK (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html#), [Azure SDK](https://azure.github.io/azure-sdk-for-python/) and [GCP API Python Client](https://github.com/googleapis/google-api-python-client/).
## AWS
Since Prowler uses AWS Credentials under the hood, you can follow any authentication method as described [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence).
Make sure you have properly configured your AWS-CLI with a valid Access Key and Region or declare AWS variables properly (or instance profile/role):
```console
aws configure
```
or
```console
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"
```
Those credentials must be associated with a user or role with proper permissions to perform all checks. To make sure, add the following AWS managed policies to the user or role being used:
> Moreover, some read-only additional permissions are needed for several checks, make sure you attach also the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json) to the role you are using.
> If you want Prowler to send findings to [AWS Security Hub](https://aws.amazon.com/security-hub), make sure you also attach the custom policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json).
## Azure
Prowler for Azure supports the following authentication types:
- Service principal authentication by environment variables (Enterprise Application)
- Current az cli credentials stored
- Interactive browser authentication
- Managed identity authentication
### Service Principal authentication
To allow Prowler to assume the service principal identity and start the scan, you need to configure the following environment variables:
```console
export AZURE_CLIENT_ID="XXXXXXXXX"
export AZURE_TENANT_ID="XXXXXXXXX"
export AZURE_CLIENT_SECRET="XXXXXXX"
```
- Make sure your Secret and Access Keys are associated with a user with proper permissions to perform all checks. To make sure, add the SecurityAudit default policy to your user. The policy ARN is
If you try to execute Prowler with the `--sp-env-auth` flag and those variables are empty or not exported, the execution is going to fail.
### AZ CLI / Browser / Managed Identity authentication
```
arn:aws:iam::aws:policy/SecurityAudit
```
> In some cases you may need more list or get permissions in some services, look at the Troubleshooting section for a more comprehensive policy if you find issues with the default SecurityAudit policy.
The other three cases do not need additional configuration, `--az-cli-auth` and `--managed-identity-auth` are automated options, `--browser-auth` needs the user to authenticate using the default browser to start the scan. Also `--browser-auth` needs the tenant id to be specified with `--tenant-id`.
## Usage
### Permissions
1 - Run the prowler.sh command without options (it will use your environment variable credentials if they exist, or the default profile in the ~/.aws/credentials file, and run checks over all regions when needed; the default region is us-east-1):
To use each one, you need to pass the proper flag to the execution. Prowler for Azure handles two types of permission scopes, which are:
```
./prowler
```
- **Azure Active Directory permissions**: Used to retrieve metadata from the identity assumed by Prowler and future AAD checks (not mandatory to have access to execute the tool)
- **Subscription scope permissions**: Required to launch the checks against your resources, mandatory to launch the tool.
#### Azure Active Directory scope
Azure Active Directory (AAD) permissions required by the tool are the following:
- `Directory.Read.All`
- `Policy.Read.All`
#### Subscriptions scope
Regarding the subscription scope, Prowler by default scans all the subscriptions it is able to list, so it is required to add the following RBAC builtin roles per subscription to the entity that is going to be assumed by the tool:
- `Security Reader`
- `Reader`
## Google Cloud Platform
Prowler will follow the same credentials search as [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order):
1. [Credentials file pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable](https://cloud.google.com/docs/authentication/application-default-credentials#GAC)
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
Those credentials must be associated with a user or service account with proper permissions to perform all checks. To make sure, add the `Viewer` role to the member associated with the credentials.
> By default, `prowler` will scan all accessible GCP Projects, use flag `--project-ids` to specify the projects to be scanned.
# 💻 Basic Usage
To run prowler, you will need to specify the provider (e.g. aws or azure):
```console
prowler <provider>
```
2 - For a custom AWS-CLI profile and region, use the following (it will use your custom profile and run checks over all regions when needed):
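For example (illustrative profile and region):
```
./prowler -p custom-profile -f us-west-1
```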
> Running the `prowler` command without options will use your environment variable credentials.
By default, prowler will generate a CSV, a JSON and an HTML report; however, you can generate a JSON-ASFF report (only for AWS Security Hub) with `-M` or `--output-modes`:
```console
prowler <provider> -M csv json json-asff html
```
3 - For a single check, use option -c:
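For example, to run only check 3.10:
```
./prowler -c check310
```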
The HTML report will be located in the `output` directory, like the other files, and it will look like:
You can use `-l`/`--list-checks` or `--list-services` to list all available checks or services within the provider.
```console
prowler <provider> --list-checks
prowler <provider> --list-services
```
Valid check numbers are based on the AWS CIS Benchmark guide, so 1.1 is check11 and 3.10 is check310
For executing specific checks or services you can use options `-c`/`--checks` or `-s`/`--services`:
4 - If you want to save your report for later analysis:
```
./prowler > prowler-report.txt
```
or if you want a colored HTML report do:
```
pip install ansi2html
./prowler | ansi2html -la > report.html
```
or if you want a pipe-delimited report file, do:
```
./prowler -M csv > output.psv
```
```console
prowler aws --checks s3_bucket_public_access
prowler aws --services s3 ec2
```
5 - To perform an assessment based on CIS Profile Definitions you can use level1 or level2 with `-c` flag, more information about this [here, page 8](https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf):
```
./prowler -c level1
```
Also, checks and services can be excluded with options `-e`/`--excluded-checks` or `--excluded-services`:
Several of Prowler's checks have user-configurable variables that can be modified in a common **configuration file**.
This file can be found in the following path:
```
./prowler -h
USAGE:
prowler -p <profile> -r <region> [ -h ]
Options:
-p <profile> specify your AWS profile to use (i.e.: default)
-r <region> specify an AWS region to direct API requests to (i.e.: us-east-1), all regions are checked anyway
-c <checknum> specify a check number or group from the AWS CIS benchmark (i.e.: check11 for check 1.1, check3 for entire section 3 or level1 for CIS Level 1 Profile Definitions)
-f <filterregion> specify an AWS region to run checks against (i.e.: us-west-1)
-m <maxitems> specify the maximum number of items to return for long-running requests (default: 100)
-M <mode> output mode: text (default), mono, csv (separator is ","; data is on stdout; progress on stderr)
-k keep the credential report
-n show check numbers to sort easier (i.e.: 1.01 instead of 1.1)
-l list all available checks only (does not perform any check)
-h this help
```
## Fix:
Check your report and fix the issues following all specific guidelines per check in https://benchmarks.cisecurity.org/tools2/amazon/CIS_Amazon_Web_Services_Foundations_Benchmark_v1.1.0.pdf
To fix it, please renew your token by authenticating again to the AWS API.
> By default, `prowler` will scan all Azure subscriptions.
### Custom IAM Policy
Instead of using the default SecurityAudit policy for the account you use for checks, you may need to create a custom policy with a few more permissions (get and list, not change!). Here is a good example for a "ProwlerPolicyReadOnly":
## Google Cloud Platform
Optionally, you can provide the location of an application credential JSON file with the following argument:
```console
prowler gcp --credentials-file path
```
```
{
"Version": "2012-10-17",
"Statement": [{
"Action": [
"acm:describecertificate",
"acm:listcertificates",
"autoscaling:describe*",
"cloudformation:describestack*",
"cloudformation:getstackpolicy",
"cloudformation:gettemplate",
"cloudformation:liststack*",
"cloudfront:get*",
"cloudfront:list*",
"cloudtrail:describetrails",
"cloudtrail:gettrailstatus",
"cloudtrail:listtags",
"cloudwatch:describe*",
"cloudwatchlogs:describeloggroups",
"cloudwatchlogs:describemetricfilters",
"codecommit:batchgetrepositories",
"codecommit:getbranch",
"codecommit:getobjectidentifier",
"codecommit:getrepository",
"codecommit:list*",
"codedeploy:batch*",
"codedeploy:get*",
"codedeploy:list*",
"config:deliver*",
"config:describe*",
"config:get*",
"datapipeline:describeobjects",
"datapipeline:describepipelines",
"datapipeline:evaluateexpression",
"datapipeline:getpipelinedefinition",
"datapipeline:listpipelines",
"datapipeline:queryobjects",
"datapipeline:validatepipelinedefinition",
"directconnect:describe*",
"dynamodb:listtables",
"ec2:describe*",
"ecs:describe*",
"ecs:list*",
"elasticache:describe*",
"elasticbeanstalk:describe*",
"elasticloadbalancing:describe*",
"elasticmapreduce:describejobflows",
"elasticmapreduce:listclusters",
"es:describeelasticsearchdomainconfig",
"es:listdomainnames",
"firehose:describe*",
"firehose:list*",
"glacier:listvaults",
"iam:generatecredentialreport",
"iam:get*",
"iam:list*",
"kms:describe*",
"kms:get*",
"kms:list*",
"lambda:getpolicy",
"lambda:listfunctions",
"logs:DescribeMetricFilters",
"rds:describe*",
"rds:downloaddblogfileportion",
"rds:listtagsforresource",
"redshift:describe*",
"route53:getchange",
"route53:getcheckeripranges",
"route53:getgeolocation",
"route53:gethealthcheck",
"route53:gethealthcheckcount",
"route53:gethealthchecklastfailurereason",
"route53:gethostedzone",
"route53:gethostedzonecount",
"route53:getreusabledelegationset",
"route53:listgeolocations",
"route53:listhealthchecks",
"route53:listhostedzones",
"route53:listhostedzonesbyname",
"route53:listresourcerecordsets",
"route53:listreusabledelegationsets",
"route53:listtagsforresource",
"route53:listtagsforresources",
"route53domains:getdomaindetail",
"route53domains:getoperationdetail",
"route53domains:listdomains",
"route53domains:listoperations",
"route53domains:listtagsfordomain",
"s3:getbucket*",
"s3:getlifecycleconfiguration",
"s3:getobjectacl",
"s3:getobjectversionacl",
"s3:listallmybuckets",
"sdb:domainmetadata",
"sdb:listdomains",
"ses:getidentitydkimattributes",
"ses:getidentityverificationattributes",
"ses:listidentities",
"ses:listverifiedemailaddresses",
"ses:sendemail",
"sns:gettopicattributes",
"sns:listsubscriptionsbytopic",
"sns:listtopics",
"sqs:getqueueattributes",
"sqs:listqueues",
"tag:getresources",
"tag:gettagkeys"
],
"Effect": "Allow",
"Resource": "*"
}]
}
```
### Incremental IAM Policy
# 📃 License
Alternatively, here is a policy which defines the permissions which are NOT present in the AWS Managed SecurityAudit policy. Attach both this policy and the AWS Managed SecurityAudit policy to the group and you're good to go.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"acm:DescribeCertificate",
"acm:ListCertificates",
"cloudwatchlogs:describeLogGroups",
"cloudwatchlogs:DescribeMetricFilters",
"es:DescribeElasticsearchDomainConfig",
"ses:GetIdentityVerificationAttributes",
"sns:ListSubscriptionsByTopic"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
```
### Bootstrap Script
Quick bash script to set up a "prowler" IAM user and "SecurityAudit" group with the required permissions. To run the script below, you need a user with administrative permissions; set AWS_DEFAULT_PROFILE to use that account.
```
aws iam create-policy --policy-name ProwlerAuditAdditions --policy-document file://$(pwd)/prowler-policy-additions.json
aws iam attach-group-policy --group-name SecurityAudit --policy-arn arn:aws:iam::aws:policy/SecurityAudit
aws iam attach-group-policy --group-name SecurityAudit --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/ProwlerAuditAdditions
aws iam create-user --user-name prowler
aws iam add-user-to-group --user-name prowler --group-name SecurityAudit
aws iam create-access-key --user-name prowler
unset ACCOUNT_ID AWS_DEFAULT_PROFILE
```
The `aws iam create-access-key` command will output the secret access key and the key id; keep these somewhere safe, and add them to ~/.aws/credentials with an appropriate profile name to use them with prowler. This is the only time the secret key will be shown. If you lose it, you will need to generate a replacement.
## Extras
We are adding additional checks to improve the information gathered from each account. These checks are out of the scope of the CIS benchmark for AWS, but we consider them very helpful for getting to know each AWS account's setup and finding issues in it.
At this moment we have 7 extra checks:
- 7.1 (`extra71`) Ensure users with AdministratorAccess policy have MFA tokens enabled (Not Scored) (Not part of CIS benchmark)
- 7.2 (`extra72`) Ensure there are no EBS Snapshots set as Public (Not Scored) (Not part of CIS benchmark)
- 7.3 (`extra73`) Ensure there are no S3 buckets open to the Everyone or Any AWS user (Not Scored) (Not part of CIS benchmark)
- 7.4 (`extra74`) Ensure there are no Security Groups without ingress filtering being used (Not Scored) (Not part of CIS benchmark)
- 7.5 (`extra75`) Ensure there are no Security Groups not being used (Not Scored) (Not part of CIS benchmark)
- 7.6 (`extra76`) Ensure there are no EC2 AMIs set as Public (Not Scored) (Not part of CIS benchmark)
- 7.7 (`extra77`) Ensure there are no ECR repositories set as Public (Not Scored) (Not part of CIS benchmark)
```
./prowler -c extras
```
or to run just one of the checks, to see if you have S3 buckets open:
```
./prowler -c extraNUMBER
```
## Add Custom Checks
In order to add a new check, feel free to create a new extra check in the extras section. To do so, follow these steps:
1. use any existing extra check as a reference
2. add `ID7N` and `TITLE7N`, where N is a new check number in the extras section (7), around line 361 (`# List of checks IDs and Titles`)
3. add your new extra check function name to the `callCheck` function (around line 1817) and below it, inside the extras option (around line 1853)
4. finally, add it to `# List only check tittles` around line 1930
5. save changes and run it as ./prowler -c extraNN
6. send me a pull request! :)
## Third Party Integrations
### Telegram
Javier Pecete has done an awesome job integrating Prowler with Telegram; you have more details here: https://github.com/i4specete/ServerTelegramBot
### Cloud Security Suite
The guys at SecurityFTW have added Prowler to their Cloud Security Suite along with other cool security tools: https://github.com/SecurityFTW/cs-suite
Prowler is licensed as Apache License 2.0 as specified in each file. You may obtain a copy of the License at
As an **AWS Partner**, we have passed the [AWS Foundational Technical Review (FTR)](https://aws.amazon.com/partners/foundational-technical-review/), and we use the following tools and automation to make sure our code is secure and our dependencies are up to date:
- `bandit` for code security review.
- `safety` and `dependabot` for dependencies.
- `hadolint` and `dockle` for our containers security.
- `snyk` in Docker Hub.
- `clair` in Amazon ECR.
- `vulture`, `flake8`, `black` and `pylint` for formatting and best practices.
## Reporting a Vulnerability
If you would like to report a vulnerability or have a security concern regarding Prowler Open Source or the ProwlerPro service, please submit the information by contacting help@prowler.pro.
The information you share with Verica as part of this process is kept confidential within Verica and the Prowler team. We will only share this information with a third party if the vulnerability you report is found to affect a third-party product, in which case we will share this information with the third-party product's author or manufacturer. Otherwise, we will only share this information as permitted by you.
We will review the submitted report, and assign it a tracking number. We will then respond to you, acknowledging receipt of the report, and outline the next steps in the process.
You will receive a non-automated response to your initial contact within 24 hours, confirming receipt of your reported vulnerability.
We will coordinate public notification of any validated vulnerability with you. Where possible, we prefer that our respective public disclosures be posted simultaneously.
Description: Creates a CodeBuild project to audit an AWS account with Prowler Version 2 and stores the html report in a S3 bucket. This will run once at the beginning and on a schedule afterwards. Partial contribution from https://github.com/stevecjones
Parameters:
  ServiceName:
    Description: 'Specifies the service name used within component naming'
    Type: String
    Default: 'prowler'
  LogsRetentionInDays:
    Description: 'Specifies the number of days you want to retain CodeBuild run log events in the specified log group. Junit reports are kept for 30 days, HTML reports in S3 are not deleted'
    Type: Number
    Default: 3
    AllowedValues: [1, 3, 5, 7, 14, 30, 60, 90, 180, 365]
  ProwlerOptions:
    Description: 'Options to pass to the Prowler command, make sure at least -M junit-xml is used for CodeBuild reports. Use -r for the region to send API queries, -f to filter only one region, -M output formats, -c for comma separated checks, for all checks do not use -c or -g, for more options see -h. For a complete assessment use "-M text,junit-xml,html,csv,json", for SecurityHub integration use "-r region -f region -M text,junit-xml,html,csv,json,json-asff -S -q"'
    Type: String
    # Prowler command below runs a set of checks, configure it based on your needs, no options will run all regions, all checks.
    # option -M junit-xml is required in order to get the report in CodeBuild.
  ProwlerScheduler:
    Description: The time when Prowler will run, in cron format. Default is daily at 22:00h or 10PM 'cron(0 22 * * ? *)'; for every 5 hours, 'rate(5 hours)' also works. More info here https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html.
Description: Creates a CodeBuild project to audit an AWS account with Prowler Version 2 and stores the html report in a S3 bucket. This will run once at the beginning and on a schedule afterwards. Partial contribution from https://github.com/stevecjones
Parameters:
  ServiceName:
    Description: 'Specifies the service name used within component naming'
    Type: String
    Default: 'prowler'
  LogsRetentionInDays:
    Description: 'Specifies the number of days you want to retain CodeBuild run log events in the specified log group. Junit reports are kept for 30 days, HTML reports in S3 are not deleted'
    Type: Number
    Default: 3
    AllowedValues: [1, 3, 5, 7, 14, 30, 60, 90, 180, 365]
  ProwlerOptions:
    Description: 'Options to pass to the Prowler command, use -f to filter specific regions, -c for specific checks, -s for specific services, for SecurityHub integration use "-f shub_region -S", for more options see -h. For a complete assessment leave this empty.'
    Type: String
    # Prowler command below runs a set of checks, configure it based on your needs, no options will run all regions, all checks.
    Default: -f eu-west-1 -s s3 iam ec2
  ProwlerScheduler:
    Description: The time when Prowler will run, in cron format. Default is daily at 22:00h or 10PM 'cron(0 22 * * ? *)'; for every 5 hours, 'rate(5 hours)' also works. More info here https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html.
# Example Solution: Serverless Organizational Prowler Deployment with SecurityHub
Deploys [Prowler](https://github.com/prowler-cloud/prowler) with AWS Fargate to assess all Accounts in an AWS Organization on a schedule, and sends the results to Security Hub.
## Context
Originally based on [org-multi-account](https://github.com/prowler-cloud/prowler/tree/master/util/org-multi-account), but changed in the following ways:
- No HTML reports and no S3 buckets
- Findings sent directly to Security Hub using the native integration
- AWS Fargate Task with EventBridge Rule instead of EC2 instance with cronjob
- Based on amazonlinux:2022 to leverage "wait -n" for improved parallelization, as new jobs are launched as running ones finish (see the sketch below)
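A minimal sketch of that pattern (helper name hypothetical; requires a bash with `wait -n`):
```bash
for account in $ACCOUNT_LIST; do
  run_prowler_for "$account" &                            # hypothetical helper
  while (( $(jobs -rp | wc -l) >= PARALLEL_ACCOUNTS )); do
    wait -n                                               # resume when any one job exits
  done
done
wait   # drain the remaining jobs
```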
## Architecture Explanation
The solution is designed to be very simple. Prowler is run via an ECS Task definition that launches a single Fargate container. This Task Definition is executed on a schedule using an EventBridge Rule.
## CloudFormation Templates
### CF-Prowler-IAM.yml
Creates the following IAM Roles:
1. **ECSExecutionRole**: Required for the Task Definition to be able to fetch the container image from ECR and launch the container.
2. **ProwlerTaskRole**: Role that the container itself runs with. It allows it to assume the ProwlerCrossAccountRole.
3. **ECSEventRoleName**: Required for the EventBridge Rule to execute the Task Definition.
### CF-Prowler-ECS.yml
Creates the following resources:
1. **ProwlerECSCluster**: Cluster to be used to execute the Task Definition.
2. **ProwlerECSCloudWatchLogsGroup**: Log group for the Prowler container logs. This is required because it's the only log driver supported by Fargate. The Prowler executable logs are suppressed to prevent unnecessary logs, but error logs are kept for debugging.
3. **ProwlerECSTaskDefinition**: Task Definition for the Fargate container. CPU and memory can be increased as needed. In my experience, 1 CPU per parallel Prowler job is ideal, but further performance testing may be required to find the optimal configuration for a specific organization. Enabling container insights helps a lot with this.
4. **ProwlerSecurityGroup**: Security Group for the container. It only allows TCP 443 outbound, as it is the only port needed for awscli.
5. **ProwlerTaskScheduler**: EventBridge Rule that schedules the execution of the Task Definition. The cron expression is specified as a CloudFormation template parameter.
### CF-Prowler-CrossAccountRole.yml
Creates the cross account IAM Role required for Prowler to run. Deploy it as StackSet in every account in the AWS Organization.
## Docker Container
### Dockerfile
The Dockerfile does the following:
1. Uses amazonlinux:2022 as a base.
2. Downloads required dependencies.
3. Copies the .awsvariables and run-prowler-securityhub.sh files into the root.
4. Downloads the specified version of Prowler as recommended in the release notes.
5. Assigns permissions to a lower privileged user and then drops to it.
6. Runs the script.
### .awsvariables
The .awsvariables file is used to pass required configuration to the script:
1. **ROLE**: The cross account Role to be assumed for the Prowler assessments.
2. **PARALLEL_ACCOUNTS**: The number of accounts to be scanned in parallel.
3. **REGION**: Region where Prowler will run its assessments.
### run-prowler-securityhub.sh
The script gets the list of accounts in AWS Organizations, and then executes Prowler as a job for each account, up to PARALLEL_ACCOUNTS accounts at the same time.
The logs that are generated and sent to Cloudwatch are error logs, and assessment start and finish logs.
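For reference, a sketch of the account enumeration step:
```bash
# List active member accounts in the Organization:
aws organizations list-accounts \
  --query 'Accounts[?Status==`ACTIVE`].Id' --output text
```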
## Instructions
1. Create a Private ECR Repository in the account that will host the Prowler container. The Audit account is recommended, but any account can be used.
2. Configure the .awsvariables file. Note the ROLE name chosen as it will be the CrossAccountRole.
3. Follow the steps from "View Push Commands" to build and upload the container image. You need to have Docker and AWS CLI installed, and use the cli to login to the account first. After upload note the Image URI, as it is required for the CF-Prowler-ECS template.
4. Make sure SecurityHub is enabled in every account in AWS Organizations, and that the SecurityHub integration is enabled as explained in [Prowler - Security Hub Integration](https://github.com/prowler-cloud/prowler#security-hub-integration)
5. Deploy **CF-Prowler-CrossAccountRole.yml** in the Master Account as a single stack. You will have to choose the CrossAccountRole name (ProwlerXA-Role by default) and the ProwlerTaskRoleName (ProwlerECSTask-Role by default)
6. Deploy **CF-Prowler-CrossAccountRole.yml** in every Member Account as a StackSet. Choose the same CrossAccountName and ProwlerTaskRoleName as the previous step.
7. Deploy **CF-Prowler-IAM.yml** in the account that will host the Prowler container (the same from step 1). The following template parameters must be provided:
- **ProwlerCrossAccountRoleName**: Name of the cross account role from CF-Prowler-CrossAccountRole (default ProwlerXA-Role).
- **ECSExecutionRoleName**: Name for the ECS Task Execution Role (default ECSTaskExecution-Role).
- **ProwlerTaskRoleName**: Name for the ECS Prowler Task Role (default ProwlerECSTask-Role).
- **ECSEventRoleName**: Name for the EventBridge Task Role (default ProwlerEvents-Role).
8. Deploy **CF-Prowler-ECS.yml** in the account that will host the Prowler container (the same from step 1). The following template parameters must be provided:
- **ProwlerClusterName**: Name for the ECS Cluster (default ProwlerCluster)
- **ProwlerContainerName**: Name for the Prowler container (default prowler)
- **ProwlerContainerInfo**: ECR URI from step 1.
- **ProwlerECSLogGroupName**: CloudWatch Log Group name (default /aws/ecs/SecurityHub-Prowler)
- **SecurityGroupVPCId**: VPC ID for the VPC where the container will run.
- **ProwlerScheduledSubnet1 and 2**: Subnet IDs from the specified VPC. Choose private subnets if possible.
- **ECSExecutionRole**: ECS Execution Task Role ARN from CF-Prowler-IAM outputs.
- **ProwlerTaskRole**: Prowler ECS Task Role ARN from CF-Prowler-IAM outputs.
- **ECSEventRole**: EventBridge Task Role ARN from CF-Prowler-IAM outputs.
- **CronExpression**: Valid Cron Expression for the scheduling of the Task Definition.
9. Verify that Prowler runs correctly by checking the CloudWatch logs after the scheduled task is executed.
---
## Troubleshooting
If you find permission errors in the CloudWatch logs, the culprit might be a [Service Control Policy (SCP)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html). You will need to exclude the Prowler Cross Account Role from those SCPs.
---
## Upgrading Prowler
The Prowler version is controlled by the PROWLERVER argument in the Dockerfile; change it to the desired version and follow the ECR Push Commands to update the container image.
Old images can be deleted from the ECR Repository after the new image is confirmed to work. They will show as "untagged", as only one image can hold the "latest" tag.
# Example Solution: Organizational Prowler Deployment
Deploys [Prowler](https://github.com/prowler-cloud/prowler) to assess all Accounts in an AWS Organization on a schedule, creates assessment reports in HTML, and stores them in an S3 bucket.
---
## Example Solution Goals
- Using minimal technologies, so the solution can be more easily adopted and further enhanced as needed.
- [Amazon EC2](https://aws.amazon.com/ec2/), to run Prowler
- [Amazon S3](https://aws.amazon.com/s3/), to store Prowler script & reports.
- [AWS CloudFormation](https://aws.amazon.com/cloudformation/), to provision the AWS resources.
- [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html), optional but recommended, to manage the Prowler EC2 instance without having to allow inbound SSH.
- Staying cohesive with Prowler, for scripting, only leveraging:
- Bash Shell
- AWS CLI
- Adhering to the principle of least privilege.
- Supporting an AWS Multi-Account approach
- Runs Prowler against All accounts in the AWS Organization
- ***NOTE: If using this solution, you are responsible for making your own independent assessment of the solution and ensuring it complies with your company security and operational standards.***
---
## Components
1. [ProwlerS3.yaml](ProwlerS3.yaml)
- Creates Private S3 Bucket for Prowler script and reports.
- Enables [Amazon S3 Block Public Access](https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html)
- Enables SSE-S3 with [Amazon S3 Default Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html)
- Versioning Enabled
- Bucket Policy limits API actions to Principals from the same AWS Organization.
1. [ProwlerRole.yaml](ProwlerRole.yaml)
- Creates Cross-Account Role for Prowler to assess accounts in AWS Organization
- Allows Role to be assumed by the Prowler EC2 instance role in the AWS account where Prowler EC2 resides (preferably the Audit/Security account).
- Role has [permissions](https://github.com/prowler-cloud/prowler#custom-iam-policy) needed for Prowler to assess accounts.
- Role has rights to Prowler S3 from Component #1.
1. [ProwlerEC2.yaml](ProwlerEC2.yaml)
- Creates Prowler EC2 instance
- Uses the Latest Amazon Linux 2 AMI
- Uses ```t2.micro``` Instance Type
- Encrypts Root Volume with AWS Managed Key "aws/ebs"
- Uses [cfn-init](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-init.html) for prepping the Prowler EC2
- Installs necessary [packages](https://github.com/prowler-cloud/prowler#requirements-and-installation) for Prowler
- Downloads [run-prowler-reports.sh](src/run-prowler-reports.sh) script from Prowler S3 from Component #1.
- Creates ```/home/ec2-user/.awsvariables```, to store CloudFormation data as variables to be used in script.
- Creates cron job for Prowler to run on a schedule.
- Creates Prowler Security Group
- Denies inbound access. If using ssh to manage Prowler, then update Security Group with pertinent rule.
- Allows outbound 80/443 for updates and Amazon S3 communications.
- Creates Instance Role that is used for Prowler EC2
- Role has permissions for [Systems Manager Agent](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) communications, and [Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html)
- Role has rights to Prowler S3 from Component #1.
- Role has rights to Assume Cross-Account Role from Component #2.
- Script loops through all AWS Accounts in AWS Organization, and by default, Runs Prowler as follows:
- -R: used to specify Cross-Account role for Prowler to assume to run its assessment.
- -A: used to specify AWS Account number for Prowler to run assessment against.
- -g cislevel1: used to specify cislevel1 checks for Prowler to assess
```bash
./prowler/prowler -R "$ROLE" -A "$accountId" -g cislevel1 -M html
```
- NOTE: Script can be modified to run Prowler as desired.
- Script runs Prowler against 1 AWS Account at a time.
- Update PARALLEL_ACCOUNTS variable in script, to specify how many Accounts to assess with Prowler in parallel.
- If running against multiple AWS Accounts in parallel, monitor performance, and upgrade Instance Type as necessary.
```bash
PARALLEL_ACCOUNTS="1"
```
- In summary (sketched in Python right after this list):
- Download latest version of [Prowler](https://github.com/prowler-cloud/prowler)
- Find AWS Master Account
- Lookup All Accounts in AWS Organization
- Run Prowler against All Accounts in AWS Organization
- Save Reports to reports prefix in S3 from Component #1
- Report Names: date+time-accountid-report.html
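The account lookup can be pictured with a short Python sketch (an approximation of what the bash script does with the AWS CLI; the role name is a placeholder):
```python
# Approximation in boto3 terms: list every account in the AWS Organization,
# then run Prowler against each one
import subprocess

import boto3

ROLE = "ProwlerXA-Role"  # placeholder cross-account role name

organizations = boto3.client("organizations")
account_ids = []
for page in organizations.get_paginator("list_accounts").paginate():
    account_ids.extend(account["Id"] for account in page["Accounts"])

for account_id in account_ids:
    # Mirrors: ./prowler/prowler -R "$ROLE" -A "$accountId" -g cislevel1 -M html
    subprocess.run(
        ["./prowler/prowler", "-R", ROLE, "-A", account_id, "-g", "cislevel1", "-M", "html"]
    )
```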
---
## Instructions
1. Deploy [ProwlerS3.yaml](ProwlerS3.yaml) in the Logging Account.
- Could be deployed to any account in the AWS Organization, if desired.
- See [How to get AWS Organization ID](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_details.html#orgs_view_org)
- Take note of the CloudFormation Outputs; they will be needed when deploying the CloudFormation templates below.
1. Upload [run-prowler-reports.sh](src/run-prowler-reports.sh) to the root of the S3 Bucket created in Step #1.
1. Deploy [ProwlerRole.yaml](ProwlerRole.yaml) in the Master Account
- Use CloudFormation Stacks, to deploy to Master Account, as organizational StackSets don't apply to the Master Account.
- Use CloudFormation StackSet, to deploy to all Member Accounts. See [Create Stack Set with Service-Managed Permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-getting-started-create.html#stacksets-orgs-associate-stackset-with-org)
- Take note of the CloudFormation Outputs; they will be needed when deploying the CloudFormation templates below.
1. Deploy [ProwlerEC2.yaml](ProwlerEC2.yaml) in the Audit/Security Account
- Could be deployed to any account in the AWS Organization, if desired.
1. Prowler will run against all Accounts in AWS Organization, per the schedule you provided, and set in a cron job for ```ec2-user```
---
## Post-Setup
### Run Prowler on a Schedule against all Accounts in AWS Organization
1. Prowler will run on the Schedule you provided.
1. Cron job for ```ec2-user``` is managing the schedule.
1. This solution implemented this automatically. Nothing for you to do.
### Ad hoc Run Prowler against all Accounts in AWS Organization
1. Connect to Prowler EC2 Instance
- If using Session Manager, then after login, switch to ```ec2-user```, via: ```sudo bash``` and ```su - ec2-user```
- If using SSH, then login as ```ec2-user```
1. Run Prowler Script
```bash
cd /home/ec2-user
./run-prowler-reports.sh
```
### Ad hoc Run Prowler Interactively
1. Connect to Prowler EC2 Instance
- If using Session Manager, then after login, switch to ```ec2-user```, via: ```sudo bash``` and ```su - ec2-user```
- If using SSH, then login as ```ec2-user```
1. See Cross-Account Role and S3 Bucket being used for Prowler
```bash
cd /home/ec2-user
cat .awsvariables
```
1. Run Prowler interactively. See [Usage Examples](https://github.com/prowler-cloud/prowler#usage)
```bash
cd /home/ec2-user
./prowler/prowler
```
### Upgrading Prowler to Latest Version
1. Connect to Prowler EC2 Instance
- If using Session Manager, then after login, switch to ```ec2-user```, via: ```sudo bash``` and ```su - ec2-user```
- If using SSH, then login as ```ec2-user```
1. Delete the existing version of Prowler and download the latest version of Prowler.
This project applies Prowler in a multi-account environment within AWS Organizations.
CloudWatch triggers CodeBuild on a fixed schedule.
CodeBuild executes a script which clones the latest Prowler from [here](https://github.com/prowler-cloud/prowler) and performs a security assessment on all the accounts in AWS Organizations. The assessment reports are sent to an S3 bucket in the Log Archive Account.
For more information on how to use prowler, see [here](https://github.com/prowler-cloud/prowler#usage).
1. Deploy [ProwlerS3.yaml](templates/ProwlerS3.yaml) in CloudFormation console.
The template creates an S3 bucket for reports and a bucket policy that limits API actions to principals from the same AWS Organization.
- AwsOrgId : AWS Organizations' Organization ID
- S3Prefix : The prefix included in the bucket name
2. **Master Account**
1. Deploy the [ProwlerRole.yaml](templates/ProwlerRole.yaml) stack in CloudFormation to create the resources in the master account itself.
(The template will also be deployed to the other member accounts as a StackSet.)
- ProwlerCodeBuildAccount : Audit Account ID where CodeBuild resides. (preferably Audit/Security account)
- ProwlerCodeBuildRole : Role name to use in CodeBuild service
- ProwlerCrossAccountRole : Role name to assume for Cross account
- ProwlerS3 : The S3 bucket name where reports will be put
1. Create **StackSet** with [ProwlerRole.yaml](templates/ProwlerRole.yaml) to deploy Role into member accounts in AWS Organizations.
- ProwlerCodeBuildAccount : Audit Account ID where CodeBuild resides. (preferably Audit/Security account)
- ProwlerCodeBuildRole : Role name to use in CodeBuild service
- ProwlerCrossAccountRole : Role name to assume for Cross account
- ProwlerS3 : The S3 bucket name where reports will be put
- Permission : Service-managed permissions
- Deploy target : Select *Deploy to organization*, then select *Enabled* and *Delete stacks*
- Specify regions : Region to deploy
3. **Audit Account**
1. Go to S3 console, create a bucket, upload [run-prowler-reports.sh.zip](src/run-prowler-reports.sh.zip)
- bucket name : prowler-util-*[Account ID]*-*[region]*

1. Deploy [ProwlerCodeBuildStack.yaml](templates/ProwlerCodeBuildStack.yaml), which creates a CloudWatch Rule to trigger CodeBuild on a fixed schedule, allowing Prowler to audit multiple accounts.
- AwsOrgId : AWS Organizations' Organization ID
- CodeBuildRole : Role name to use in CodeBuild service
- CodeBuildSourceS3 : Object location of the file uploaded in step 1
| <a name="input_enable_security_hub_prowler_subscription"></a> [enable\_security\_hub\_prowler\_subscription](#input\_enable\_security\_hub\_prowler\_subscription) | Enable a Prowler Subscription. | `bool` | `true` | no |
| <a name="input_prowler_cli_options"></a> [prowler\_cli\_options](#input\_prowler\_cli\_options) | Run Prowler With The Following Command | `string` | `"_q _M json_asff _S _f us_east_1"` | no |
| <a name="input_prowler_schedule"></a> [prowler\_schedule](#input\_prowler\_schedule) | Run Prowler based on cron schedule | `string` | `"cron(0 0 ? * * *)"` | no |
## Outputs
| Name | Description |
|------|-------------|
| <a name="output_account_id"></a> [account\_id](#output\_account\_id) | TODO Move these to outputs file |
# Install Security Baseline Kickstarter with Prowler
## Introduction
The following demonstrates how to quickly install the resources necessary to perform a security baseline assessment using Prowler. The speed comes from the prebuilt Terraform module, which configures all the resources necessary to run Prowler with the findings sent to AWS Security Hub.
## Install
Installing Prowler with Terraform is simple and can be completed in under 1 minute.
- Start AWS CloudShell
- Run the following commands to install Terraform and clone the Prowler git repo
- It is likely that an error related to the Security Hub subscription will be returned. This appears to be Terraform related; you can validate the configuration by navigating to the Security Hub console. Click Integrations and search for Prowler. Take note of the green check where it says *Accepting findings*
That's it! The install is now complete. The resources include a CloudWatch event that will trigger AWS CodeBuild to run daily at 00:00 GMT. If you'd like to run an assessment after the deployment, simply navigate to the CodeBuild console and start the job manually.
| <a name="input_enable_security_hub_prowler_subscription"></a> [enable\_security\_hub\_prowler\_subscription](#input\_enable\_security\_hub\_prowler\_subscription) | Enable a Prowler Subscription. | `bool` | `true` | no |
| <a name="input_prowler_cli_options"></a> [prowler\_cli\_options](#input\_prowler\_cli\_options) | Run Prowler With The Following Command | `string` | `"-q -M json-asff -S -f us-east-1"` | no |
| <a name="input_prowler_schedule"></a> [prowler\_schedule](#input\_prowler\_schedule) | Run Prowler based on cron schedule | `string` | `"cron(0 0 ? ** *)"` | no |
| <a name="input_select_region"></a> [select\_region](#input\_select\_region) | Uses the following AWS Region. | `string` | `"us-east-1"` | no |
Prowler integration with WAZUH using a Python wrapper. Due to the wrapper limitations, this integration can be considered a proof of concept at this time.
## Features
Wazuh, using a wodle, runs Prowler periodically and stores alerts (failed checks) using JSON output, which Wazuh processes and sends to Elasticsearch to be queried from Kibana.
## Requirements
1. Latest AWS-CLI client (`pip install awscli`). If you have it already installed, make sure you are using the latest version, upgrade it: `pip install awscli --upgrade`.
2. Also `jq` is needed (`pip install jq`).
Remember, you must have AWS-CLI credentials already configured in the same instance running Wazuh (run `aws configure` if needed). In this DRAFT I'm using `/root/.aws/credentials` file with [default] as AWS-CLI profile and access keys but you can use assume role configuration as well. For the moment instance profile is not supported in this wrapper.
It may work on previous versions of Wazuh, but this document and integration were tested on Wazuh 3.7.1, so a running Wazuh installation is obviously required.
Edit `/var/ossec/etc/ossec.conf` and add the following wodle configuration. Remember that the `timeout` of 21600 seconds here is 6 hours, just to allow Prowler to run completely in the case of a large account. The recommended interval is 1d:
To check multiple AWS accounts, add a wodle per account.
Now restart `wazuh-manager` and look at `/var/ossec/logs/alerts/alerts.json`; eventually you should see FAIL checks detected by Prowler, and then you can find them using Kibana. Some Kibana search examples:
```
data.integration:"prowler" and data.prowler.status:"Fail"
data.integration:"prowler" AND rule.level >= 5
data.integration:"prowler" AND rule.level : 7 or 9
```
Adjust the level range to the alerts you want to include; as alerts, Elasticsearch only gets fail messages (levels 7 and 9). The rule levels map as follows:
- 1 - pass
- 3 - info
- 5 - error
- 7 - fail: not scored
- 9 - fail: scored
## Troubleshooting
To make sure rules are working fine, run `/var/ossec/bin/ossec-logtest` and copy/paste this sample JSON:
```json
{"prowler":{"Timestamp":"2018-11-29T03:15:50Z","Region":"us-east-1","Profile":"default","Account Number”:”1234567890”,”Control":"[check34] Ensure a log metric filter and alarm exist for IAM policy changes (Scored)","Message":"No CloudWatch group found for CloudTrail events","Status":"Fail","Scored":"Scored","Level":"Level 1","Control ID":"3.4"},"integration":"prowler"}
```
You should see 3 phases going on.
To check if there is any error, you can enable the debug mode of `modulesd` by setting the `wazuh_modules.debug` variable from 0 to 2 in the `/var/ossec/etc/internal_options.conf` file. Restart wazuh-manager and errors should appear in the `/var/ossec/logs/ossec.log` file.
## Thanks
To Jeremy Phillips <jeremy@uranusbytes.com>, who wrote the initial rules file and wrapper and helped me to understand how it works and debug it.
To [Marta Gomez](https://github.com/mgmacias95) and the [Wazuh](https://www.wazuh.com) team for their support to debug this integration and make it work properly. Their work on Wazuh and willingness to help are invaluable.
## License
All CIS based checks in the checks folder are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License.
Any other piece of code is licensed as Apache License 2.0 as specified in each file. You may obtain a copy of the License at
<http://www.apache.org/licenses/LICENSE-2.0>
NOTE: If you are interested in using Prowler for commercial purposes remember that due to the CC4.0 license "The distributors or partners that are interested and using Prowler would need to enroll as CIS SecureSuite Members to incorporate this product, which includes references to CIS resources, in their offering." Information about CIS pricing for vendors here: <https://www.cisecurity.org/cis-securesuite/pricing-and-categories/product-vendor/>
**I'm not related in any way to the CIS organization; I just write and maintain Prowler to help companies around the world make their cloud infrastructure more secure.**
If you want to contact me visit <https://blyx.com/contact>
For technical support or any type of inquiries, you are very welcome to:
- Reach out to community members on the [**Prowler Slack channel**](https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog)
- Open an Issue or a Pull Request in our [**GitHub repository**](https://github.com/prowler-cloud/prowler).
We appreciate all types of feedback and contributions; Prowler would not be the same without our vibrant community! 😃
In each Prowler provider we have a Python object called `audit_info` which is in charge of keeping the credentials, the configuration and the state of each audit, and it is passed to each service during the `__init__`.
This `audit_info` object is shared during the Prowler execution, and for that reason it is important to mock it in each test to isolate tests from each other. See the [testing guide](./unit-testing.md) for more information.
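As a sketch of the idea (the module path follows the AWS provider layout described in this guide; treat the details as an assumption rather than the canonical helper):
```python
# Patch the shared audit_info so the test runs with its own isolated state
from unittest import mock


def test_my_check():
    # MagicMock stands in for a real AWS_Audit_Info built for the test
    mocked_audit_info = mock.MagicMock()
    with mock.patch(
        "prowler.providers.aws.lib.audit_info.audit_info.current_audit_info",
        new=mocked_audit_info,
    ):
        # Import and execute the check or service under test in this context
        ...
```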
Here you can find how to create new checks for Prowler.
**To create a check, a Prowler provider service must already exist; if the service is not present, or the attribute you want to audit is not retrieved by the service, please refer to the [Service](./services.md) documentation.**
## Introduction
To create a new check for a supported Prowler provider, you will need to create a folder with the check name inside the specific service for the selected provider.
We are going to use the `ec2_ami_public` check from the `AWS` provider as an example. The folder will be `prowler/providers/aws/services/ec2/ec2_ami_public` (following the format `prowler/providers/<provider>/services/<service>/<check_name>`), with the check name following the pattern `service_subservice_resource_action`.
Inside that folder, we need to create three files:
- An empty `__init__.py`: to make Python treat this check folder as a package.
- A `check_name.py` with the above format, containing the check's logic. Refer to the [check](./checks.md#check) section.
- A `check_name.metadata.json` containing the check's metadata. Refer to the [check metadata](./checks.md#check-metadata) section.
## Check
Prowler's check structure is very simple: once you follow it there is nothing more to do to include a check in a provider's service, because the load is done dynamically based on the paths.
The following is the code for the `ec2_ami_public` check:
```python title="Check Class"
# At the top of the file we need to import the following:
# - Check class which is in charge of the following:
# - Retrieve the check metadata and expose the `metadata()`
# to return a JSON representation of the metadata,
# read more at Check Metadata Model down below.
# - Enforce that each check requires to have the `execute()` function
from prowler.lib.check.models import Check, Check_Report_AWS
# Then you have to import the provider service client
# read more at the Service documentation.
from prowler.providers.aws.services.ec2.ec2_client import ec2_client
# For each check we need to create a python class called the same as the
# file which inherits from the Check class.
class ec2_ami_public(Check):
"""ec2_ami_public verifies if an EC2 AMI is publicly shared"""
# Then, within the check's class we need to create the "execute(self)"
# function, which is enforce by the "Check" class to implement
# the Check's interface and let Prowler to run this check.
def execute(self):
# Inside the execute(self) function we need to create
# the list of findings initialised to an empty list []
findings = []
# Then, using the service client we need to iterate by the resource we
# want to check, in this case EC2 AMIs stored in the
# "ec2_client.images" object.
for image in ec2_client.images:
# Once iterating for the images, we have to intialise
# the Check_Report_AWS class passing the check's metadata
# using the "metadata" function explained above.
report = Check_Report_AWS(self.metadata())
# For each Prowler check we MUST fill the following
# Check_Report_AWS fields:
# - region
# - resource_id
# - resource_arn
# - resource_tags
# - status
# - status_extended
report.region = image.region
report.resource_id = image.id
report.resource_arn = image.arn
# The resource_tags should be filled if the resource has the ability
# of having tags, please check the service first.
report.resource_tags = image.tags
# Then we need to create the business logic for the check
# which always should be simple because the Prowler service
# must do the heavy lifting and the check should be in charge
# of parsing the data provided
report.status = "PASS"
report.status_extended = f"EC2 AMI {image.id} is not public."
# In this example each "image" object has a boolean attribute
# called "public" to set if the AMI is publicly shared
if image.public:
report.status = "FAIL"
report.status_extended = (
f"EC2 AMI {image.id} is currently public."
)
# Then at the same level as the "report"
# object we need to append it to the findings list.
findings.append(report)
# Last thing to do is to return the findings list to Prowler
return findings
```
### Check Status
All the checks MUST fill the `report.status` and `report.status_extended` with the following criteria:
- Status -- `report.status`
- `PASS` --> If the check is passing against the configured value.
- `FAIL` --> If the check is failing against the configured value.
- `INFO` --> This value cannot be used unless a manual operation is required in order to determine whether the `report.status` is `PASS` or `FAIL`.
- Status Extended -- `report.status_extended`
- MUST end in a dot `.`
- MUST include the service audited with the resource and a brief explanation of the result generated, e.g.: `EC2 AMI ami-0123456789 is not public.`
### Check Region
All the checks MUST fill the `report.region` with the following criteria:
- If the audited resource is regional use the `region` attribute within the resource object.
- If the audited resource is global use the `service_client.region` within the service client object.
### Resource ID, Name and ARN
All the checks MUST fill the `report.resource_id` and `report.resource_arn` with the following criteria:
The following is the Python model for the check's class.
As of August 5th, 2023, the `Check` class can be found [here](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py#L59-L80).
```python
import os
import sys
from abc import ABC, abstractmethod


class Check(ABC, Check_Metadata_Model):
    """Prowler Check"""

    def __init__(self, **data):
        """Check's init function. Calls the CheckMetadataModel init."""
        # Derive the metadata file path from the check's module file
        metadata_file = (
            os.path.abspath(sys.modules[self.__module__].__file__)[:-3]
            + ".metadata.json"
        )
        # Store the metadata to validate it with Pydantic
        data = Check_Metadata_Model.parse_file(metadata_file).dict()
        # Calls the parent's init function
        super().__init__(**data)

    def metadata(self) -> dict:
        """Return the JSON representation of the check's metadata"""
        return self.json()

    @abstractmethod
    def execute(self):
        """Execute the check's logic"""
```
### Using the audit config
Prowler has a [configuration file](../tutorials/configuration_file.md) which is used to pass certain configuration values to the checks, like the following:
Within the service client, in this case the `ec2_client`, there is an object called `audit_config`, which is a Python dictionary containing the values read from the configuration file.
In order to use it, you first have to check if the value is present in the configuration file. If the value is not present, you can create it in the `config.yaml` file and then read it from the check.
> It is mandatory to always use the `dictionary.get(value, default)` syntax to set a default value in the case the configuration value is not present.
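For example, a minimal sketch (the `max_security_group_rules` key here is illustrative; use whatever key your check reads from `config.yaml`):
```python
from prowler.providers.aws.services.ec2.ec2_client import ec2_client

# Read the value from the audit config, falling back to a default of 50
# when the key is not present in config.yaml (illustrative key name)
max_security_group_rules = ec2_client.audit_config.get("max_security_group_rules", 50)
```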
## Check Metadata
Each Prowler check has associated metadata, stored at the same level as the check's folder in a file called `check_name.metadata.json` containing the check's metadata.
> We are going to include comments in this example metadata JSON but they cannot be included because the JSON format does not allow comments.
```json
{
  # Provider holds the Prowler provider to which the check belongs
  "Provider": "aws",
  # CheckID holds the check name
  "CheckID": "ec2_ami_public",
  # CheckTitle holds the title of the check
  "CheckTitle": "Ensure there are no EC2 AMIs set as Public.",
  # CheckType holds Software and Configuration Checks, check more here
  # Description holds the description of the check, for now it is the same as CheckTitle
  "Description": "Ensure there are no EC2 AMIs set as Public.",
  # Risk holds the check's risk if the result is FAIL
  "Risk": "When your AMIs are publicly accessible, they are available in the Community AMIs where everyone with an AWS account can use them to launch EC2 instances. Your AMIs could contain snapshots of your applications (including their data), therefore exposing your snapshots in this manner is not advised.",
  # RelatedUrl holds a URL with more information about the check's purpose
  "RelatedUrl": "",
  # Remediation holds the information to help the practitioner fix the issue in case the check raises a FAIL
  "Remediation": {
    # Code holds different methods to remediate the FAIL finding
    "Code": {
      # CLI holds the command in the provider's native CLI to remediate it
```
The `RelatedUrl` field must be filled with a URL from the provider's official documentation, like https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-intro.html
Also, if it is not present, you can use the Risk and Recommendation texts from the Trend Micro [CloudConformity](https://www.trendmicro.com/cloudoneconformity) guide.
### Python Model
The following is the Python model for the check's metadata. We use Pydantic's [BaseModel](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel) as the parent class.
As of August 5th, 2023, the `Check_Metadata_Model` can be found [here](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py#L34-L56).
We use `mkdocs` to build this Prowler documentation site, so you can easily contribute back with new docs or improvements to existing ones.
1. Install `mkdocs` with your favorite package manager.
2. Inside the `prowler` repository folder run `mkdocs serve` and point your browser to `http://localhost:8000` and you will see live changes to your local copy of this documentation site.
3. Make all needed changes to the docs or add new documents. To do so, just edit existing md files inside `prowler/docs`, and if you are adding a new section or file, please make sure you add it to the `mkdocs.yaml` file in the root folder of the Prowler repo.
4. Once you are done with changes, please send a pull request to us for review and merge. Thank you in advance!
You can extend Prowler in many different ways. In most cases you will want to create your own checks and compliance security frameworks; here is where you can learn how to get started with that. We also cover how to create custom outputs, integrations and more.
## Get the code and install all dependencies
First of all, you need Python 3.9 or higher and pip installed to be able to install all the required dependencies. Once that is satisfied, go ahead and clone the repo:
For isolation and to avoid conflicts with other environments, we recommend using `poetry`:
```
pip install poetry
```
Then install all dependencies including the ones for developers:
```
poetry install
poetry shell
```
## Contributing with your code or fixes to Prowler
This repo has git pre-commit hooks managed via the [pre-commit](https://pre-commit.com/) tool. [Install](https://pre-commit.com/#install) it how ever you like, then in the root of this repo run:
```shell
pre-commit install
```
You should get an output like the following:
```shell
pre-commit installed at .git/hooks/pre-commit
```
Before we merge any of your pull requests we run checks on the code; we use the following tools and automation to make sure the code is secure and dependencies are up to date (these should already have been installed if you ran `poetry install` as above):
- [`bandit`](https://pypi.org/project/bandit/) for code security review.
- [`safety`](https://pypi.org/project/safety/) and [`dependabot`](https://github.com/features/security) for dependencies.
- [`hadolint`](https://github.com/hadolint/hadolint) and [`dockle`](https://github.com/goodwithtech/dockle) for our containers security.
- [`Snyk`](https://docs.snyk.io/integrations/snyk-container-integrations/container-security-with-docker-hub-integration) in Docker Hub.
- [`clair`](https://github.com/quay/clair) in Amazon ECR.
- [`vulture`](https://pypi.org/project/vulture/), [`flake8`](https://pypi.org/project/flake8/), [`black`](https://pypi.org/project/black/) and [`pylint`](https://pypi.org/project/pylint/) for formatting and best practices.
You can see all dependencies in file `pyproject.toml`.
## Pull Request Checklist
If you create or review a PR in https://github.com/prowler-cloud/prowler please follow this checklist:
- [ ] Make sure you've read the Prowler Developer Guide at https://docs.prowler.cloud/en/latest/developer-guide/introduction/
- [ ] Are we following the style guide, hence installed all the linters and formatters? Please check https://docs.prowler.cloud/en/latest/developer-guide/introduction/#contributing-with-your-code-or-fixes-to-prowler
- [ ] Are we increasing/decreasing the test coverage? Please, review if we need to include/modify tests for the new code.
- [ ] Are we modifying outputs? Please review it carefully.
- [ ] Do we need to modify the Prowler documentation to reflect the changes introduced?
- [ ] Are we introducing possible breaking changes? Are we modifying a core feature?
## Want some swag as appreciation for your contribution?
If you are like us and you love swag, we are happy to thank you for your contribution with some laptop stickers or whatever other swag we may have at that time. Please, tell us more details and your pull request link in our [Slack workspace here](https://join.slack.com/t/prowler-workspace/shared_invite/zt-1hix76xsl-2uq222JIXrC7Q8It~9ZNog). You can also reach out to Toni de la Fuente on Twitter [here](https://twitter.com/ToniBlyx), his DMs are open.
If you want to create or contribute your own security frameworks, or add public ones to Prowler, you need to make sure the checks used are available; if not, you have to create them. Then create a compliance file per provider in `prowler/compliance/<provider>/`, name it `<framework>_<version>_<provider>.json`, and follow the format below to create yours.
## Compliance Framework
At a high level, each version of a framework file has the following structure. Each framework needs to be generally identified; one requirement can also be called one control, and one requirement can be linked to multiple Prowler checks:
- `Framework`: string. Distinguishing name of the framework, like CIS.
- `Provider`: string. Provider where the framework applies, such as AWS, Azure, OCI, ...
- `Version`: string. Version of the framework itself, like 1.4 for CIS.
- `Requirements`: array of objects. Includes all requirements or controls with the mapping to Prowler.
- `Requirements_Id`: string. Unique identifier per requirement in the specific framework.
- `Requirements_Description`: string. Description as in the framework.
- `Requirements_Attributes`: array of objects. Includes all needed attributes per requirement, like levels, sections, etc. Whatever helps to create a dedicated report with the result of the findings. Attributes should be taken as closely as possible from the framework's own terminology.
- `Requirements_Checks`: array. Prowler checks that are needed to prove this requirement. It can be one or multiple checks. In case no automation is possible, this can be empty.
```
{
  "Framework": "<framework>-<provider>",
  "Version": "<version>",
  "Requirements": [
    {
      "Id": "<unique-id>",
      "Description": "Requirement full description",
      "Checks": [
        "Here is the Prowler check or checks that will be executed"
      ],
      "Attributes": [
        {
          <Add here your custom attributes.>
        }
      ]
    },
    ...
  ]
}
```
Finally, to have a proper output file for your reports, your framework data model has to be created in `prowler/lib/outputs/models.py` and also the CLI table output in `prowler/lib/outputs/compliance.py`.
Here you can find how to create a new service, or how to complement an existing one, for a Prowler provider.
## Introduction
To create a new service, you will need to create a folder inside the specific provider, i.e. `prowler/providers/<provider>/services/<service>/`.
Inside that folder, you MUST create the following files:
- An empty `__init__.py`: to make Python treat this service folder as a package.
- A `<service>_service.py`, containing all the service's logic and API calls.
- A `<service>_client.py`, containing the initialization of the service's class we have just created so the service's checks can use it.
## Service
Prowler's service structure is the following, and the way to initialise a service is just by importing its service client in a check.
## Service Base Class
All the Prowler provider's services inherit from a base class depending on the provider used.
- [AWS Service Base Class](https://github.com/prowler-cloud/prowler/blob/22f8855ad7dad2e976dabff78611b643e234beaf/prowler/providers/aws/lib/service/service.py)
- [GCP Service Base Class](https://github.com/prowler-cloud/prowler/blob/22f8855ad7dad2e976dabff78611b643e234beaf/prowler/providers/gcp/lib/service/service.py)
- [Azure Service Base Class](https://github.com/prowler-cloud/prowler/blob/22f8855ad7dad2e976dabff78611b643e234beaf/prowler/providers/azure/lib/service/service.py)
Each class is used to initialize the credentials and the API clients to be used in the service. If any threading is used, it must be coded there.
## Service Class
Due to the complexity and differences of each provider's API, we are going to use an example service to guide you through how it can be created.
The following is the `<service>_service.py` file:
```python title="Service Class"
from datetime import datetime
from typing import Optional
# The following is just for the AWS provider
from botocore.client import ClientError
# To use the Pydantic's BaseModel
from pydantic import BaseModel
# Prowler logging library
from prowler.lib.logger import logger
# Prowler resource filter, only for the AWS provider
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
# Provider parent class
from prowler.providers.<provider>.lib.service.service import ServiceParentClass
# Create a class for the Service
################## <Service>
class <Service>(ServiceParentClass):
def __init__(self, audit_info):
# Call Service Parent Class __init__
# We use the __class__.__name__ to get it automatically
# from the Service Class name but you can pass a custom
# string if the provider's API service name is different
super().__init__(__class__.__name__, audit_info)
#Create an empty dictionary of items to be gathered,
For each class object we need to model, we use Pydantic's [BaseModel](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel) to take advantage of the data validation.
```python title="Service Model"
# In each service class we have to create some classes using
# the Pydantic's Basemodel for the resources we want to audit.
class <Item>(BaseModel):
"""<Item> holds a <Service> <Item>"""
arn: str
"""<Items>[].arn"""
name: str
"""<Items>[].name"""
region: str
"""<Items>[].region"""
public: bool
"""<Items>[].public"""
# We can create Optional attributes set to None by default
tags: Optional[list]
"""<Items>[].tags"""
```
### Service Objects
In the service each group of resources should be created as a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries). This is because we are performing lookups all the time and the Python dictionary lookup has [O(1) complexity](https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions).
We MUST set as the dictionary key a unique ID, like the resource Unique ID or ARN.
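A minimal sketch of the pattern (the resource and ARN are hypothetical):
```python
# Store each audited resource in a dictionary keyed by its unique ARN
images = {}
arn = "arn:aws:ec2:eu-west-1:123456789012:image/ami-0123456789abcdef0"
images[arn] = {"name": "my-ami", "public": False}

# Later lookups by that unique key are O(1)
image = images.get(arn)
```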
Each Prowler service requires a service client to use the service in the checks.
The following is the `<service>_client.py` containing the initialization of the service's class we have just created so the service's checks can use them:
```python
from prowler.providers.<provider>.lib.audit_info.audit_info import audit_info
from prowler.providers.<provider>.services.<service>.<service>_service import <Service>
<service>_client = <Service>(audit_info)
```
## Permissions
It is really important to check whether the current Prowler permissions for each provider are enough to implement a new service. If we need to include more, please refer to the following documentation and update it:
When creating tests for some provider's checks we follow these guidelines trying to cover as much test scenarios as possible:
1. Create a test without resources to generate 0 findings, because Prowler will generate 0 findings if a service does not contain the resources the check is looking to audit.
2. Create tests to generate both a `PASS` and a `FAIL` result.
3. Create tests with more than 1 resource to evaluate how the check behaves and if the number of findings is right.
## How to run Prowler tests
To run the Prowler test suite you need to install the testing dependencies, which are already included in the `pyproject.toml` file. If you haven't installed them yet, please read the developer guide introduction [here](./introduction.md#get-the-code-and-install-all-dependencies).
Then in the project's root path execute `pytest -n auto -vvv -s -x` or use the `Makefile` with `make test`.
Other commands to run tests:
- Run tests for a provider: `pytest -n auto -vvv -s -x tests/providers/<provider>/services`
- Run tests for a provider service: `pytest -n auto -vvv -s -x tests/providers/<provider>/services/<service>`
- Run tests for a provider check: `pytest -n auto -vvv -s -x tests/providers/<provider>/services/<service>/<check>`
> Refer to the [pytest documentation](https://docs.pytest.org/en/7.1.x/getting-started.html) for more information.
## AWS
For the AWS provider we have several ways to test a Prowler check, based on the following criteria:
> Note: We use and contribute to the [Moto](https://github.com/getmoto/moto) library which allows us to easily mock out tests based on AWS infrastructure. **It's awesome!**
- AWS API calls covered by [Moto](https://github.com/getmoto/moto):
- Service tests with `@mock_<service>`
- Checks tests with `@mock_<service>`
- AWS API calls not covered by Moto:
- Service test with `mock_make_api_call`
- Checks tests with [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock)
- AWS API calls partially covered by Moto:
- Service test with `@mock_<service>` and `mock_make_api_call`
- Checks tests with `@mock_<service>` and `mock_make_api_call`
In the following section we are going to explain all of the above scenarios with examples. The main difference between those scenarios comes from if the [Moto](https://github.com/getmoto/moto) library covers the AWS API calls made by the service. You can check the covered API calls [here](https://github.com/getmoto/moto/blob/master/IMPLEMENTATION_COVERAGE.md).
An important point for the AWS testing is that in each check we MUST have a unique `audit_info` which is the key object during the AWS execution to isolate the test execution.
Check the [Audit Info](./audit-info.md) section to get more details.
```python
# We need to import the AWS_Audit_Info and the Audit_Metadata
# (import paths as of Prowler v3)
from prowler.providers.aws.lib.audit_info.models import AWS_Audit_Info
from prowler.providers.common.models import Audit_Metadata
```
For the AWS test examples we are going to use the tests for the `iam_password_policy_uppercase` check.
This section is going to be divided based on the API coverage of the [Moto](https://github.com/getmoto/moto) library.
#### API calls covered
If the [Moto](https://github.com/getmoto/moto) library covers the API calls we want to test, we can use the `@mock_<service>` decorator. This will mock out all the API calls made to AWS, keeping the state within the decorated code, in this case the test function.
```python
# We need to import the unittest.mock to allow us to patch some objects
# and not use shared ones between tests, hence isolating the test
from unittest import mock

# Boto3 client and session to call the AWS APIs
from boto3 import client, session

# Moto decorator for the IAM service we want to mock
from moto import mock_iam

# Constants used
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_REGION = "us-east-1"


# We always name the test classes like Test_<check_name>
class Test_iam_password_policy_uppercase:
    # We include the Moto decorator for the service we want to use.
    # You can include more than one if two or more services are
    # involved in the test.
    @mock_iam
    # We name the tests with test_<service>_<check_name>_<test_action>
```
If the service for the check we want to test is not covered by Moto, we have to inject the objects into the service client using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock). As pointed out above, we cannot instantiate the service, since it would make real calls to the AWS APIs.
> The following example uses the IAM GetAccountPasswordPolicy which is covered by Moto but this is only for demonstration purposes.
The following code shows how to use MagicMock to create the service objects.
```python
# We need to import the unittest.mock to allow us to patch some objects
# and not use shared ones between tests, hence isolating the test
from unittest import mock

# Constants used
AWS_ACCOUNT_NUMBER = "123456789012"
AWS_REGION = "us-east-1"


# We always name the test classes like Test_<check_name>
class Test_iam_password_policy_uppercase:
    # We name the tests with test_<service>_<check_name>_<test_action>
    # We set a mocked audit_info for AWS so as not to share the same
    # audit state between checks
    current_audit_info = self.set_mocked_audit_info()
    # In this scenario we have to mock also the IAM service and the
    # iam_client from the check, to enforce that the iam_client used is
    # the one created within this check, because patch != import, and if
    # you execute tests in parallel some objects can already be
    # initialised, hence the check won't be isolated.
    # In this case we don't use the Moto decorator; we use the mocked
    # IAM client for both objects
```
As can be seen in the above scenarios, the check execution should always happen within the context of mocked/patched objects. This way we ensure it reviews only the objects created within the scope of the test.
#### API calls partially covered
If the API calls we want to use in the service are only partially covered by the Moto decorator, we have to create our own mocked API calls to use in combination with it.
To do so, you need to mock the `botocore.client.BaseClient._make_api_call` function, which is the Boto3 function in charge of making the real API call to the AWS APIs, using [mock.patch](https://docs.python.org/3/library/unittest.mock.html#patch):
```python
import boto3
import botocore
from unittest.mock import patch
from moto import mock_iam

# Original botocore _make_api_call function
orig = botocore.client.BaseClient._make_api_call

# Mocked botocore _make_api_call function
def mock_make_api_call(self, operation_name, kwarg):
    # As you can see, the Boto3 client method has the
    # get_account_password_policy snake_case form, but botocore receives
    # the operation_name as GetAccountPasswordPolicy
    if operation_name == "GetAccountPasswordPolicy":
        # Illustrative mocked response
        return {"PasswordPolicy": {"RequireUppercaseCharacters": True}}
    # If we don't want to patch the API call, return the original one
    return orig(self, operation_name, kwarg)

# Check the previous section to see the check test, since it is the same
```
Note that this does not use Moto, to keep it simple, but if you use any `moto`-decorators in addition to the patch, the call to `orig(self, operation_name, kwarg)` will be intercepted by Moto.
> The above code comes from here https://docs.getmoto.org/en/latest/docs/services/patching_other_services.html
#### Mocking more than one service
If the test you are creating belongs to a check that uses more than one provider service, you should mock each of the services used. For example, the check `cloudtrail_logs_s3_bucket_access_logging_enabled` requires the CloudTrail and the S3 clients, hence the service mock part of the test will be as follows:
As you can see in the above code, it is required to mock the AWS audit info and both services used.
#### Patching vs. Importing
This is an important topic within the Prowler check's unit testing. Due to the dynamic nature of the check's load, the process of importing the service client from a check is the following:
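In short, a sketch with the placeholder names used throughout this guide:
```python
# In <check>.py:
from prowler.providers.<provider>.services.<service>.<service>_client import <service>_client

# ... which, when first imported, executes <service>_client.py:
# from prowler.providers.<provider>.services.<service>.<service>_service import <SERVICE>
# <service>_client = <SERVICE>(audit_info)
```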
Due to the above import path, it's not the same to patch the following objects, because if you run a bunch of tests, in parallel or not, some clients can already be instantiated by another check, and hence your test execution would be using another test's service instance:
- `<service>_client` imported at `<check>.py`
- `<service>_client` initialised at `<service>_client.py`
- `<SERVICE>` imported at `<service>_client.py`
A useful read about this topic can be found in the following article: https://stackoverflow.com/questions/8658043/how-to-mock-an-import
#### Different ways to mock the service client
##### Mocking the service client at the service client level
Mocking a service client using the following code ...
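A sketch of that code with this guide's placeholder names (an approximation, not a verbatim snippet):
```python title="Mocking the service_client"
with mock.patch(
    "prowler.providers.<provider>.services.<service>.<service>_client.<service>_client",
    new=<SERVICE>(audit_info),
):
    # Run the check under test within this context
    ...
```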
will cause the service to be initialised twice:
1. When the `<SERVICE>(audit_info)` is mocked out using `mock.patch` to have the object ready for the patching.
2. At the `<service>_client.py`, when we are patching it, since `mock.patch` needs to go to that object and initialise it, hence the `<SERVICE>(audit_info)` will be called again.
Then, when we import the `<service>_client.py` at `<check>.py`, since we are mocking where the object is used, Python will use the mocked one.
In the [next section](./unit-testing.md#mocking-the-service-and-the-service-client-at-the-service-client-level) you will see an improved version to mock objects.
##### Mocking the service and the service client at the service client level
Mocking the service and the service client using the following code ...
```python title="Mocking the service and the service_client"
will cause the service to be initialised only once, when the `<SERVICE>(audit_info)` is mocked out using `mock.patch`.
Then, at the check level, when Python tries to import the client with `from prowler.providers.<provider>.services.<service>.<service>_client import <service>_client`, since it is already mocked out, the execution will continue using the `service_client` without getting into the `<service>_client.py`.
### Services
For testing the AWS services we have to follow the same logic as with the AWS checks: we have to check if the AWS API calls made by the service are covered by Moto, and we have to test the service `__init__` to verify that the information is being correctly retrieved.
The service tests could act as *Integration Tests*, since we test how the service retrieves the information from the provider; but since Moto or the custom mock objects mock those calls, these tests fall into the *Unit Tests* category.
Please refer to the [AWS checks tests](./unit-testing.md#checks) for more information on how to create tests and check the existing services tests [here](https://github.com/prowler-cloud/prowler/tree/master/tests/providers/aws/services).
## GCP
### Checks
For the GCP Provider we don't have any library to mock out the API calls we use. So in this scenario we inject the objects in the service client using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock).
The following code shows how to use MagicMock to create the service objects for a GCP check test.
```python
# We need to import the unittest.mock to allow us to patch some objects
# not to use shared ones between test, hence to isolate the test
from unittest import mock
# GCP Constants
GCP_PROJECT_ID = "123456789012"
# We are going to create a test for the compute_firewall_rdp_access_from_the_internet_allowed check
class Test_compute_firewall_rdp_access_from_the_internet_allowed:
# We name the tests with test_<service>_<check_name>_<test_action>
# In this scenario we have to mock also the Compute service and the compute_client from the check to enforce that the compute_client used is the one created within this check because patch != import, and if you execute tests in parallel some objects can be already initialised hence the check won't be isolated.
# In this case we don't use the Moto decorator, we use the mocked Compute client for both objects
# We import the check within the two mocks so as not to initialise the
# compute_client with shared information from the current_audit_info or
# the Compute service.
from prowler.providers.gcp.services.compute.compute_firewall_rdp_access_from_the_internet_allowed.compute_firewall_rdp_access_from_the_internet_allowed import (
    compute_firewall_rdp_access_from_the_internet_allowed,
)

# Once imported, instantiate the check
check = compute_firewall_rdp_access_from_the_internet_allowed()
# And then, call the execute() function to run the check
# against the Compute client we've set up.
result = check.execute()
# Last but not least, we need to assert all the fields
# from the check's results
assert len(result) == 1
assert result[0].status == "PASS"
assert result[0].status_extended == f"Firewall {firewall.name} does not expose port 3389 (RDP) to the internet."
assert result[0].resource_name == firewall.name
assert result[0].resource_id == firewall.id
assert result[0].project_id == GCP_PROJECT_ID
assert result[0].location == compute_client.region
```
### Services
Coming soon ...
## Azure
### Checks
For the Azure Provider we don't have any library to mock out the API calls we use. So in this scenario we inject the objects in the service client using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.MagicMock).
The following code shows how to use MagicMock to create the service objects for an Azure check test.
```python
# We need to import the unittest.mock to allow us to patch some objects
# not to use shared ones between test, hence to isolate the test
from unittest import mock
from uuid import uuid4
# Azure Constants
AZURE_SUSCRIPTION = str(uuid4())
# We are going to create a test for the Test_defender_ensure_defender_for_arm_is_on check
class Test_defender_ensure_defender_for_arm_is_on:
# We name the tests with test_<service>_<check_name>_<test_action>
# Import the service resource model to create the mocked object
from prowler.providers.azure.services.defender.defender_service import Defender_Pricing
# Create the custom Defender object to be tested
defender_client.pricings = {
AZURE_SUSCRIPTION: {
"Arm": Defender_Pricing(
resource_id=resource_id,
pricing_tier="Not Standard",
free_trial_remaining_time=0,
)
}
}
# In this scenario we have to mock also the Defender service and the defender_client from the check to enforce that the defender_client used is the one created within this check because patch != import, and if you execute tests in parallel some objects can be already initialised hence the check won't be isolated.
# In this case we don't use the Moto decorator, we use the mocked Defender client for both objects
Prowler has been written in Python using the [AWS SDK (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html#), [Azure SDK](https://azure.github.io/azure-sdk-for-python/) and [GCP API Python Client](https://github.com/googleapis/google-api-python-client/).
## AWS
Since Prowler uses AWS Credentials under the hood, you can follow any authentication method as described [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence).
### AWS Authentication
Make sure you have properly configured your AWS CLI with a valid Access Key and Region, or declare the AWS variables properly (or use an instance profile/role):
```console
aws configure
```
or
```console
export AWS_ACCESS_KEY_ID="ASXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXX"
export AWS_SESSION_TOKEN="XXXXXXXXX"
```
Those credentials must be associated with a user or role with proper permissions to perform all checks. To make sure, add the following AWS managed policies to the user or role being used:
> Moreover, some additional read-only permissions are needed for several checks; make sure you also attach the custom policy [prowler-additions-policy.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-additions-policy.json) to the role you are using.
> If you want Prowler to send findings to [AWS Security Hub](https://aws.amazon.com/security-hub), make sure you also attach the custom policy [prowler-security-hub.json](https://github.com/prowler-cloud/prowler/blob/master/permissions/prowler-security-hub.json).
### Multi-Factor Authentication
If your IAM entity enforces MFA you can use `--mfa` and Prowler will ask you to input the following values to get a new session:
- ARN of your MFA device
- TOTP (Time-Based One-Time Password)
## Azure
Prowler for Azure supports the following authentication types:
- Service principal authentication by environment variables (Enterprise Application)
- Current az cli credentials stored
- Interactive browser authentication
- Managed identity authentication
### Service Principal authentication
To allow Prowler to assume the service principal identity and start the scan, you need to configure the following environment variables:
```console
export AZURE_CLIENT_ID="XXXXXXXXX"
export AZURE_TENANT_ID="XXXXXXXXX"
export AZURE_CLIENT_SECRET="XXXXXXX"
```
If you try to execute Prowler with the `--sp-env-auth` flag and those variables are empty or not exported, the execution is going to fail.
### AZ CLI / Browser / Managed Identity authentication
The other three options do not need additional configuration: `--az-cli-auth` and `--managed-identity-auth` are automated options. To use `--browser-auth`, the user needs to authenticate against Azure using the default browser to start the scan; in this case `--tenant-id` is also required.
### Permissions
To use each one you need to pass the proper flag to the execution. Prowler for Azure handles two types of permission scopes, which are:
- **Azure Active Directory permissions**: Used to retrieve metadata from the identity assumed by Prowler and for future AAD checks (access is not mandatory to execute the tool)
- **Subscription scope permissions**: Required to launch the checks against your resources, mandatory to launch the tool.
#### Azure Active Directory scope
Azure Active Directory (AAD) permissions required by the tool are the following:
- `Directory.Read.All`
- `Policy.Read.All`
The best way to assign them is through the Azure web console:

#### Subscriptions scope
Regarding the subscription scope, Prowler by default scans all the subscriptions it is able to list, so you need to add the following RBAC built-in roles per subscription to the identity that the tool will assume:
- `Security Reader`
- `Reader`
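As a sketch, the roles could be assigned with the Azure CLI as follows (the principal and subscription IDs are placeholders):
```console
# Assign both built-in roles to the identity at subscription scope
az role assignment create --role "Security Reader" --assignee <principal-id> --scope /subscriptions/<subscription-id>
az role assignment create --role "Reader" --assignee <principal-id> --scope /subscriptions/<subscription-id>
```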
## Google Cloud
### GCP Authentication
Prowler will follow the same credentials search order as the [Google authentication libraries](https://cloud.google.com/docs/authentication/application-default-credentials#search_order):
1. [Credentials file pointed to by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable](https://cloud.google.com/docs/authentication/application-default-credentials#GAC)
2. [User credentials set up by using the Google Cloud CLI](https://cloud.google.com/docs/authentication/application-default-credentials#personal)
3. [The attached service account, returned by the metadata server](https://cloud.google.com/docs/authentication/application-default-credentials#attached-sa)
Those credentials must be associated with a user or service account that has the proper permissions to run all checks. To make sure, add the `Viewer` role to the member associated with the credentials (see the sketch below).
> By default, `prowler` will scan all accessible GCP Projects; use the `--project-ids` flag to specify the projects to be scanned.
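As an illustrative sketch, granting `Viewer` to a service account with the gcloud CLI (the project ID and service-account email are placeholders):
```console
gcloud projects add-iam-policy-binding <project-id> \
  --member="serviceAccount:<service-account-email>" \
  --role="roles/viewer"
```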
**Welcome to [Prowler Open Source v3](https://github.com/prowler-cloud/prowler/) Documentation!** 📄
For **Prowler v2 Documentation**, please go to the [2.12.0 branch](https://github.com/prowler-cloud/prowler/tree/2.12.0) and its README.md.
- You are currently in the **Getting Started** section where you can find general information and requirements to help you start with the tool.
- In the [Tutorials](./tutorials/misc.md) section you will see how to take advantage of all the features in Prowler.
- In the [Contact Us](./contact.md) section you can find how to reach out to us in case of technical issues.
- In the [About](./about.md) section you will find more information about the Prowler team and license.
## About Prowler
**Prowler** is an Open Source security tool to perform AWS, Azure and Google Cloud security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness.
It contains hundreds of controls covering CIS, PCI-DSS, ISO27001, GDPR, HIPAA, FFIEC, SOC2, AWS FTR, ENS and custom security frameworks.
<a href="https://prowler.pro"><img align="right" src="./img/prowler-pro-light.png" width="350"></a> **ProwlerPro** gives you the benefits of Prowler Open Source plus continuous monitoring, faster execution, personalized support, visualization of your data with dashboards, alerts and much more.
Visit <a href="https://prowler.pro">prowler.pro</a> for more info.
## Quick Start
### Installation
Prowler is available as a project in [PyPI](https://pypi.org/project/prowler-cloud/), thus it can be installed using pip with `Python >= 3.9`:
=== "Generic"
_Requirements_:
* `Python >= 3.9`
* `Python pip >= 3.9`
* AWS, GCP and/or Azure credentials
_Commands_:
``` bash
pip install prowler
prowler -v
```
=== "Docker"
_Requirements_:
* Have `docker` installed: https://docs.docker.com/get-docker/.
* AWS, GCP and/or Azure credentials
* In the command below, change `-v` to your local directory path in order to access the reports.
_Commands_:
``` bash
docker run -ti --rm -v /your/local/dir/prowler-output:/home/prowler/output \
--env AWS_ACCESS_KEY_ID \
--env AWS_SECRET_ACCESS_KEY \
--env AWS_SESSION_TOKEN toniblyx/prowler:latest
```
=== "Amazon Linux 2"
_Requirements_:
* Latest Amazon Linux 2 should come with Python 3.9 already installed, however it may need pip. Install Python pip 3.9 with: `sudo yum install -y python3-pip`.
* Make sure setuptools for Python is already installed with: `pip3 install setuptools`
_Commands_:
```
pip3.9 install prowler
export PATH=$PATH:$HOME/.local/bin/
prowler -v
```
=== "Brew"
_Requirements_:
* `Brew` installed in your Mac or Linux
* AWS, GCP and/or Azure credentials
_Commands_:
``` bash
brew install prowler
prowler -v
```
=== "AWS CloudShell"
Prowler can be easily executed in AWS CloudShell, but it has some prerequisites to be able to do so. AWS CloudShell is a container running `Amazon Linux release 2 (Karoo)` that comes with Python 3.7; since Prowler requires Python >= 3.9, we first need to install a newer version of Python. Follow the steps below to successfully execute Prowler v3 in AWS CloudShell:
_Requirements_:
* First install all the dependencies and then Python itself; in this case we need to compile it, because there is no package available at the time this document is written:
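A typical sequence looks like the following (the Python version shown is illustrative; pick a current 3.9.x release):
```
# Build dependencies for compiling CPython on Amazon Linux 2
sudo yum -y install gcc make openssl-devel bzip2-devel libffi-devel
# Download, build and install Python 3.9 alongside the system Python
wget https://www.python.org/ftp/python/3.9.16/Python-3.9.16.tgz
tar zxf Python-3.9.16.tgz
cd Python-3.9.16
./configure --enable-optimizations
sudo make altinstall
python3.9 --version
```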
* Once Python 3.9 is available we can install Prowler from pip:
```
pip3.9 install prowler
prowler -v
```
> To download the results from AWS CloudShell, select Actions -> Download File and add the full path of each file. For the CSV file it will be something like `/home/cloudshell-user/output/prowler-output-123456789012-20221220191331.csv`
=== "Azure CloudShell"
_Requirements_:
* Open Azure CloudShell `bash`.
_Commands_:
```
pip install prowler
prowler -v
```
## Prowler container versions
The available versions of Prowler are the following:
- `latest`: in sync with the `master` branch (bear in mind that it is not a stable version)
- `<x.y.z>` (release): you can find the releases [here](https://github.com/prowler-cloud/prowler/releases); those are stable releases.
- `stable`: this tag always points to the latest release.
The container images are available in:
- [AWS Public ECR](https://gallery.ecr.aws/prowler-cloud/prowler)
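For example, to pull the `stable` image from the AWS Public ECR gallery linked above (the pull URI is derived from that listing):
```console
docker pull public.ecr.aws/prowler-cloud/prowler:stable
```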
## High level architecture
You can run Prowler from your workstation, an EC2 instance, Fargate or any other container, CodeBuild, CloudShell, Cloud9 and many more.

## Basic Usage
To run Prowler, you will need to specify the provider (e.g. `aws`, `gcp` or `azure`):
> If no provider is specified, AWS will be used, for backward compatibility with most of the v2 options.
```console
prowler <provider>
```

> Running the `prowler` command without options will use your environment variable credentials, see [Requirements](./getting-started/requirements.md) section to review the credentials settings.
If you miss the former per-check output you can use `--verbose`, but Prowler v3 is blazing fast, so you won't see much ;)
By default, Prowler generates CSV, JSON and HTML reports; however, you can also generate a JSON-ASFF report (the format used by AWS Security Hub) with `-M` or `--output-modes`:
```console
prowler <provider> -M csv json json-asff html
```
The HTML report will be located in the output directory, like the other files, and will look like this:

You can use `-l`/`--list-checks` or `--list-services` to list all available checks or services within the provider.
```console
prowler <provider> --list-checks
prowler <provider> --list-services
```
For executing specific checks or services you can use the `-c`/`--checks` or `-s`/`--services` options:
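For example (the check name below is illustrative; use `--list-checks` to see the valid names for your provider):
```console
prowler aws --checks s3_bucket_public_access
prowler aws --services s3 ec2
```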
### Azure
See more details about Azure Authentication in [Requirements](getting-started/requirements.md)
Prowler by default scans all the subscriptions it is allowed to scan; if you want to scan a single subscription or several specific ones, you can use the following flag (using AZ CLI auth as an example):
```console
prowler azure --az-cli-auth --subscription-ids <subscription ID 1> <subscription ID 2> ... <subscription ID N>
```
### Google Cloud
By default, Prowler will use your User Account credentials. You can configure them using:
- `gcloud init` to use a new account
- `gcloud config set account <account>` to use an existing account
Then, obtain your access credentials using: `gcloud auth application-default login`
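Putting those steps together for an existing account (the account value is a placeholder):
```console
gcloud config set account <account>
gcloud auth application-default login
```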
Otherwise, you can generate and download Service Account keys in JSON format (refer to https://cloud.google.com/iam/docs/creating-managing-service-account-keys) and provide the location of the file with the following argument:
```console
prowler gcp --credentials-file path
```
Prowler by default scans all the GCP Projects it is allowed to scan; if you want to scan a single project or several specific ones, you can use the following flag:
```console
prowler gcp --project-ids <Project ID 1> <Project ID 2> ... <Project ID N>
```
See more details about GCP Authentication in [Requirements](getting-started/requirements.md)