Mirror of https://github.com/prowler-cloud/prowler.git, synced 2026-05-17 09:43:28 +00:00.
Compare commits
906 Commits
| SHA1 | Author | Date | Message |
|---|---|---|---|
| ef39a500b4 | |||
| 9b399409d5 | |||
| ccb1ca53ff | |||
| 2225cc48a5 | |||
| e3c63c01bb | |||
| 5fa44572d5 | |||
| 0da9bd80fb | |||
| b03a807d33 | |||
| c3e55b383b | |||
| 5daed282d5 | |||
| 98da3059b4 | |||
| 80fd5d1ba6 | |||
| 85242c7909 | |||
| ea6ab406c8 | |||
| cbf2a28bac | |||
| 5b1e7bb7f9 | |||
| e108b2caed | |||
| df1abb2152 | |||
| e0465f2aa2 | |||
| 51467767cd | |||
| bc71e7fb3b | |||
| 6a331c05e8 | |||
| 7ab503a096 | |||
| b368190c9f | |||
| 8915fdff18 | |||
| 9bf108e9cc | |||
| 87708e39cf | |||
| 44927c44e9 | |||
| 71aa29cf24 | |||
| aa14daf0db | |||
| eb5dbab86e | |||
| 223aab8ece | |||
| 3ec57340a0 | |||
| 80d73cc05b | |||
| 94f02df11e | |||
| c454ceb296 | |||
| 76ec13a1d6 | |||
| 783b6ea982 | |||
| 6b7b700a98 | |||
| b3f2a1c532 | |||
| c4e1bd3ed2 | |||
| d0d4e0d483 | |||
| 14a9f0e765 | |||
| b572575c8d | |||
| a626e41162 | |||
| 22343faa1e | |||
| c5b37887ef | |||
| f9aed36d0b | |||
| facc0627d7 | |||
| 76f0d890e9 | |||
| 7de7122c3b | |||
| 1b73ab2fe4 | |||
| cc8f6131e6 | |||
| dfd5c9aee7 | |||
| 3986bf3f42 | |||
| c45ef1e286 | |||
| 8d8f498dc2 | |||
| c4bd9122d4 | |||
| 644cdc81b9 | |||
| e5584f21b3 | |||
| b868d39bef | |||
| ef9809f61f | |||
| 9a04ca3611 | |||
| 1c9b3a1394 | |||
| 5ee7bd6459 | |||
| 05d2b86ba8 | |||
| 84c30af6f8 | |||
| e8a829b75e | |||
| a0d169470d | |||
| 1fd6046511 | |||
| 524455b0f3 | |||
| e6e1e37c1e | |||
| 2914510735 | |||
| 7e43c7797f | |||
| 6954ef880e | |||
| 5f5e7015a9 | |||
| bfafa518b1 | |||
| e34e59ff2d | |||
| 7f80d2db46 | |||
| 4a2a3921da | |||
| e26b2e6527 | |||
| 954814c1d7 | |||
| 113224cbd9 | |||
| f5f1fce779 | |||
| 0ba9383202 | |||
| 8e9a9797c7 | |||
| 2b4e6bffae | |||
| 74f7a86c2b | |||
| e218435b2f | |||
| 5ec34ad5e7 | |||
| c4b0859efd | |||
| 1241a490f9 | |||
| 4ec498a612 | |||
| 119c5e80a9 | |||
| d393bc48a2 | |||
| e09e3855b1 | |||
| 8751615faa | |||
| e7c17ab0b3 | |||
| f05d3eb334 | |||
| cf449d4607 | |||
| b338ac9add | |||
| 366d2b392a | |||
| 41fc536b44 | |||
| e042445ecf | |||
| c17129afe3 | |||
| 4876d8435c | |||
| 1bd0d774e5 | |||
| c119cece89 | |||
| e24b211d22 | |||
| c589c95727 | |||
| 7e4f1a73bf | |||
| 4d00aece45 | |||
| 49aaf011aa | |||
| 898934c7f8 | |||
| 81c4b5a9c1 | |||
| fe31656ffe | |||
| 359059dee6 | |||
| 2eaa37921d | |||
| 3a99909b75 | |||
| 2ecd9ad2c5 | |||
| 50dc396aa3 | |||
| acf333493a | |||
| bd6272f5a7 | |||
| 8c95e1efaf | |||
| 845a0aa0d5 | |||
| 75a11be9e6 | |||
| a778d005b6 | |||
| 1281f4ec5e | |||
| 6332427e5e | |||
| d89df83904 | |||
| be420afebc | |||
| fb914a2c90 | |||
| 4ac3cfc33d | |||
| c74360ab63 | |||
| 4dc4d82d42 | |||
| 6e7a32cb51 | |||
| 49e501c4be | |||
| 9ee78fe65f | |||
| 7a0549d39c | |||
| 3e8c86d880 | |||
| e34c18757d | |||
| 5c1a47d108 | |||
| 59c51d5a4a | |||
| 66aa67f636 | |||
| bdda377482 | |||
| aa11ed70bd | |||
| 0580dca6cf | |||
| 678ef0ab5a | |||
| 4888c27713 | |||
| b256c10622 | |||
| 878e4e0bbc | |||
| 6c3653c483 | |||
| 71ac703e6f | |||
| a89e3598f2 | |||
| 5d043cc929 | |||
| 921f94ebbf | |||
| 48c9ed8a79 | |||
| 12987ec9f9 | |||
| 40b90ed063 | |||
| 60314e781f | |||
| bc56d48595 | |||
| 2d71cef3d5 | |||
| 41f6637497 | |||
| c2e54bbbcc | |||
| df8aacd09d | |||
| 2dd6be59b9 | |||
| 9e8e3eb0e6 | |||
| 3728430f8c | |||
| ea97de7f43 | |||
| f254a4bc0d | |||
| 66acfd8691 | |||
| 02ca82004f | |||
| 60b5a79b27 | |||
| be1e3e942b | |||
| 3658e85cfc | |||
| 15e4d1acce | |||
| 44afd9ed31 | |||
| 4f099c5663 | |||
| eaec683eb9 | |||
| 50bcd828e9 | |||
| 91545e409e | |||
| 33031d2c96 | |||
| 1b42dda817 | |||
| f726d964a8 | |||
| 36aaec8a55 | |||
| 99164ce93e | |||
| 7ebc5d3c31 | |||
| 06ff3db8af | |||
| c44ea3943e | |||
| d036e0054b | |||
| f72eb7e212 | |||
| 62dcbc2961 | |||
| dddec4c688 | |||
| 6d00554082 | |||
| 65d3fcee4c | |||
| 16cd0e4661 | |||
| 6e184dae93 | |||
| 118f3d163d | |||
| 7d84d67935 | |||
| 1c1c58c975 | |||
| 31ea672c61 | |||
| 7016779b8e | |||
| 4e958fdf39 | |||
| c6259b6c75 | |||
| 021e243ada | |||
| acdf420941 | |||
| 4e84507130 | |||
| 2a61610fec | |||
| 9b127eba93 | |||
| 1a89d65516 | |||
| 84749df708 | |||
| 6f7cd85a18 | |||
| ad39061e1a | |||
| 615bacccaf | |||
| b3a2479fab | |||
| 871c877a33 | |||
| 7fd58de3bf | |||
| 40f24b4d70 | |||
| d8f80699d4 | |||
| f24d0efc77 | |||
| a18dd76a5a | |||
| a2362b4bbc | |||
| e5f1c2b19c | |||
| 0490ab6944 | |||
| 97baa8a1e6 | |||
| 637ebdc3db | |||
| 451b36093f | |||
| beb0457aff | |||
| 0335ea4e0b | |||
| 355abca5a3 | |||
| 7d69cc4cd9 | |||
| cdc4b362a4 | |||
| 6417e6bbba | |||
| b810d45d34 | |||
| f5a2695c3b | |||
| 977c788fff | |||
| 21f8b5dbad | |||
| 1c874d1283 | |||
| 8f9bdae2b7 | |||
| 600813fb99 | |||
| 5a9ccd60a0 | |||
| beb7a53efe | |||
| 8431ce42a1 | |||
| c5a9b63970 | |||
| a765c1543e | |||
| 484a773f5b | |||
| 9ecf570790 | |||
| f8c840f283 | |||
| deec9efa97 | |||
| 2ee62cca8e | |||
| 413b948ca0 | |||
| d548e869fa | |||
| 5c8919372c | |||
| 9baac9fd89 | |||
| 252b664e49 | |||
| 496e0f1e0a | |||
| 80342d612f | |||
| 02d7eaf268 | |||
| 1a8df3bf18 | |||
| 16f2209d3f | |||
| 70e22af550 | |||
| 44f26bc0d5 | |||
| a19f5d9a9a | |||
| b78f53a722 | |||
| c20f07ced4 | |||
| 7c3a53908b | |||
| ea3c71e22c | |||
| 40eaa79777 | |||
| aa8119970e | |||
| 55fc8cb55b | |||
| abf51eceee | |||
| 458c51dda3 | |||
| c8d2a44ab0 | |||
| 0a71628298 | |||
| 60e0040577 | |||
| 5c375d63c5 | |||
| 4d84529ba2 | |||
| 0737d9e8bb | |||
| 50c5294bc0 | |||
| f63e9e5e77 | |||
| 3cab52772c | |||
| 81aa035451 | |||
| 899f31f1ee | |||
| e142a9e0f4 | |||
| ed26c2c42c | |||
| 1017510a67 | |||
| bfa16607b0 | |||
| 4c874b68f5 | |||
| 9458e2bbc4 | |||
| 2da7b926ed | |||
| 8d4f0ab90a | |||
| 83aefc42c1 | |||
| a6489f39fd | |||
| 15c34952cf | |||
| d002f2f719 | |||
| 8530676419 | |||
| fe5a78e4d4 | |||
| d823b2b9de | |||
| 3b17eb024c | |||
| 87951a8371 | |||
| e5ca51d1e7 | |||
| e2fd3fe36e | |||
| 6b0d73d7f9 | |||
| 7eec60f4d9 | |||
| 9d788af932 | |||
| bbc0388d4d | |||
| 887db29d96 | |||
| ae74cab70a | |||
| e6d48c1fa4 | |||
| d5ab72a97c | |||
| 473631f83b | |||
| a580b1ee04 | |||
| 844dd5ba95 | |||
| 44f8e4c488 | |||
| 180eb61fee | |||
| 9828824b73 | |||
| c938a25693 | |||
| cccd69f27c | |||
| 3949806b5d | |||
| e7d249784d | |||
| 25b1efe532 | |||
| c289ddacf2 | |||
| 3fd9c51086 | |||
| de01087246 | |||
| fe42bb47f7 | |||
| c56bd519bb | |||
| 79b29d9437 | |||
| 82eecec277 | |||
| ceacd077d2 | |||
| 5a0fb13ece | |||
| 78439b4c0c | |||
| 06f94f884f | |||
| b8836c6404 | |||
| ac79b86810 | |||
| 793c2ae947 | |||
| cdcc5c6e35 | |||
| 51db81aa5c | |||
| a51a185f49 | |||
| 90453fd07e | |||
| d740bf84c3 | |||
| d13d2677ea | |||
| b076c98ba1 | |||
| d071dea7f7 | |||
| d9782c7b8a | |||
| f85450d0b5 | |||
| b129326ed6 | |||
| eaf0d06b63 | |||
| 87f3e0a138 | |||
| 8e3c856a14 | |||
| 12c2439196 | |||
| deb1e0ff34 | |||
| 808e8297b0 | |||
| 738ce56955 | |||
| 190fd0b93c | |||
| ca6df26918 | |||
| bcfeb97e4a | |||
| 0234957907 | |||
| 8713b74204 | |||
| cbaddad358 | |||
| 2379544425 | |||
| 29fefba62e | |||
| 098382117e | |||
| d816d73174 | |||
| 30eb78c293 | |||
| a671b092ee | |||
| 0edf199282 | |||
| 2478555f0e | |||
| b07080245d | |||
| 2ebf217bb0 | |||
| bb527024d9 | |||
| e897978c3e | |||
| 00f1c02532 | |||
| 348d1a2fda | |||
| f1df8ba458 | |||
| b5ea418933 | |||
| 734fa5a4e6 | |||
| 08f6d4b69b | |||
| 29d3bb9f9a | |||
| 4d217e642b | |||
| bd56e03991 | |||
| 0b6aa0ddcd | |||
| 4f3496194d | |||
| d09a680aaa | |||
| 56d7431d56 | |||
| abae5f1626 | |||
| 7d0e94eecb | |||
| 23b65c7728 | |||
| aa3182ebc5 | |||
| 32d27df0ba | |||
| 6439f0a5f3 | |||
| 19476632ff | |||
| d4c12e4632 | |||
| 52bd48168f | |||
| c0d935e232 | |||
| 24dfd47329 | |||
| fbae338689 | |||
| 186fd88f8c | |||
| 14ff34c00a | |||
| a66fa394d3 | |||
| 931766fe08 | |||
| c134914896 | |||
| 25dac080a5 | |||
| 910d39eee4 | |||
| d604ae5569 | |||
| 42f46b0fb1 | |||
| abb5864224 | |||
| 2e2a2bd89a | |||
| f8ee841921 | |||
| ceda8c76d2 | |||
| afe0b7443f | |||
| 9b773897d2 | |||
| d6ec4c2c96 | |||
| 14ef169e99 | |||
| 22141f9706 | |||
| a5c6fee5b4 | |||
| d3a5a5c0a1 | |||
| 5d81869de4 | |||
| 73ebf95d89 | |||
| 9f4574f4ff | |||
| cb239b20ab | |||
| 3ef79588b4 | |||
| 61000e386b | |||
| 53cb57901f | |||
| 993ff4d78e | |||
| 8fb10fbbf7 | |||
| 11e834f639 | |||
| 62bf2fbb9c | |||
| e57930d6c2 | |||
| e0c417a466 | |||
| b55f8efed1 | |||
| 7cbc60d977 | |||
| 5b7912b558 | |||
| 57fca3e54d | |||
| e31c27b123 | |||
| 74f1da818e | |||
| 910cfa601b | |||
| fe321c3f8a | |||
| 43de0d405f | |||
| ac6ed31c8e | |||
| 9d47437de4 | |||
| eb7a62ff77 | |||
| 67bc16b46d | |||
| 8552a578a0 | |||
| a5d277e045 | |||
| 6dbf2ac606 | |||
| b1569ac2f3 | |||
| 3d0145b522 | |||
| 44174526d6 | |||
| 0fd395ea83 | |||
| 5e9d4a80a1 | |||
| e4d234fe03 | |||
| 3202184718 | |||
| 41e576f4f1 | |||
| d8dce07019 | |||
| 2b0a3144c7 | |||
| 62fbce0b5e | |||
| 5a59bb335c | |||
| 2719991630 | |||
| 6a3b8c4674 | |||
| 191fbf0177 | |||
| 228dd2952a | |||
| 97db38aa25 | |||
| dc953a6e22 | |||
| 51e796a48d | |||
| 024f1425df | |||
| a7ed610da9 | |||
| 7ba99f22cd | |||
| b8ce09ec34 | |||
| c243110a49 | |||
| ee27636f32 | |||
| f2f41c9c44 | |||
| 9312890e6a | |||
| 9578281b4f | |||
| 08690068fc | |||
| e06a33de84 | |||
| 6a3db10fda | |||
| bbed445efa | |||
| 9d65fb0bf2 | |||
| 34f03ca110 | |||
| 87c038f0c2 | |||
| b3014f03b1 | |||
| d39598c9fc | |||
| 5ea9106259 | |||
| bcc0b59de1 | |||
| 5d6ed640f0 | |||
| dd1cc2d025 | |||
| 52e5cc23e4 | |||
| 76a8e2be1f | |||
| d989425490 | |||
| 1e324b7ed2 | |||
| e68aa62f94 | |||
| 332b98a1ab | |||
| dd05ef7974 | |||
| d6862766d3 | |||
| f52d005e2d | |||
| bf475234a5 | |||
| cd5985c056 | |||
| ce33dbf823 | |||
| 0a9d0688a7 | |||
| 24784f2ce5 | |||
| 7a1e611b88 | |||
| 3073150008 | |||
| 9923def4cb | |||
| a7f612303f | |||
| 64c2a2217a | |||
| 4689d7a952 | |||
| 87cd143967 | |||
| e37fd05d58 | |||
| acc708bda5 | |||
| c7460bb69c | |||
| 84b273dab9 | |||
| bb7ce2157e | |||
| 07b9e1d3a4 | |||
| 96a879d761 | |||
| 283127c3f4 | |||
| beeee80a0b | |||
| 06b62826b4 | |||
| d0736af209 | |||
| 716c8c1a5f | |||
| e6cdda1bd9 | |||
| 2747a633bc | |||
| 74118f5cfe | |||
| 598bdf28bb | |||
| d75f681c87 | |||
| c7956ede6a | |||
| 64f5a69e84 | |||
| bfb15c34b8 | |||
| 638b3ac0cd | |||
| 9d6147a037 | |||
| 802c786ac2 | |||
| c8be8dbd9a | |||
| 7053b2bb37 | |||
| 447bf832cd | |||
| 7c4571b55e | |||
| eb7c16aba5 | |||
| b09e83b171 | |||
| bb149a30a7 | |||
| d5be35af49 | |||
| f6aa56d92b | |||
| 6a4df15c47 | |||
| 72de5fdb1b | |||
| a7f55d06af | |||
| 97da78d4e7 | |||
| c4f6161c73 | |||
| db7ffea24d | |||
| 489b5abf82 | |||
| 3a55c2ee07 | |||
| 64d866271c | |||
| 1ab2a80eab | |||
| 89d4c521ba | |||
| f2e19d377a | |||
| 2b7b887b87 | |||
| 44c70b5d01 | |||
| 7514484c42 | |||
| 9594c4c99f | |||
| 56445c9753 | |||
| 07419fd5e1 | |||
| 2e4dd12b41 | |||
| fed2046c49 | |||
| db79db4786 | |||
| 6f027e3c57 | |||
| bdb877009f | |||
| 6564ec1ff5 | |||
| 443dc067b3 | |||
| 6221650c5f | |||
| 034d0fd1f4 | |||
| e617ff0460 | |||
| 4b1ed607a7 | |||
| 137365a670 | |||
| 1891a1b24f | |||
| e57e070866 | |||
| 66998cd1ad | |||
| c0b1833446 | |||
| 329a72c77c | |||
| 2610ee9d0c | |||
| a13ca9034e | |||
| 5d1abb3689 | |||
| e1d1c6d154 | |||
| e18e0e7cd4 | |||
| eaf3d07a3f | |||
| c88ae32b7f | |||
| 605613e220 | |||
| d2772000ec | |||
| 42939a79f5 | |||
| ed17931117 | |||
| 66df5f7a1c | |||
| fc6e6696e5 | |||
| 465748c8a1 | |||
| e59cd71bbf | |||
| 8a76fea310 | |||
| 0e46be54ec | |||
| dc81813fdf | |||
| eaa0df16bb | |||
| c23e911028 | |||
| 06b96a1007 | |||
| fa545c591f | |||
| e828b780c7 | |||
| eca8c5cabd | |||
| b7bce6008f | |||
| 2fdf89883d | |||
| 6c5d4bbaaa | |||
| cb2f926d4f | |||
| 12c01b437e | |||
| 3253a58942 | |||
| 199f7f14ea | |||
| d42406d765 | |||
| 2276ffb1f6 | |||
| 218fb3afb0 | |||
| a9fb890979 | |||
| 54ebf5b455 | |||
| c9a0475aa8 | |||
| 5567d9f88c | |||
| 56f3e661ae | |||
| 1aa4479a10 | |||
| 7b625d0a91 | |||
| fd0529529d | |||
| af43191954 | |||
| 2ce2ca7c91 | |||
| a0fc3db665 | |||
| feb458027f | |||
| e5a5b7af5c | |||
| ad456ae2fe | |||
| 690cb51f6c | |||
| 14aaa2f376 | |||
| 6e47ca2c41 | |||
| 0d99d2be9b | |||
| c322ef00e7 | |||
| 3513421225 | |||
| b0e6bfbefe | |||
| f7a918730e | |||
| cef33319c5 | |||
| 2036a59210 | |||
| e5eccb6227 | |||
| 48c2c8567c | |||
| bbeef0299f | |||
| bec5584d63 | |||
| bdc759d34c | |||
| 8db442d8ba | |||
| 9e7a0d4175 | |||
| 9c33b3f5a9 | |||
| 7e7e2c87dc | |||
| 2f741f35a8 | |||
| c411466df7 | |||
| 9679939307 | |||
| 8539423b22 | |||
| 81edafdf09 | |||
| e0a262882a | |||
| 89237ab99e | |||
| 0f414e451e | |||
| 1180522725 | |||
| 81c7ebf123 | |||
| 258f05e6f4 | |||
| 53efb1c153 | |||
| 26014a9705 | |||
| 00ef037e45 | |||
| 669ec74e67 | |||
| c4528200b0 | |||
| ba7cd0250a | |||
| c5e97678a1 | |||
| 337a46cdcc | |||
| 7f74b67f1f | |||
| 5dcc48d2e5 | |||
| 8b04aab07d | |||
| eab4f6cf2e | |||
| 7f8d623283 | |||
| dbffed8f1f | |||
| 7e3688fdd0 | |||
| 2e111e9ad3 | |||
| 6d6070ff3f | |||
| 391bbde353 | |||
| 3c56eb3762 | |||
| 7c14ea354b | |||
| c96aad0b77 | |||
| a9dd3e424b | |||
| 8a144a4046 | |||
| 75f86d7267 | |||
| bbf875fc2f | |||
| 59d491f61b | |||
| ed640a1324 | |||
| e86fbcaef7 | |||
| 7f48212054 | |||
| a2c5c71baf | |||
| b904f81cb9 | |||
| d64fe374dd | |||
| fe25e7938e | |||
| 931df361bf | |||
| d7c45f4aee | |||
| 5e5bef581b | |||
| 2d9e95d812 | |||
| e5f979d106 | |||
| c7a5815203 | |||
| 03e268722e | |||
| 78a2774329 | |||
| c1b5ab7f53 | |||
| b861d97ad4 | |||
| f3abcc9dd6 | |||
| cab13fe018 | |||
| cc4b19c7ce | |||
| a754d9aee5 | |||
| 22b54b2d8d | |||
| d12ca6301a | |||
| bc1b2ad9ab | |||
| 1782ab1514 | |||
| 0384fc50e3 | |||
| cc46dee9ee | |||
| ed5a0ae45a | |||
| 928ccfefb8 | |||
| 7f6bfb7b3e | |||
| bcbc9bf675 | |||
| 0ec4366f4c | |||
| ff72b7eea1 | |||
| a32ca19251 | |||
| b79508956a | |||
| d76c5bd658 | |||
| 580e11126c | |||
| 736d40546a | |||
| 88810d2bb5 | |||
| 3a8f4d2ffb | |||
| 1fe125a65f | |||
| 0ff4df0836 | |||
| 16b4775e2d | |||
| c3a13b8a29 | |||
| d1053375b7 | |||
| 0fa4538256 | |||
| 738644f288 | |||
| 2f80b055ac | |||
| fd62a1df10 | |||
| a85d0ebd0a | |||
| 2c06902baa | |||
| 76ac6429fe | |||
| 43cae66b0d | |||
| dacddecc7d | |||
| dcb9267c2f | |||
| ff35fd90fa | |||
| 7469377079 | |||
| c8441f8d38 | |||
| abf4eb0ffc | |||
| 93717cc830 | |||
| b629bc81f8 | |||
| f628897fe1 | |||
| 54b82a78e3 | |||
| 377faf145f | |||
| 69e316948f | |||
| 62cbff4f53 | |||
| 5582265e9d | |||
| fb5ea3c324 | |||
| 9b5f676f50 | |||
| 88cfc0fa7e | |||
| 665bfa2f13 | |||
| b89b1a64f4 | |||
| 9ba657c261 | |||
| bce958b8e6 | |||
| 914012de2b | |||
| 8d1c476aed | |||
| 567c729e9e | |||
| 3f03dd20e4 | |||
| 1c778354da | |||
| 3a149fa459 | |||
| f3b121950d | |||
| 43c13b7ba1 | |||
| 9447b33800 | |||
| 2934752eeb | |||
| dd6d8c71fd | |||
| 80267c389b | |||
| acfbaf75d5 | |||
| 5f54377407 | |||
| 552aa64741 | |||
| d64f611f51 | |||
| a96cc92d77 | |||
| 3858cccc41 | |||
| 072828512a | |||
| a73ffe5642 | |||
| 8e784a5b6d | |||
| 1b6f9332f1 | |||
| db8b472729 | |||
| 867b371522 | |||
| c0d7c9fc7d | |||
| bb4685cf90 | |||
| 6a95426749 | |||
| ef6af8e84d | |||
| 763130f253 | |||
| 1256c040e9 | |||
| 18b7b48a99 | |||
| 627c11503f | |||
| 712ba84f06 | |||
| 5186e029b3 | |||
| 5bfaedf903 | |||
| 5061da6897 | |||
| c159a28016 | |||
| 82a1b1c921 | |||
| bf2210d0f4 | |||
| 8f0772cb94 | |||
| 5b57079ecd | |||
| 350d759517 | |||
| edd793c9f5 | |||
| 545c2dc685 | |||
| 84955c066c | |||
| 06dd03b170 | |||
| 47bc2ed2dc | |||
| 44281afc54 | |||
| 4d2859d145 | |||
| 45d44a1669 | |||
| ddd83b340e | |||
| ccdb54d7c3 | |||
| bcc246d950 | |||
| 62139e252a | |||
| 86950c3a0a | |||
| f4865ef68d | |||
| ea7209e7ae | |||
| 998c551cf3 | |||
| e6f29b0116 | |||
| eb90bb39dc | |||
| ad189b35ad | |||
| 7d2989a233 | |||
| 862137ae7d | |||
| c86e082d9a | |||
| 80fe048f97 | |||
| f2bffb3ce7 | |||
| cbe2f9eef8 | |||
| 688f41f570 | |||
| a29197637e | |||
| 7a2712a37f | |||
| 189f5cfd8c | |||
| e509480892 | |||
| 7f7955351a | |||
| 46f1db21a8 | |||
| fbe7bc6951 | |||
| f658507847 | |||
| 374078683b | |||
| 114c4e0886 | |||
| 67c62766d4 | |||
| 3f2947158d | |||
| 278a7cb356 | |||
| 890158a79c | |||
| 4dc1602b77 | |||
| bbba0abac9 | |||
| d04fd807c6 | |||
| 3456df4cf1 | |||
| f56aaa791e | |||
| 465a758770 | |||
| 0f7c0c1b2c | |||
| bf8d10b6f6 | |||
| 20d04553d6 | |||
| b56d62e3c4 | |||
| 9a332dcba1 | |||
| 166d9f8823 | |||
| 42f5eed75f | |||
| 01a7db18dd | |||
| d4507465a3 | |||
| 3ac92ed10a | |||
| 43c76ca85c | |||
| 54d87fa96a | |||
| f041f17268 | |||
| 31c80a6967 | |||
| 783ce136f4 | |||
| f829145781 | |||
| 389337f8cd | |||
| a0713c2d66 | |||
| f94d3cbce4 | |||
| 8d8994b468 | |||
| 784a9097a5 | |||
| b9601626e3 | |||
| dc80b011f2 | |||
| ee7d32d460 | |||
| 43fd9ee94e | |||
| 8821a91f3f | |||
| 98d9256f92 | |||
| b35495eaa7 | |||
| 74d6b614b3 | |||
| dd63c16a74 | |||
| 4280266a96 | |||
| b1f02098ff | |||
| 95189b574a | |||
| c5d23503bf | |||
| 77950f6069 | |||
| ec5f2b3753 | |||
| 9e7104fb7f | |||
| 6b3b6ca45e | |||
| 20b8b0b24e | |||
| 4e11540458 | |||
| ee87f2676d | |||
| 74a90aab98 | |||
| 48ff9a5100 | |||
| 3dfd578ee5 | |||
| 0db46cdc81 | |||
| fdac58d031 | |||
| df9d4ce856 | |||
| e6ae4e97e8 | |||
| 10a4c28922 | |||
| 8a828c6e51 | |||
| d7b40905ff | |||
| f9a3b5f3cd | |||
| b73b89242f | |||
| 23a0f6e8de | |||
| 87967abc3f | |||
| ce60c286dc | |||
| 90fd9b0eb8 | |||
| ca262a6797 | |||
| c056d39775 | |||
| 1c4426ea4b | |||
| 36520bd7a1 | |||
| badf0ace76 | |||
| f1f61249e0 | |||
| b371cac18c | |||
| 1846535d8d | |||
| d7d9118b9b | |||
@@ -11,6 +11,9 @@ AUTH_TRUST_HOST=true
 UI_PORT=3000
 # openssl rand -base64 32
 AUTH_SECRET="N/c6mnaS5+SWq81+819OrzQZlmx1Vxtp/orjttJSmw8="
+# Google Tag Manager ID
+NEXT_PUBLIC_GOOGLE_TAG_MANAGER_ID=""

 #### Prowler API Configuration ####
 PROWLER_API_VERSION="stable"
@@ -127,7 +130,7 @@ SENTRY_ENVIRONMENT=local
 SENTRY_RELEASE=local

 #### Prowler release version ####
-NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.5.1
+NEXT_PUBLIC_PROWLER_RELEASE_VERSION=v5.6.0

 # Social login credentials
 SOCIAL_GOOGLE_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/google"
@@ -137,3 +140,13 @@ SOCIAL_GOOGLE_OAUTH_CLIENT_SECRET=""
 SOCIAL_GITHUB_OAUTH_CALLBACK_URL="${AUTH_URL}/api/auth/callback/github"
 SOCIAL_GITHUB_OAUTH_CLIENT_ID=""
 SOCIAL_GITHUB_OAUTH_CLIENT_SECRET=""
+
+# Single Sign-On (SSO)
+SAML_PUBLIC_CERT=""
+SAML_PRIVATE_KEY=""
+
+# Lighthouse tracing
+LANGSMITH_TRACING=false
+LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
+LANGSMITH_API_KEY=""
+LANGCHAIN_PROJECT=""
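The `AUTH_SECRET` above is a sample value; the inline comment (`# openssl rand -base64 32`) names the command used to generate it. A minimal sketch for producing and rotating the secret locally (the `.env` path and the `sed` one-liner are illustrative, not part of the diff):

```bash
# Generate a fresh 32-byte, base64-encoded secret, per the comment in the diff.
openssl rand -base64 32

# Hypothetical helper: write a new secret into an existing .env in place.
# sed -i "s|^AUTH_SECRET=.*|AUTH_SECRET=\"$(openssl rand -base64 32)\"|" .env
```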
@@ -27,6 +27,11 @@ provider/github:
   - any-glob-to-any-file: "prowler/providers/github/**"
   - any-glob-to-any-file: "tests/providers/github/**"

+provider/iac:
+  - changed-files:
+      - any-glob-to-any-file: "prowler/providers/iac/**"
+      - any-glob-to-any-file: "tests/providers/iac/**"
+
 github_actions:
   - changed-files:
       - any-glob-to-any-file: ".github/workflows/*"
@@ -16,6 +16,7 @@ Please include a summary of the change and which issue is fixed. List any depend
 - [ ] Review if code is being documented following this specification https://github.com/google/styleguide/blob/gh-pages/pyguide.md#38-comments-and-docstrings
 - [ ] Review if backport is needed.
 - [ ] Review if is needed to change the [Readme.md](https://github.com/prowler-cloud/prowler/blob/master/README.md)
+- [ ] Ensure new entries are added to [CHANGELOG.md](https://github.com/prowler-cloud/prowler/blob/master/prowler/CHANGELOG.md), if applicable.

 #### API
 - [ ] Verify if API specs need to be regenerated.
@@ -81,7 +81,7 @@ jobs:
       - name: Build and push container image (latest)
         # Comment the following line for testing
         if: github.event_name == 'push'
-       uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
+       uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
         with:
           context: ${{ env.WORKING_DIRECTORY }}
           # Set push: false for testing
@@ -94,7 +94,7 @@ jobs:

       - name: Build and push container image (release)
         if: github.event_name == 'release'
-       uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
+       uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
         with:
           context: ${{ env.WORKING_DIRECTORY }}
           push: true
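A pattern worth noting across these workflow bumps: every third-party action is pinned to a full commit SHA, with the human-readable tag kept in a trailing comment. As a hedged sketch, one way to look up the commit behind a tag (using the `docker/build-push-action` tag from this diff as the example):

```bash
# List the refs matching the tag; for annotated tags, the line whose ref
# ends in "^{}" carries the commit SHA that the pin should use.
git ls-remote https://github.com/docker/build-push-action "refs/tags/v6.18.0*"
```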
@@ -48,12 +48,12 @@ jobs:

       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-       uses: github/codeql-action/init@28deaeda66b76a05916b6923827895f2b14ab387 # v3.28.16
+       uses: github/codeql-action/init@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
         with:
           languages: ${{ matrix.language }}
           config-file: ./.github/codeql/api-codeql-config.yml

       - name: Perform CodeQL Analysis
-       uses: github/codeql-action/analyze@28deaeda66b76a05916b6923827895f2b14ab387 # v3.28.16
+       uses: github/codeql-action/analyze@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
         with:
           category: "/language:${{matrix.language}}"
@@ -28,6 +28,10 @@ env:
   VALKEY_DB: 0
   API_WORKING_DIR: ./api
   IMAGE_NAME: prowler-api
+  IGNORE_FILES: |
+    api/docs/**
+    api/README.md
+    api/CHANGELOG.md

 jobs:
   test:
@@ -78,12 +82,7 @@ jobs:
         uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
         with:
           files: api/**
-         files_ignore: |
-           api/.github/**
-           api/docs/**
-           api/permissions/**
-           api/README.md
-           api/mkdocs.yml
+         files_ignore: ${{ env.IGNORE_FILES }}

       - name: Replace @master with current branch in pyproject.toml
         working-directory: ./api
@@ -113,6 +112,12 @@ jobs:
           python-version: ${{ matrix.python-version }}
           cache: "poetry"

+      - name: Install system dependencies for xmlsec
+        if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y libxml2-dev libxmlsec1-dev libxmlsec1-openssl pkg-config
+
       - name: Install dependencies
         working-directory: ./api
         if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
@@ -131,6 +136,12 @@ jobs:
         run: |
           poetry check --lock

+      - name: Prevents known compatibility error between lxml and libxml2/libxmlsec versions - https://github.com/xmlsec/python-xmlsec/issues/320
+        working-directory: ./api
+        if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
+        run: |
+          poetry run pip install --force-reinstall --no-binary lxml lxml
+
       - name: Lint with ruff
         working-directory: ./api
         if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
@@ -158,8 +169,9 @@ jobs:
       - name: Safety
         working-directory: ./api
         if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
+        # 76352 and 76353 come from SDK, but they cannot upgrade it yet. It does not affect API
         run: |
-         poetry run safety check --ignore 70612,66963,74429
+         poetry run safety check --ignore 70612,66963,74429,76352,76353

       - name: Vulture
         working-directory: ./api
@@ -181,7 +193,7 @@ jobs:

       - name: Upload coverage reports to Codecov
         if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
-       uses: codecov/codecov-action@ad3126e916f78f00edff4ed0317cf185271ccc2d # v5.4.2
+       uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
         env:
           CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
         with:
@@ -190,10 +202,19 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

+      - name: Test if changes are in not ignored paths
+        id: are-non-ignored-files-changed
+        uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
+        with:
+          files: api/**
+          files_ignore: ${{ env.IGNORE_FILES }}
       - name: Set up Docker Buildx
+        if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
         uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
       - name: Build Container
-       uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
+        if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
+        uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
         with:
           context: ${{ env.API_WORKING_DIR }}
           push: false
@@ -0,0 +1,67 @@
name: Create Backport Label

on:
  release:
    types: [published]

jobs:
  create_label:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      issues: write
    steps:
      - name: Create backport label
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          RELEASE_TAG: ${{ github.event.release.tag_name }}
          OWNER_REPO: ${{ github.repository }}
        run: |
          VERSION_ONLY=${RELEASE_TAG#v} # Remove 'v' prefix if present (e.g., v3.2.0 -> 3.2.0)

          # Check if it's a minor version (X.Y.0)
          if [[ "$VERSION_ONLY" =~ ^[0-9]+\.[0-9]+\.0$ ]]; then
            echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is a minor version. Proceeding to create backport label."

            TWO_DIGIT_VERSION=${VERSION_ONLY%.0} # Extract X.Y from X.Y.0 (e.g., 5.6 from 5.6.0)

            FINAL_LABEL_NAME="backport-to-v${TWO_DIGIT_VERSION}"
            FINAL_DESCRIPTION="Backport PR to the v${TWO_DIGIT_VERSION} branch"

            echo "Effective label name will be: ${FINAL_LABEL_NAME}"
            echo "Effective description will be: ${FINAL_DESCRIPTION}"

            # Check if the label already exists
            STATUS_CODE=$(curl -s -o /dev/null -w "%{http_code}" -H "Authorization: token ${GITHUB_TOKEN}" "https://api.github.com/repos/${OWNER_REPO}/labels/${FINAL_LABEL_NAME}")

            if [ "${STATUS_CODE}" -eq 200 ]; then
              echo "Label '${FINAL_LABEL_NAME}' already exists."
            elif [ "${STATUS_CODE}" -eq 404 ]; then
              echo "Label '${FINAL_LABEL_NAME}' does not exist. Creating it..."
              # Prepare JSON data payload
              JSON_DATA=$(printf '{"name":"%s","description":"%s","color":"B60205"}' "${FINAL_LABEL_NAME}" "${FINAL_DESCRIPTION}")

              CREATE_STATUS_CODE=$(curl -s -o /tmp/curl_create_response.json -w "%{http_code}" -X POST \
                -H "Accept: application/vnd.github.v3+json" \
                -H "Authorization: token ${GITHUB_TOKEN}" \
                --data "${JSON_DATA}" \
                "https://api.github.com/repos/${OWNER_REPO}/labels")

              CREATE_RESPONSE_BODY=$(cat /tmp/curl_create_response.json)
              rm -f /tmp/curl_create_response.json

              if [ "$CREATE_STATUS_CODE" -eq 201 ]; then
                echo "Label '${FINAL_LABEL_NAME}' created successfully."
              else
                echo "Error creating label '${FINAL_LABEL_NAME}'. Status: $CREATE_STATUS_CODE"
                echo "Response: $CREATE_RESPONSE_BODY"
                exit 1
              fi
            else
              echo "Error checking for label '${FINAL_LABEL_NAME}'. HTTP Status: ${STATUS_CODE}"
              exit 1
            fi
          else
            echo "Release ${RELEASE_TAG} (version ${VERSION_ONLY}) is not a minor version. Skipping backport label creation."
            exit 0
          fi
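The label-creation gate above rests entirely on the `^[0-9]+\.[0-9]+\.0$` match against the stripped tag. A quick local sketch of the same logic, useful for checking which tags would produce a label (sample tags are made up):

```bash
# Exercise the workflow's minor-version test against a few sample tags.
for tag in v5.6.0 v5.6.1 v5.7.0 5.8.0; do
  version=${tag#v}
  if [[ "$version" =~ ^[0-9]+\.[0-9]+\.0$ ]]; then
    echo "$tag -> creates label backport-to-v${version%.0}"
  else
    echo "$tag -> skipped (not a minor release)"
  fi
done
```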
@@ -11,7 +11,7 @@ jobs:
         with:
           fetch-depth: 0
       - name: TruffleHog OSS
-       uses: trufflesecurity/trufflehog@b06f6d72a3791308bb7ba59c2b8cb7a083bd17e4 # v3.88.26
+       uses: trufflesecurity/trufflehog@90694bf9af66e7536abc5824e7a87246dbf933cb # v3.88.35
         with:
           path: ./
           base: ${{ github.event.repository.default_branch }}
@@ -0,0 +1,86 @@
name: Check Changelog

on:
  pull_request:
    types: [opened, synchronize, reopened, labeled, unlabeled]

jobs:
  check-changelog:
    if: contains(github.event.pull_request.labels.*.name, 'no-changelog') == false
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
      pull-requests: write
    env:
      MONITORED_FOLDERS: "api ui prowler"

    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-depth: 0

      - name: Get list of changed files
        id: changed_files
        run: |
          git fetch origin ${{ github.base_ref }}
          git diff --name-only origin/${{ github.base_ref }}...HEAD > changed_files.txt
          cat changed_files.txt

      - name: Check for folder changes and changelog presence
        id: check_folders
        run: |
          missing_changelogs=""

          for folder in $MONITORED_FOLDERS; do
            if grep -q "^${folder}/" changed_files.txt; then
              echo "Detected changes in ${folder}/"
              if ! grep -q "^${folder}/CHANGELOG.md$" changed_files.txt; then
                echo "No changelog update found for ${folder}/"
                missing_changelogs="${missing_changelogs}- \`${folder}\`\n"
              fi
            fi
          done

          echo "missing_changelogs<<EOF" >> $GITHUB_OUTPUT
          echo -e "${missing_changelogs}" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Find existing changelog comment
        if: github.event.pull_request.head.repo.full_name == github.repository
        id: find_comment
        uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
        with:
          issue-number: ${{ github.event.pull_request.number }}
          comment-author: 'github-actions[bot]'
          body-includes: '<!-- changelog-check -->'

      - name: Comment on PR if changelog is missing
        if: github.event.pull_request.head.repo.full_name == github.repository && steps.check_folders.outputs.missing_changelogs != ''
        uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
        with:
          issue-number: ${{ github.event.pull_request.number }}
          comment-id: ${{ steps.find_comment.outputs.comment-id }}
          body: |
            <!-- changelog-check -->
            ⚠️ **Changes detected in the following folders without a corresponding update to the `CHANGELOG.md`:**

            ${{ steps.check_folders.outputs.missing_changelogs }}

            Please add an entry to the corresponding `CHANGELOG.md` file to maintain a clear history of changes.

      - name: Comment on PR if all changelogs are present
        if: github.event.pull_request.head.repo.full_name == github.repository && steps.check_folders.outputs.missing_changelogs == ''
        uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
        with:
          issue-number: ${{ github.event.pull_request.number }}
          comment-id: ${{ steps.find_comment.outputs.comment-id }}
          body: |
            <!-- changelog-check -->
            ✅ All necessary `CHANGELOG.md` files have been updated. Great job! 🎉

      - name: Fail if changelog is missing
        if: steps.check_folders.outputs.missing_changelogs != ''
        run: |
          echo "ERROR: Missing changelog updates in some folders."
          exit 1
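The folder-versus-changelog test in this workflow is plain `git diff` plus `grep`, so it can be approximated locally before opening a PR. A minimal sketch, assuming `origin/master` as the base branch:

```bash
# Approximate the CI changelog gate locally.
base=origin/master   # assumed base branch
git fetch origin
git diff --name-only "$base"...HEAD > changed_files.txt
for folder in api ui prowler; do
  if grep -q "^${folder}/" changed_files.txt && \
     ! grep -q "^${folder}/CHANGELOG.md$" changed_files.txt; then
    echo "Missing changelog update for ${folder}/"
  fi
done
```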
@@ -18,7 +18,7 @@ jobs:
       - name: Set short git commit SHA
         id: vars
         run: |
-         shortSha=$(git rev-parse --short ${{ github.sha }})
+         shortSha=$(git rev-parse --short ${{ github.event.pull_request.merge_commit_sha }})
           echo "SHORT_SHA=${shortSha}" >> $GITHUB_ENV

       - name: Trigger pull request
@@ -28,7 +28,7 @@ jobs:
           repository: ${{ secrets.CLOUD_DISPATCH }}
           event-type: prowler-pull-request-merged
           client-payload: '{
-           "PROWLER_COMMIT_SHA": "${{ github.sha }}",
+           "PROWLER_COMMIT_SHA": "${{ github.event.pull_request.merge_commit_sha }}",
             "PROWLER_COMMIT_SHORT_SHA": "${{ env.SHORT_SHA }}",
             "PROWLER_PR_TITLE": "${{ github.event.pull_request.title }}",
             "PROWLER_PR_LABELS": ${{ toJson(github.event.pull_request.labels.*.name) }},
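The change above swaps `github.sha` for the pull request's `merge_commit_sha`, so downstream payloads reference the actual merge commit; the shortening step itself is ordinary `git rev-parse`. A sketch, with `HEAD` standing in for the merge commit supplied by the event:

```bash
# Shorten a full SHA the way the workflow does; HEAD is a stand-in here.
shortSha=$(git rev-parse --short HEAD)
echo "SHORT_SHA=${shortSha}"
```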
@@ -127,7 +127,7 @@ jobs:

       - name: Build and push container image (latest)
         if: github.event_name == 'push'
-       uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
+       uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
         with:
           push: true
           tags: |
@@ -140,7 +140,7 @@ jobs:

       - name: Build and push container image (release)
         if: github.event_name == 'release'
-       uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
+       uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
         with:
           # Use local context to get changes
           # https://github.com/docker/build-push-action#path-context
@@ -56,12 +56,12 @@ jobs:

       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-       uses: github/codeql-action/init@28deaeda66b76a05916b6923827895f2b14ab387 # v3.28.16
+       uses: github/codeql-action/init@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
         with:
           languages: ${{ matrix.language }}
           config-file: ./.github/codeql/sdk-codeql-config.yml

       - name: Perform CodeQL Analysis
-       uses: github/codeql-action/analyze@28deaeda66b76a05916b6923827895f2b14ab387 # v3.28.16
+       uses: github/codeql-action/analyze@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
         with:
           category: "/language:${{matrix.language}}"
@@ -212,6 +212,21 @@ jobs:
         run: |
           poetry run pytest -n auto --cov=./prowler/providers/m365 --cov-report=xml:m365_coverage.xml tests/providers/m365

+      # Test IaC
+      - name: IaC - Check if any file has changed
+        id: iac-changed-files
+        uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c # v46.0.5
+        with:
+          files: |
+            ./prowler/providers/iac/**
+            ./tests/providers/iac/**
+            .poetry.lock
+
+      - name: IaC - Test
+        if: steps.iac-changed-files.outputs.any_changed == 'true'
+        run: |
+          poetry run pytest -n auto --cov=./prowler/providers/iac --cov-report=xml:iac_coverage.xml tests/providers/iac
+
       # Common Tests
       - name: Lib - Test
         if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
@@ -226,7 +241,7 @@ jobs:
       # Codecov
       - name: Upload coverage reports to Codecov
         if: steps.are-non-ignored-files-changed.outputs.any_changed == 'true'
-       uses: codecov/codecov-action@ad3126e916f78f00edff4ed0317cf185271ccc2d # v5.4.2
+       uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24 # v5.4.3
         env:
           CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
         with:
@@ -38,7 +38,7 @@ jobs:
           pip install boto3

       - name: Configure AWS Credentials -- DEV
-       uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 # v4.1.0
+       uses: aws-actions/configure-aws-credentials@b47578312673ae6fa5b5096b330d9fbac3d116df # v4.2.1
         with:
           aws-region: ${{ env.AWS_REGION_DEV }}
           role-to-assume: ${{ secrets.DEV_IAM_ROLE_ARN }}
@@ -81,7 +81,7 @@ jobs:
       - name: Build and push container image (latest)
         # Comment the following line for testing
         if: github.event_name == 'push'
-       uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
+       uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
         with:
           context: ${{ env.WORKING_DIRECTORY }}
           build-args: |
@@ -96,7 +96,7 @@ jobs:

       - name: Build and push container image (release)
         if: github.event_name == 'release'
-       uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
+       uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
         with:
           context: ${{ env.WORKING_DIRECTORY }}
           build-args: |
@@ -48,12 +48,12 @@ jobs:

       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-       uses: github/codeql-action/init@28deaeda66b76a05916b6923827895f2b14ab387 # v3.28.16
+       uses: github/codeql-action/init@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
         with:
           languages: ${{ matrix.language }}
           config-file: ./.github/codeql/ui-codeql-config.yml

       - name: Perform CodeQL Analysis
-       uses: github/codeql-action/analyze@28deaeda66b76a05916b6923827895f2b14ab387 # v3.28.16
+       uses: github/codeql-action/analyze@ff0a06e83cb2de871e5a09832bc6a81e7276941f # v3.28.18
         with:
           category: "/language:${{matrix.language}}"
@@ -50,7 +50,7 @@ jobs:
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
       - name: Build Container
-       uses: docker/build-push-action@14487ce63c7a62a4a324b0bfb37086795e31c6c1 # v6.16.0
+       uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
         with:
           context: ${{ env.UI_WORKING_DIR }}
           # Always build using `prod` target
@@ -115,7 +115,7 @@ repos:
       - id: safety
         name: safety
         description: "Safety is a tool that checks your installed dependencies for known security vulnerabilities"
-       entry: bash -c 'safety check --ignore 70612,66963,74429'
+       entry: bash -c 'safety check --ignore 70612,66963,74429,76352,76353'
         language: system

       - id: vulture
@@ -3,7 +3,7 @@
 <img align="center" src="https://github.com/prowler-cloud/prowler/blob/master/docs/img/prowler-logo-white.png#gh-dark-mode-only" width="50%" height="50%">
 </p>
 <p align="center">
-<b><i>Prowler Open Source</b> is as dynamic and adaptable as the environment it secures. It is trusted by the industry leaders to uphold the highest standards in security.
+<b><i>Prowler</b> is the Open Cloud Security platform trusted by thousands to automate security and compliance in any cloud environment. With hundreds of ready-to-use checks and compliance frameworks, Prowler delivers real-time, customizable monitoring and seamless integrations, making cloud security simple, scalable, and cost-effective for organizations of any size.
 </p>
 <p align="center">
 <b>Learn more at <a href="https://prowler.com">prowler.com</i></b>
@@ -86,21 +86,27 @@ prowler dashboard

 | Provider | Checks | Services | [Compliance Frameworks](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/compliance/) | [Categories](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/misc/#categories) |
 |---|---|---|---|---|
-| AWS | 564 | 82 | 34 | 10 |
-| GCP | 79 | 13 | 7 | 3 |
-| Azure | 140 | 18 | 8 | 3 |
-| Kubernetes | 83 | 7 | 4 | 7 |
-| GitHub | 3 | 2 | 1 | 0 |
-| M365 | 44 | 2 | 2 | 0 |
+| AWS | 567 | 82 | 36 | 10 |
+| GCP | 79 | 13 | 10 | 3 |
+| Azure | 142 | 18 | 10 | 3 |
+| Kubernetes | 83 | 7 | 5 | 7 |
+| GitHub | 16 | 2 | 1 | 0 |
+| M365 | 69 | 7 | 3 | 2 |
 | NHN (Unofficial) | 6 | 2 | 1 | 0 |

+> [!Note]
+> The numbers in the table are updated periodically.
+
+> [!Tip]
+> For the most accurate and up-to-date information about checks, services, frameworks, and categories, visit [**Prowler Hub**](https://hub.prowler.com).
+
 > [!Note]
 > Use the following commands to list Prowler's available checks, services, compliance frameworks, and categories: `prowler <provider> --list-checks`, `prowler <provider> --list-services`, `prowler <provider> --list-compliance` and `prowler <provider> --list-categories`.

 # 💻 Installation

 ## Prowler App

 Installing Prowler App
 Prowler App offers flexible installation methods tailored to various environments:

 > For detailed instructions on using Prowler App, refer to the [Prowler App Usage Guide](https://docs.prowler.com/projects/prowler-open-source/en/latest/tutorials/prowler-app/).
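The `--list-*` commands named in the README note above can be run directly to reproduce the table's counts for any one provider. A small sketch (`aws` chosen arbitrarily):

```bash
# Enumerate what Prowler ships for one provider, per the README's commands.
provider=aws
prowler "$provider" --list-checks
prowler "$provider" --list-services
prowler "$provider" --list-compliance
prowler "$provider" --list-categories
```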
@@ -1,168 +0,0 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.pyc
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
/_data/

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
*.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
.idea/

# VSCode
.vscode/
@@ -1,91 +0,0 @@
repos:
  ## GENERAL
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: check-merge-conflict
      - id: check-yaml
        args: ["--unsafe"]
      - id: check-json
      - id: end-of-file-fixer
      - id: trailing-whitespace
      - id: no-commit-to-branch
      - id: pretty-format-json
        args: ["--autofix", "--no-sort-keys", "--no-ensure-ascii"]
        exclude: 'src/backend/api/fixtures/dev/.*\.json$'

  ## TOML
  - repo: https://github.com/macisamuele/language-formatters-pre-commit-hooks
    rev: v2.13.0
    hooks:
      - id: pretty-format-toml
        args: [--autofix]
        files: pyproject.toml

  ## BASH
  - repo: https://github.com/koalaman/shellcheck-precommit
    rev: v0.10.0
    hooks:
      - id: shellcheck
        exclude: contrib
  ## PYTHON
  - repo: https://github.com/astral-sh/ruff-pre-commit
    # Ruff version.
    rev: v0.5.0
    hooks:
      # Run the linter.
      - id: ruff
        args: [ --fix ]
      # Run the formatter.
      - id: ruff-format

  - repo: https://github.com/python-poetry/poetry
    rev: 1.8.0
    hooks:
      - id: poetry-check
        args: ["--directory=src"]
      - id: poetry-lock
        args: ["--no-update", "--directory=src"]

  - repo: https://github.com/hadolint/hadolint
    rev: v2.13.0-beta
    hooks:
      - id: hadolint
        args: ["--ignore=DL3013", "Dockerfile"]

  - repo: local
    hooks:
      - id: pylint
        name: pylint
        entry: bash -c 'poetry run pylint --disable=W,C,R,E -j 0 -rn -sn src/'
        language: system
        files: '.*\.py'

      - id: trufflehog
        name: TruffleHog
        description: Detect secrets in your data.
        entry: bash -c 'trufflehog --no-update git file://. --only-verified --fail'
        # For running trufflehog in docker, use the following entry instead:
        # entry: bash -c 'docker run -v "$(pwd):/workdir" -i --rm trufflesecurity/trufflehog:latest git file:///workdir --only-verified --fail'
        language: system
        stages: ["commit", "push"]

      - id: bandit
        name: bandit
        description: "Bandit is a tool for finding common security issues in Python code"
        entry: bash -c 'poetry run bandit -q -lll -x '*_test.py,./contrib/,./.venv/' -r .'
        language: system
        files: '.*\.py'

      - id: safety
        name: safety
        description: "Safety is a tool that checks your installed dependencies for known security vulnerabilities"
        entry: bash -c 'poetry run safety check --ignore 70612,66963,74429'
        language: system

      - id: vulture
        name: vulture
        description: "Vulture finds unused code in Python programs."
        entry: bash -c 'poetry run vulture --exclude "contrib,.venv,tests,conftest.py" --min-confidence 100 .'
        language: system
        files: '.*\.py'
+66
-42
@@ -2,48 +2,77 @@
|
||||
|
||||
All notable changes to the **Prowler API** are documented in this file.
|
||||
|
||||
## [v1.9.0] (Prowler UNRELEASED)
|
||||
|
||||
### Added
|
||||
- SSO with SAML support [(#7822)](https://github.com/prowler-cloud/prowler/pull/7822)
|
||||
- Support GCP Service Account key [(#7824)](https://github.com/prowler-cloud/prowler/pull/7824)
|
||||
- `GET /compliance-overviews` endpoints to retrieve compliance metadata and specific requirements statuses [(#7877)](https://github.com/prowler-cloud/prowler/pull/7877)
|
||||
- Lighthouse configuration support [(#7848)](https://github.com/prowler-cloud/prowler/pull/7848)
|
||||
|
||||
### Changed
|
||||
- Reworked `GET /compliance-overviews` to return proper requirement metrics [(#7877)](https://github.com/prowler-cloud/prowler/pull/7877)
|
||||
|
||||
### Fixed
|
||||
- Scheduled scans are no longer deleted when their daily schedule run is disabled [(#8082)](https://github.com/prowler-cloud/prowler/pull/8082)
|
||||
|
||||
---
|
||||
|
||||
## [v1.8.5] (Prowler v5.7.5)
|
||||
|
||||
### Fixed
|
||||
- Normalize provider UID to ensure safe and unique export directory paths [(#8007)](https://github.com/prowler-cloud/prowler/pull/8007).
|
||||
- Blank resource types in `/metadata` endpoints [(#8027)](https://github.com/prowler-cloud/prowler/pull/8027)
|
||||
|
||||
---
|
||||
|
||||
## [v1.8.4] (Prowler v5.7.4)
|
||||
|
||||
### Removed
|
||||
- Reverted RLS transaction handling and DB custom backend [(#7994)](https://github.com/prowler-cloud/prowler/pull/7994)
|
||||
|
||||
---
|
||||
|
||||
## [v1.8.3] (Prowler v5.7.3)
|
||||
|
||||
### Added
|
||||
- Database backend to handle already closed connections [(#7935)](https://github.com/prowler-cloud/prowler/pull/7935).
|
||||
- Database backend to handle already closed connections [(#7935)](https://github.com/prowler-cloud/prowler/pull/7935)
|
||||
|
||||
### Changed
|
||||
- Renamed field encrypted_password to password for M365 provider [(#7784)](https://github.com/prowler-cloud/prowler/pull/7784)
|
||||
|
||||
### Fixed
|
||||
- Fixed transaction persistence with RLS operations [(#7916)](https://github.com/prowler-cloud/prowler/pull/7916).
|
||||
- Reverted the change `get_with_retry` to use the original `get` method for retrieving tasks [(#7932)](https://github.com/prowler-cloud/prowler/pull/7932).
|
||||
- Fixed the connection status verification before launching a scan [(#7831)](https://github.com/prowler-cloud/prowler/pull/7831)
|
||||
|
||||
- Transaction persistence with RLS operations [(#7916)](https://github.com/prowler-cloud/prowler/pull/7916)
|
||||
- Reverted the change `get_with_retry` to use the original `get` method for retrieving tasks [(#7932)](https://github.com/prowler-cloud/prowler/pull/7932)
|
||||
|
||||
---
|
||||
|
||||
## [v1.8.2] (Prowler v5.7.2)
|
||||
|
||||
### Fixed
|
||||
- Fixed task lookup to use task_kwargs instead of task_args for scan report resolution. [(#7830)](https://github.com/prowler-cloud/prowler/pull/7830)
|
||||
- Fixed Kubernetes UID validation to allow valid context names [(#7871)](https://github.com/prowler-cloud/prowler/pull/7871)
|
||||
- Fixed the connection status verification before launching a scan [(#7831)](https://github.com/prowler-cloud/prowler/pull/7831)
|
||||
- Fixed a race condition when creating background tasks [(#7876)](https://github.com/prowler-cloud/prowler/pull/7876).
|
||||
- Fixed an error when modifying or retrieving tenants due to missing user UUID in transaction context [(#7890)](https://github.com/prowler-cloud/prowler/pull/7890).
|
||||
- Task lookup to use task_kwargs instead of task_args for scan report resolution [(#7830)](https://github.com/prowler-cloud/prowler/pull/7830)
|
||||
- Kubernetes UID validation to allow valid context names [(#7871)](https://github.com/prowler-cloud/prowler/pull/7871)
|
||||
- Connection status verification before launching a scan [(#7831)](https://github.com/prowler-cloud/prowler/pull/7831)
|
||||
- Race condition when creating background tasks [(#7876)](https://github.com/prowler-cloud/prowler/pull/7876)
|
||||
- Error when modifying or retrieving tenants due to missing user UUID in transaction context [(#7890)](https://github.com/prowler-cloud/prowler/pull/7890)
|
||||
|
||||
---
|
||||
|
||||
## [v1.8.1] (Prowler v5.7.1)
|
||||
|
||||
### Fixed
|
||||
- Added database index to improve performance on finding lookup [(#7800)](https://github.com/prowler-cloud/prowler/pull/7800).
|
||||
- Added database index to improve performance on finding lookup [(#7800)](https://github.com/prowler-cloud/prowler/pull/7800)
|
||||
|
||||
---
|
||||
|
||||
## [v1.8.0] (Prowler v5.7.0)
|
||||
|
||||
### Added
|
||||
- Added huge improvements to `/findings/metadata` and resource related filters for findings [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690).
|
||||
- Added improvements to `/overviews` endpoints [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690).
|
||||
- Added new queue to perform backfill background tasks [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690).
|
||||
- Added new endpoints to retrieve latest findings and metadata [(#7743)](https://github.com/prowler-cloud/prowler/pull/7743).
|
||||
- Added export support for Prowler ThreatScore in M365 [(7783)](https://github.com/prowler-cloud/prowler/pull/7783)
|
||||
- Huge improvements to `/findings/metadata` and resource related filters for findings [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690)
|
||||
- Improvements to `/overviews` endpoints [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690)
|
||||
- Queue to perform backfill background tasks [(#7690)](https://github.com/prowler-cloud/prowler/pull/7690)
|
||||
- New endpoints to retrieve latest findings and metadata [(#7743)](https://github.com/prowler-cloud/prowler/pull/7743)
|
||||
- Export support for Prowler ThreatScore in M365 [(7783)](https://github.com/prowler-cloud/prowler/pull/7783)
|
||||
|
||||
---
|
||||
|
||||
@@ -51,9 +80,9 @@ All notable changes to the **Prowler API** are documented in this file.
|
||||
|
||||
### Added
|
||||
|
||||
- Added M365 as a new provider [(#7563)](https://github.com/prowler-cloud/prowler/pull/7563).
|
||||
- Added a `compliance/` folder and ZIP‐export functionality for all compliance reports.[(#7653)](https://github.com/prowler-cloud/prowler/pull/7653).
|
||||
- Added a new API endpoint to fetch and download any specific compliance file by name [(#7653)](https://github.com/prowler-cloud/prowler/pull/7653).
|
||||
- M365 as a new provider [(#7563)](https://github.com/prowler-cloud/prowler/pull/7563)
|
||||
- `compliance/` folder and ZIP‐export functionality for all compliance reports [(#7653)](https://github.com/prowler-cloud/prowler/pull/7653)
|
||||
- API endpoint to fetch and download any specific compliance file by name [(#7653)](https://github.com/prowler-cloud/prowler/pull/7653)
|
||||
|
||||
---
|
||||
|
||||
@@ -61,47 +90,42 @@ All notable changes to the **Prowler API** are documented in this file.
|
||||
|
||||
### Added
|
||||
|
||||
- Support for developing new integrations [(#7167)](https://github.com/prowler-cloud/prowler/pull/7167).
|
||||
- HTTP Security Headers [(#7289)](https://github.com/prowler-cloud/prowler/pull/7289).
|
||||
- New endpoint to get the compliance overviews metadata [(#7333)](https://github.com/prowler-cloud/prowler/pull/7333).
|
||||
- Support for muted findings [(#7378)](https://github.com/prowler-cloud/prowler/pull/7378).
|
||||
- Added missing fields to API findings and resources [(#7318)](https://github.com/prowler-cloud/prowler/pull/7318).
|
||||
- Support for developing new integrations [(#7167)](https://github.com/prowler-cloud/prowler/pull/7167)
|
||||
- HTTP Security Headers [(#7289)](https://github.com/prowler-cloud/prowler/pull/7289)
|
||||
- New endpoint to get the compliance overviews metadata [(#7333)](https://github.com/prowler-cloud/prowler/pull/7333)
|
||||
- Support for muted findings [(#7378)](https://github.com/prowler-cloud/prowler/pull/7378)
|
||||
- Missing fields to API findings and resources [(#7318)](https://github.com/prowler-cloud/prowler/pull/7318)
|
||||
|
||||
---
|
||||
|
||||
## [v1.5.4] (Prowler v5.4.4)
|
||||
|
||||
### Fixed
|
||||
- Fixed a bug with periodic tasks when trying to delete a provider ([#7466])(https://github.com/prowler-cloud/prowler/pull/7466).
|
||||
- Bug with periodic tasks when trying to delete a provider [(#7466)](https://github.com/prowler-cloud/prowler/pull/7466)
|
||||
|
||||
---
|
||||
|
||||
## [v1.5.3] (Prowler v5.4.3)
|
||||
|
||||
### Fixed
|
||||
- Duplicated scheduled scans handling [(#7401)](https://github.com/prowler-cloud/prowler/pull/7401)
- Environment variable to configure the deletion task batch size [(#7423)](https://github.com/prowler-cloud/prowler/pull/7423)

---

## [v1.5.2] (Prowler v5.4.2)

### Changed

- Refactored deletion logic and implemented retry mechanism for deletion tasks [(#7349)](https://github.com/prowler-cloud/prowler/pull/7349)

---

## [v1.5.1] (Prowler v5.4.1)

### Fixed

- Handle response in case local files are missing [(#7183)](https://github.com/prowler-cloud/prowler/pull/7183)
- Race condition when deleting export files after the S3 upload [(#7172)](https://github.com/prowler-cloud/prowler/pull/7172)
- Handle exception when a provider has no secret in test connection [(#7283)](https://github.com/prowler-cloud/prowler/pull/7283)

---

@@ -109,20 +133,20 @@ All notable changes to the **Prowler API** are documented in this file.

### Added

- Social login integration with Google and GitHub [(#6906)](https://github.com/prowler-cloud/prowler/pull/6906)
- API scan report system: all scans launched from the API now generate a compressed file with the report in OCSF, CSV and HTML formats [(#6878)](https://github.com/prowler-cloud/prowler/pull/6878) — example below
- Configurable Sentry integration [(#6874)](https://github.com/prowler-cloud/prowler/pull/6874)

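Fetching the generated archive for a finished scan could then look like this sketch; the `/scans/<scan-id>/report` route is an assumption, since the entry above only states that the OCSF, CSV and HTML outputs are bundled:

```bash
# Hypothetical route; -L follows a possible redirect to the stored artifact.
curl -s -L -o scan_report.zip \
  "http://localhost:8000/api/v1/scans/<scan-id>/report" \
  -H "Authorization: Bearer $PROWLER_API_TOKEN"
```
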
### Changed

- Optimized `GET /findings` endpoint to improve response time and size [(#7019)](https://github.com/prowler-cloud/prowler/pull/7019)

---

## [v1.4.0] (Prowler v5.3.0)

### Changed

- Daily scheduled scan instances are now created beforehand with `SCHEDULED` state [(#6700)](https://github.com/prowler-cloud/prowler/pull/6700)
- Findings endpoints now require at least one date filter [(#6800)](https://github.com/prowler-cloud/prowler/pull/6800) — example below
- Findings metadata endpoint received a performance improvement [(#6863)](https://github.com/prowler-cloud/prowler/pull/6863)
- Increased the allowed length of the provider UID for Kubernetes providers [(#6869)](https://github.com/prowler-cloud/prowler/pull/6869)
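
Since findings queries must now be bounded by a date, a minimal request looks roughly like the sketch below; the `filter[inserted_at]` key is an assumption in line with the API's JSON:API filter style:

```bash
# -g stops curl from glob-expanding the [] in the filter key.
curl -s -g \
  "http://localhost:8000/api/v1/findings?filter[inserted_at]=2025-01-15" \
  -H "Authorization: Bearer $PROWLER_API_TOKEN"
```
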
---

@@ -6,7 +6,19 @@ ARG POWERSHELL_VERSION=7.5.0
ENV POWERSHELL_VERSION=${POWERSHELL_VERSION}

# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y --no-install-recommends \
    wget \
    libicu72 \
    gcc \
    g++ \
    make \
    libxml2-dev \
    libxmlsec1-dev \
    libxmlsec1-openssl \
    pkg-config \
    libtool \
    libxslt1-dev \
    python3-dev \
    && rm -rf /var/lib/apt/lists/*

# Install PowerShell
@@ -37,18 +49,21 @@ COPY pyproject.toml ./
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir poetry

ENV PATH="/home/prowler/.local/bin:$PATH"

# Add `--no-root` to avoid installing the current project as a package
RUN poetry install --no-root && \
    rm -rf ~/.cache/pip

RUN poetry run python "$(poetry env info --path)/src/prowler/prowler/providers/m365/lib/powershell/m365_powershell.py"

# Prevents known compatibility error between lxml and libxml2/libxmlsec versions.
# See: https://github.com/xmlsec/python-xmlsec/issues/320
RUN poetry run pip install --force-reinstall --no-binary lxml lxml

COPY src/backend/ ./backend/
COPY docker-entrypoint.sh ./docker-entrypoint.sh

WORKDIR /home/prowler/backend

# Development image
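
The forced source rebuild of lxml above can be sanity-checked once the image is built; a minimal sketch, assuming the image is tagged `prowler-api` as in the compose file below and that no default entrypoint gets in the way (hence `--entrypoint ""`):

```bash
# Hypothetical tag; prove lxml and xmlsec import side by side against
# the same system libxml2, i.e. without the known binding mismatch.
docker run --rm --entrypoint "" prowler-api \
  poetry run python -c "import lxml.etree, xmlsec; print(lxml.etree.LIBXML_COMPILED_VERSION)"
```
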
@@ -1,125 +0,0 @@
services:
  api:
    build:
      dockerfile: Dockerfile
    image: prowler-api
    env_file:
      - path: ./.env
        required: false
    ports:
      - "${DJANGO_PORT:-8000}:${DJANGO_PORT:-8000}"
    profiles:
      - prod
    depends_on:
      postgres:
        condition: service_healthy
      valkey:
        condition: service_healthy
    entrypoint:
      - "../docker-entrypoint.sh"
      - "prod"

  api-dev:
    build:
      dockerfile: Dockerfile
      target: dev
    image: prowler-api-dev
    environment:
      - DJANGO_SETTINGS_MODULE=config.django.devel
      - DJANGO_LOGGING_FORMATTER=human_readable
    env_file:
      - path: ./.env
        required: false
    ports:
      - "${DJANGO_PORT:-8080}:${DJANGO_PORT:-8080}"
    volumes:
      - "./src/backend:/home/prowler/backend"
      - "./pyproject.toml:/home/prowler/pyproject.toml"
    profiles:
      - dev
    depends_on:
      postgres:
        condition: service_healthy
      valkey:
        condition: service_healthy
    entrypoint:
      - "../docker-entrypoint.sh"
      - "dev"

  postgres:
    image: postgres:16.3-alpine
    ports:
      - "${POSTGRES_PORT:-5432}:${POSTGRES_PORT:-5432}"
    hostname: "postgres-db"
    volumes:
      - ./_data/postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=${POSTGRES_ADMIN_USER:-prowler}
      - POSTGRES_PASSWORD=${POSTGRES_ADMIN_PASSWORD:-S3cret}
      - POSTGRES_DB=${POSTGRES_DB:-prowler_db}
    env_file:
      - path: ./.env
        required: false
    healthcheck:
      test: ["CMD-SHELL", "sh -c 'pg_isready -U ${POSTGRES_ADMIN_USER:-prowler} -d ${POSTGRES_DB:-prowler_db}'"]
      interval: 5s
      timeout: 5s
      retries: 5

  valkey:
    image: valkey/valkey:7-alpine3.19
    ports:
      - "${VALKEY_PORT:-6379}:6379"
    hostname: "valkey"
    volumes:
      - ./_data/valkey:/data
    env_file:
      - path: ./.env
        required: false
    healthcheck:
      test: ["CMD-SHELL", "sh -c 'valkey-cli ping'"]
      interval: 10s
      timeout: 5s
      retries: 3

  worker:
    build:
      dockerfile: Dockerfile
    image: prowler-worker
    environment:
      - DJANGO_SETTINGS_MODULE=${DJANGO_SETTINGS_MODULE:-config.django.production}
    env_file:
      - path: ./.env
        required: false
    profiles:
      - dev
      - prod
    depends_on:
      valkey:
        condition: service_healthy
      postgres:
        condition: service_healthy
    entrypoint:
      - "../docker-entrypoint.sh"
      - "worker"

  worker-beat:
    build:
      dockerfile: Dockerfile
    image: prowler-worker
    environment:
      - DJANGO_SETTINGS_MODULE=${DJANGO_SETTINGS_MODULE:-config.django.production}
    env_file:
      - path: ./.env
        required: false
    profiles:
      - dev
      - prod
    depends_on:
      valkey:
        condition: service_healthy
      postgres:
        condition: service_healthy
    entrypoint:
      - "../docker-entrypoint.sh"
      - "beat"
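
With the profiles above, bringing up the stack (before this file's removal) was one command per environment; postgres and valkey declare no profile, so they start either way:

```bash
# Development: api-dev on ${DJANGO_PORT:-8080} plus both workers.
docker compose --profile dev up -d

# Production: the prod entrypoint of the api image on ${DJANGO_PORT:-8000}.
docker compose --profile prod up -d
```
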
@@ -3,6 +3,10 @@

apply_migrations() {
  echo "Applying database migrations..."

  # Fix inconsistent migration history after adding the sites app
  poetry run python manage.py check_and_fix_socialaccount_sites_migration --database admin

  poetry run python manage.py migrate --database admin
}

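For a one-off run outside the entrypoint, the same sequence can be replayed by hand; both management commands and the `admin` database alias are taken verbatim from the function above, while the working directory comes from the Dockerfile's `WORKDIR`:

```bash
cd /home/prowler/backend
poetry run python manage.py check_and_fix_socialaccount_sites_migration --database admin
poetry run python manage.py migrate --database admin
```
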
@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.1 and should not be changed by hand.

[[package]]
name = "about-time"
@@ -880,6 +880,7 @@ description = "Foreign Function Interface for Python calling C code."
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
markers = "platform_python_implementation != \"PyPy\""
files = [
    {file = "cffi-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14"},
    {file = "cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67"},
@@ -949,7 +950,6 @@ files = [
    {file = "cffi-1.17.1-cp39-cp39-win_amd64.whl", hash = "sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662"},
    {file = "cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824"},
]
markers = {dev = "platform_python_implementation != \"PyPy\""}

[package.dependencies]
pycparser = "*"
@@ -1448,6 +1448,18 @@ files = [
graph = ["objgraph (>=1.7.2)"]
profile = ["gprof2dot (>=2022.7.29)"]

[[package]]
name = "distro"
version = "1.9.0"
description = "Distro - an OS platform information API"
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
    {file = "distro-1.9.0-py3-none-any.whl", hash = "sha256:7bffd925d65168f85027d8da9af6bddab658135b840670a223589bc0c8ef02b2"},
    {file = "distro-1.9.0.tar.gz", hash = "sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed"},
]

[[package]]
name = "dj-rest-auth"
version = "7.0.1"
@@ -1469,14 +1481,14 @@ with-social = ["django-allauth[socialaccount] (>=64.0.0)"]

[[package]]
name = "django"
version = "5.1.10"
description = "A high-level Python web framework that encourages rapid development and clean, pragmatic design."
optional = false
python-versions = ">=3.10"
groups = ["main", "dev"]
files = [
    {file = "django-5.1.10-py3-none-any.whl", hash = "sha256:19c9b771e9cf4de91101861aadd2daaa159bcf10698ca909c5755c88e70ccb84"},
    {file = "django-5.1.10.tar.gz", hash = "sha256:73e5d191421d177803dbd5495d94bc7d06d156df9561f4eea9e11b4994c07137"},
]

[package.dependencies]
@@ -1490,19 +1502,20 @@ bcrypt = ["bcrypt"]

[[package]]
name = "django-allauth"
version = "65.8.0"
description = "Integrated set of Django applications addressing authentication, registration, account management as well as 3rd party (social) account authentication."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "django_allauth-65.8.0.tar.gz", hash = "sha256:9da589d99d412740629333a01865a90c95c97e0fae0cde789aa45a8fda90e83b"},
]

[package.dependencies]
asgiref = ">=3.8.1"
Django = ">=4.2.16"
pyjwt = {version = ">=1.7", extras = ["crypto"], optional = true, markers = "extra == \"socialaccount\""}
python3-saml = {version = ">=1.15.0,<2.0.0", optional = true, markers = "extra == \"saml\""}
requests = {version = ">=2.0.0", optional = true, markers = "extra == \"socialaccount\""}
requests-oauthlib = {version = ">=0.3.0", optional = true, markers = "extra == \"socialaccount\""}

@@ -2470,6 +2483,93 @@ MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]

[[package]]
name = "jiter"
version = "0.10.0"
description = "Fast iterable JSON parser."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "jiter-0.10.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:cd2fb72b02478f06a900a5782de2ef47e0396b3e1f7d5aba30daeb1fce66f303"},
|
||||
{file = "jiter-0.10.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:32bb468e3af278f095d3fa5b90314728a6916d89ba3d0ffb726dd9bf7367285e"},
|
||||
{file = "jiter-0.10.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa8b3e0068c26ddedc7abc6fac37da2d0af16b921e288a5a613f4b86f050354f"},
|
||||
{file = "jiter-0.10.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:286299b74cc49e25cd42eea19b72aa82c515d2f2ee12d11392c56d8701f52224"},
|
||||
{file = "jiter-0.10.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6ed5649ceeaeffc28d87fb012d25a4cd356dcd53eff5acff1f0466b831dda2a7"},
|
||||
{file = "jiter-0.10.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b2ab0051160cb758a70716448908ef14ad476c3774bd03ddce075f3c1f90a3d6"},
|
||||
{file = "jiter-0.10.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:03997d2f37f6b67d2f5c475da4412be584e1cec273c1cfc03d642c46db43f8cf"},
|
||||
{file = "jiter-0.10.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c404a99352d839fed80d6afd6c1d66071f3bacaaa5c4268983fc10f769112e90"},
|
||||
{file = "jiter-0.10.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:66e989410b6666d3ddb27a74c7e50d0829704ede652fd4c858e91f8d64b403d0"},
|
||||
{file = "jiter-0.10.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:b532d3af9ef4f6374609a3bcb5e05a1951d3bf6190dc6b176fdb277c9bbf15ee"},
|
||||
{file = "jiter-0.10.0-cp310-cp310-win32.whl", hash = "sha256:da9be20b333970e28b72edc4dff63d4fec3398e05770fb3205f7fb460eb48dd4"},
|
||||
{file = "jiter-0.10.0-cp310-cp310-win_amd64.whl", hash = "sha256:f59e533afed0c5b0ac3eba20d2548c4a550336d8282ee69eb07b37ea526ee4e5"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:3bebe0c558e19902c96e99217e0b8e8b17d570906e72ed8a87170bc290b1e978"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:558cc7e44fd8e507a236bee6a02fa17199ba752874400a0ca6cd6e2196cdb7dc"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4d613e4b379a07d7c8453c5712ce7014e86c6ac93d990a0b8e7377e18505e98d"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f62cf8ba0618eda841b9bf61797f21c5ebd15a7a1e19daab76e4e4b498d515b2"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:919d139cdfa8ae8945112398511cb7fca58a77382617d279556b344867a37e61"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:13ddbc6ae311175a3b03bd8994881bc4635c923754932918e18da841632349db"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c440ea003ad10927a30521a9062ce10b5479592e8a70da27f21eeb457b4a9c5"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dc347c87944983481e138dea467c0551080c86b9d21de6ea9306efb12ca8f606"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:13252b58c1f4d8c5b63ab103c03d909e8e1e7842d302473f482915d95fefd605"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7d1bbf3c465de4a24ab12fb7766a0003f6f9bce48b8b6a886158c4d569452dc5"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-win32.whl", hash = "sha256:db16e4848b7e826edca4ccdd5b145939758dadf0dc06e7007ad0e9cfb5928ae7"},
|
||||
{file = "jiter-0.10.0-cp311-cp311-win_amd64.whl", hash = "sha256:9c9c1d5f10e18909e993f9641f12fe1c77b3e9b533ee94ffa970acc14ded3812"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:1e274728e4a5345a6dde2d343c8da018b9d4bd4350f5a472fa91f66fda44911b"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:7202ae396446c988cb2a5feb33a543ab2165b786ac97f53b59aafb803fef0744"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23ba7722d6748b6920ed02a8f1726fb4b33e0fd2f3f621816a8b486c66410ab2"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:371eab43c0a288537d30e1f0b193bc4eca90439fc08a022dd83e5e07500ed026"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6c675736059020365cebc845a820214765162728b51ab1e03a1b7b3abb70f74c"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0c5867d40ab716e4684858e4887489685968a47e3ba222e44cde6e4a2154f959"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:395bb9a26111b60141757d874d27fdea01b17e8fac958b91c20128ba8f4acc8a"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6842184aed5cdb07e0c7e20e5bdcfafe33515ee1741a6835353bb45fe5d1bd95"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:62755d1bcea9876770d4df713d82606c8c1a3dca88ff39046b85a048566d56ea"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:533efbce2cacec78d5ba73a41756beff8431dfa1694b6346ce7af3a12c42202b"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-win32.whl", hash = "sha256:8be921f0cadd245e981b964dfbcd6fd4bc4e254cdc069490416dd7a2632ecc01"},
|
||||
{file = "jiter-0.10.0-cp312-cp312-win_amd64.whl", hash = "sha256:a7c7d785ae9dda68c2678532a5a1581347e9c15362ae9f6e68f3fdbfb64f2e49"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:e0588107ec8e11b6f5ef0e0d656fb2803ac6cf94a96b2b9fc675c0e3ab5e8644"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:cafc4628b616dc32530c20ee53d71589816cf385dd9449633e910d596b1f5c8a"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:520ef6d981172693786a49ff5b09eda72a42e539f14788124a07530f785c3ad6"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:554dedfd05937f8fc45d17ebdf298fe7e0c77458232bcb73d9fbbf4c6455f5b3"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5bc299da7789deacf95f64052d97f75c16d4fc8c4c214a22bf8d859a4288a1c2"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5161e201172de298a8a1baad95eb85db4fb90e902353b1f6a41d64ea64644e25"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2e2227db6ba93cb3e2bf67c87e594adde0609f146344e8207e8730364db27041"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:15acb267ea5e2c64515574b06a8bf393fbfee6a50eb1673614aa45f4613c0cca"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:901b92f2e2947dc6dfcb52fd624453862e16665ea909a08398dde19c0731b7f4"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:d0cb9a125d5a3ec971a094a845eadde2db0de85b33c9f13eb94a0c63d463879e"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-win32.whl", hash = "sha256:48a403277ad1ee208fb930bdf91745e4d2d6e47253eedc96e2559d1e6527006d"},
|
||||
{file = "jiter-0.10.0-cp313-cp313-win_amd64.whl", hash = "sha256:75f9eb72ecb640619c29bf714e78c9c46c9c4eaafd644bf78577ede459f330d4"},
|
||||
{file = "jiter-0.10.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:28ed2a4c05a1f32ef0e1d24c2611330219fed727dae01789f4a335617634b1ca"},
|
||||
{file = "jiter-0.10.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14a4c418b1ec86a195f1ca69da8b23e8926c752b685af665ce30777233dfe070"},
|
||||
{file = "jiter-0.10.0-cp313-cp313t-win_amd64.whl", hash = "sha256:d7bfed2fe1fe0e4dda6ef682cee888ba444b21e7a6553e03252e4feb6cf0adca"},
|
||||
{file = "jiter-0.10.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:5e9251a5e83fab8d87799d3e1a46cb4b7f2919b895c6f4483629ed2446f66522"},
|
||||
{file = "jiter-0.10.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:023aa0204126fe5b87ccbcd75c8a0d0261b9abdbbf46d55e7ae9f8e22424eeb8"},
|
||||
{file = "jiter-0.10.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c189c4f1779c05f75fc17c0c1267594ed918996a231593a21a5ca5438445216"},
|
||||
{file = "jiter-0.10.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:15720084d90d1098ca0229352607cd68256c76991f6b374af96f36920eae13c4"},
|
||||
{file = "jiter-0.10.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e4f2fb68e5f1cfee30e2b2a09549a00683e0fde4c6a2ab88c94072fc33cb7426"},
|
||||
{file = "jiter-0.10.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ce541693355fc6da424c08b7edf39a2895f58d6ea17d92cc2b168d20907dee12"},
|
||||
{file = "jiter-0.10.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:31c50c40272e189d50006ad5c73883caabb73d4e9748a688b216e85a9a9ca3b9"},
|
||||
{file = "jiter-0.10.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:fa3402a2ff9815960e0372a47b75c76979d74402448509ccd49a275fa983ef8a"},
|
||||
{file = "jiter-0.10.0-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:1956f934dca32d7bb647ea21d06d93ca40868b505c228556d3373cbd255ce853"},
|
||||
{file = "jiter-0.10.0-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:fcedb049bdfc555e261d6f65a6abe1d5ad68825b7202ccb9692636c70fcced86"},
|
||||
{file = "jiter-0.10.0-cp314-cp314-win32.whl", hash = "sha256:ac509f7eccca54b2a29daeb516fb95b6f0bd0d0d8084efaf8ed5dfc7b9f0b357"},
|
||||
{file = "jiter-0.10.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5ed975b83a2b8639356151cef5c0d597c68376fc4922b45d0eb384ac058cfa00"},
|
||||
{file = "jiter-0.10.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3aa96f2abba33dc77f79b4cf791840230375f9534e5fac927ccceb58c5e604a5"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:bd6292a43c0fc09ce7c154ec0fa646a536b877d1e8f2f96c19707f65355b5a4d"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:39de429dcaeb6808d75ffe9effefe96a4903c6a4b376b2f6d08d77c1aaee2f18"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:52ce124f13a7a616fad3bb723f2bfb537d78239d1f7f219566dc52b6f2a9e48d"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:166f3606f11920f9a1746b2eea84fa2c0a5d50fd313c38bdea4edc072000b0af"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:28dcecbb4ba402916034fc14eba7709f250c4d24b0c43fc94d187ee0580af181"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:86c5aa6910f9bebcc7bc4f8bc461aff68504388b43bfe5e5c0bd21efa33b52f4"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ceeb52d242b315d7f1f74b441b6a167f78cea801ad7c11c36da77ff2d42e8a28"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ff76d8887c8c8ee1e772274fcf8cc1071c2c58590d13e33bd12d02dc9a560397"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:a9be4d0fa2b79f7222a88aa488bd89e2ae0a0a5b189462a12def6ece2faa45f1"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9ab7fd8738094139b6c1ab1822d6f2000ebe41515c537235fd45dabe13ec9324"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-win32.whl", hash = "sha256:5f51e048540dd27f204ff4a87f5d79294ea0aa3aa552aca34934588cf27023cf"},
|
||||
{file = "jiter-0.10.0-cp39-cp39-win_amd64.whl", hash = "sha256:1b28302349dc65703a9e4ead16f163b1c339efffbe1049c30a44b001a2a4fff9"},
|
||||
{file = "jiter-0.10.0.tar.gz", hash = "sha256:07a7142c38aacc85194391108dc91b5b57093c978a9932bd86a36862759d9500"},
]

[[package]]
name = "jmespath"
version = "1.0.1"
@@ -2582,6 +2682,155 @@ websocket-client = ">=0.32.0,<0.40.0 || >0.40.0,<0.41.dev0 || >=0.43.dev0"
[package.extras]
adal = ["adal (>=1.0.2)"]

[[package]]
name = "lxml"
version = "5.4.0"
description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
{file = "lxml-5.4.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e7bc6df34d42322c5289e37e9971d6ed114e3776b45fa879f734bded9d1fea9c"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6854f8bd8a1536f8a1d9a3655e6354faa6406621cf857dc27b681b69860645c7"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:696ea9e87442467819ac22394ca36cb3d01848dad1be6fac3fb612d3bd5a12cf"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6ef80aeac414f33c24b3815ecd560cee272786c3adfa5f31316d8b349bfade28"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b9c2754cef6963f3408ab381ea55f47dabc6f78f4b8ebb0f0b25cf1ac1f7609"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7a62cc23d754bb449d63ff35334acc9f5c02e6dae830d78dab4dd12b78a524f4"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f82125bc7203c5ae8633a7d5d20bcfdff0ba33e436e4ab0abc026a53a8960b7"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:b67319b4aef1a6c56576ff544b67a2a6fbd7eaee485b241cabf53115e8908b8f"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_28_ppc64le.whl", hash = "sha256:a8ef956fce64c8551221f395ba21d0724fed6b9b6242ca4f2f7beb4ce2f41997"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_28_s390x.whl", hash = "sha256:0a01ce7d8479dce84fc03324e3b0c9c90b1ece9a9bb6a1b6c9025e7e4520e78c"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:91505d3ddebf268bb1588eb0f63821f738d20e1e7f05d3c647a5ca900288760b"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:a3bcdde35d82ff385f4ede021df801b5c4a5bcdfb61ea87caabcebfc4945dc1b"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:aea7c06667b987787c7d1f5e1dfcd70419b711cdb47d6b4bb4ad4b76777a0563"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:a7fb111eef4d05909b82152721a59c1b14d0f365e2be4c742a473c5d7372f4f5"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:43d549b876ce64aa18b2328faff70f5877f8c6dede415f80a2f799d31644d776"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-win32.whl", hash = "sha256:75133890e40d229d6c5837b0312abbe5bac1c342452cf0e12523477cd3aa21e7"},
|
||||
{file = "lxml-5.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:de5b4e1088523e2b6f730d0509a9a813355b7f5659d70eb4f319c76beea2e250"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:98a3912194c079ef37e716ed228ae0dcb960992100461b704aea4e93af6b0bb9"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0ea0252b51d296a75f6118ed0d8696888e7403408ad42345d7dfd0d1e93309a7"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b92b69441d1bd39f4940f9eadfa417a25862242ca2c396b406f9272ef09cdcaa"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:20e16c08254b9b6466526bc1828d9370ee6c0d60a4b64836bc3ac2917d1e16df"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7605c1c32c3d6e8c990dd28a0970a3cbbf1429d5b92279e37fda05fb0c92190e"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ecf4c4b83f1ab3d5a7ace10bafcb6f11df6156857a3c418244cef41ca9fa3e44"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0cef4feae82709eed352cd7e97ae062ef6ae9c7b5dbe3663f104cd2c0e8d94ba"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:df53330a3bff250f10472ce96a9af28628ff1f4efc51ccba351a8820bca2a8ba"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_28_ppc64le.whl", hash = "sha256:aefe1a7cb852fa61150fcb21a8c8fcea7b58c4cb11fbe59c97a0a4b31cae3c8c"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_28_s390x.whl", hash = "sha256:ef5a7178fcc73b7d8c07229e89f8eb45b2908a9238eb90dcfc46571ccf0383b8"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:d2ed1b3cb9ff1c10e6e8b00941bb2e5bb568b307bfc6b17dffbbe8be5eecba86"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:72ac9762a9f8ce74c9eed4a4e74306f2f18613a6b71fa065495a67ac227b3056"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:f5cb182f6396706dc6cc1896dd02b1c889d644c081b0cdec38747573db88a7d7"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:3a3178b4873df8ef9457a4875703488eb1622632a9cee6d76464b60e90adbfcd"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:e094ec83694b59d263802ed03a8384594fcce477ce484b0cbcd0008a211ca751"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-win32.whl", hash = "sha256:4329422de653cdb2b72afa39b0aa04252fca9071550044904b2e7036d9d97fe4"},
|
||||
{file = "lxml-5.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:fd3be6481ef54b8cfd0e1e953323b7aa9d9789b94842d0e5b142ef4bb7999539"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:b5aff6f3e818e6bdbbb38e5967520f174b18f539c2b9de867b1e7fde6f8d95a4"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:942a5d73f739ad7c452bf739a62a0f83e2578afd6b8e5406308731f4ce78b16d"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:460508a4b07364d6abf53acaa0a90b6d370fafde5693ef37602566613a9b0779"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:529024ab3a505fed78fe3cc5ddc079464e709f6c892733e3f5842007cec8ac6e"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7ca56ebc2c474e8f3d5761debfd9283b8b18c76c4fc0967b74aeafba1f5647f9"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a81e1196f0a5b4167a8dafe3a66aa67c4addac1b22dc47947abd5d5c7a3f24b5"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:00b8686694423ddae324cf614e1b9659c2edb754de617703c3d29ff568448df5"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:c5681160758d3f6ac5b4fea370495c48aac0989d6a0f01bb9a72ad8ef5ab75c4"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:2dc191e60425ad70e75a68c9fd90ab284df64d9cd410ba8d2b641c0c45bc006e"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:67f779374c6b9753ae0a0195a892a1c234ce8416e4448fe1e9f34746482070a7"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:79d5bfa9c1b455336f52343130b2067164040604e41f6dc4d8313867ed540079"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3d3c30ba1c9b48c68489dc1829a6eede9873f52edca1dda900066542528d6b20"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:1af80c6316ae68aded77e91cd9d80648f7dd40406cef73df841aa3c36f6907c8"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:4d885698f5019abe0de3d352caf9466d5de2baded00a06ef3f1216c1a58ae78f"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:aea53d51859b6c64e7c51d522c03cc2c48b9b5d6172126854cc7f01aa11f52bc"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-win32.whl", hash = "sha256:d90b729fd2732df28130c064aac9bb8aff14ba20baa4aee7bd0795ff1187545f"},
|
||||
{file = "lxml-5.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:1dc4ca99e89c335a7ed47d38964abcb36c5910790f9bd106f2a8fa2ee0b909d2"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:773e27b62920199c6197130632c18fb7ead3257fce1ffb7d286912e56ddb79e0"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ce9c671845de9699904b1e9df95acfe8dfc183f2310f163cdaa91a3535af95de"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9454b8d8200ec99a224df8854786262b1bd6461f4280064c807303c642c05e76"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cccd007d5c95279e529c146d095f1d39ac05139de26c098166c4beb9374b0f4d"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0fce1294a0497edb034cb416ad3e77ecc89b313cff7adbee5334e4dc0d11f422"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:24974f774f3a78ac12b95e3a20ef0931795ff04dbb16db81a90c37f589819551"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:497cab4d8254c2a90bf988f162ace2ddbfdd806fce3bda3f581b9d24c852e03c"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:e794f698ae4c5084414efea0f5cc9f4ac562ec02d66e1484ff822ef97c2cadff"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_28_ppc64le.whl", hash = "sha256:2c62891b1ea3094bb12097822b3d44b93fc6c325f2043c4d2736a8ff09e65f60"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_28_s390x.whl", hash = "sha256:142accb3e4d1edae4b392bd165a9abdee8a3c432a2cca193df995bc3886249c8"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:1a42b3a19346e5601d1b8296ff6ef3d76038058f311902edd574461e9c036982"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:4291d3c409a17febf817259cb37bc62cb7eb398bcc95c1356947e2871911ae61"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:4f5322cf38fe0e21c2d73901abf68e6329dc02a4994e483adbcf92b568a09a54"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:0be91891bdb06ebe65122aa6bf3fc94489960cf7e03033c6f83a90863b23c58b"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:15a665ad90054a3d4f397bc40f73948d48e36e4c09f9bcffc7d90c87410e478a"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-win32.whl", hash = "sha256:d5663bc1b471c79f5c833cffbc9b87d7bf13f87e055a5c86c363ccd2348d7e82"},
|
||||
{file = "lxml-5.4.0-cp313-cp313-win_amd64.whl", hash = "sha256:bcb7a1096b4b6b24ce1ac24d4942ad98f983cd3810f9711bcd0293f43a9d8b9f"},
|
||||
{file = "lxml-5.4.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:7be701c24e7f843e6788353c055d806e8bd8466b52907bafe5d13ec6a6dbaecd"},
|
||||
{file = "lxml-5.4.0-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fb54f7c6bafaa808f27166569b1511fc42701a7713858dddc08afdde9746849e"},
|
||||
{file = "lxml-5.4.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:97dac543661e84a284502e0cf8a67b5c711b0ad5fb661d1bd505c02f8cf716d7"},
|
||||
{file = "lxml-5.4.0-cp36-cp36m-manylinux_2_28_x86_64.whl", hash = "sha256:c70e93fba207106cb16bf852e421c37bbded92acd5964390aad07cb50d60f5cf"},
|
||||
{file = "lxml-5.4.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:9c886b481aefdf818ad44846145f6eaf373a20d200b5ce1a5c8e1bc2d8745410"},
|
||||
{file = "lxml-5.4.0-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:fa0e294046de09acd6146be0ed6727d1f42ded4ce3ea1e9a19c11b6774eea27c"},
|
||||
{file = "lxml-5.4.0-cp36-cp36m-win32.whl", hash = "sha256:61c7bbf432f09ee44b1ccaa24896d21075e533cd01477966a5ff5a71d88b2f56"},
|
||||
{file = "lxml-5.4.0-cp36-cp36m-win_amd64.whl", hash = "sha256:7ce1a171ec325192c6a636b64c94418e71a1964f56d002cc28122fceff0b6121"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:795f61bcaf8770e1b37eec24edf9771b307df3af74d1d6f27d812e15a9ff3872"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:29f451a4b614a7b5b6c2e043d7b64a15bd8304d7e767055e8ab68387a8cacf4e"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:891f7f991a68d20c75cb13c5c9142b2a3f9eb161f1f12a9489c82172d1f133c0"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4aa412a82e460571fad592d0f93ce9935a20090029ba08eca05c614f99b0cc92"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:ac7ba71f9561cd7d7b55e1ea5511543c0282e2b6450f122672a2694621d63b7e"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:c5d32f5284012deaccd37da1e2cd42f081feaa76981f0eaa474351b68df813c5"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:ce31158630a6ac85bddd6b830cffd46085ff90498b397bd0a259f59d27a12188"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:31e63621e073e04697c1b2d23fcb89991790eef370ec37ce4d5d469f40924ed6"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-win32.whl", hash = "sha256:be2ba4c3c5b7900246a8f866580700ef0d538f2ca32535e991027bdaba944063"},
|
||||
{file = "lxml-5.4.0-cp37-cp37m-win_amd64.whl", hash = "sha256:09846782b1ef650b321484ad429217f5154da4d6e786636c38e434fa32e94e49"},
|
||||
{file = "lxml-5.4.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:eaf24066ad0b30917186420d51e2e3edf4b0e2ea68d8cd885b14dc8afdcf6556"},
|
||||
{file = "lxml-5.4.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2b31a3a77501d86d8ade128abb01082724c0dfd9524f542f2f07d693c9f1175f"},
|
||||
{file = "lxml-5.4.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0e108352e203c7afd0eb91d782582f00a0b16a948d204d4dec8565024fafeea5"},
|
||||
{file = "lxml-5.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a11a96c3b3f7551c8a8109aa65e8594e551d5a84c76bf950da33d0fb6dfafab7"},
|
||||
{file = "lxml-5.4.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:ca755eebf0d9e62d6cb013f1261e510317a41bf4650f22963474a663fdfe02aa"},
|
||||
{file = "lxml-5.4.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:4cd915c0fb1bed47b5e6d6edd424ac25856252f09120e3e8ba5154b6b921860e"},
|
||||
{file = "lxml-5.4.0-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:226046e386556a45ebc787871d6d2467b32c37ce76c2680f5c608e25823ffc84"},
|
||||
{file = "lxml-5.4.0-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:b108134b9667bcd71236c5a02aad5ddd073e372fb5d48ea74853e009fe38acb6"},
|
||||
{file = "lxml-5.4.0-cp38-cp38-win32.whl", hash = "sha256:1320091caa89805df7dcb9e908add28166113dcd062590668514dbd510798c88"},
|
||||
{file = "lxml-5.4.0-cp38-cp38-win_amd64.whl", hash = "sha256:073eb6dcdf1f587d9b88c8c93528b57eccda40209cf9be549d469b942b41d70b"},
|
||||
{file = "lxml-5.4.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:bda3ea44c39eb74e2488297bb39d47186ed01342f0022c8ff407c250ac3f498e"},
|
||||
{file = "lxml-5.4.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9ceaf423b50ecfc23ca00b7f50b64baba85fb3fb91c53e2c9d00bc86150c7e40"},
|
||||
{file = "lxml-5.4.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:664cdc733bc87449fe781dbb1f309090966c11cc0c0cd7b84af956a02a8a4729"},
|
||||
{file = "lxml-5.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:67ed8a40665b84d161bae3181aa2763beea3747f748bca5874b4af4d75998f87"},
|
||||
{file = "lxml-5.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9b4a3bd174cc9cdaa1afbc4620c049038b441d6ba07629d89a83b408e54c35cd"},
|
||||
{file = "lxml-5.4.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:b0989737a3ba6cf2a16efb857fb0dfa20bc5c542737fddb6d893fde48be45433"},
|
||||
{file = "lxml-5.4.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:dc0af80267edc68adf85f2a5d9be1cdf062f973db6790c1d065e45025fa26140"},
|
||||
{file = "lxml-5.4.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:639978bccb04c42677db43c79bdaa23785dc7f9b83bfd87570da8207872f1ce5"},
|
||||
{file = "lxml-5.4.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:5a99d86351f9c15e4a901fc56404b485b1462039db59288b203f8c629260a142"},
|
||||
{file = "lxml-5.4.0-cp39-cp39-win32.whl", hash = "sha256:3e6d5557989cdc3ebb5302bbdc42b439733a841891762ded9514e74f60319ad6"},
|
||||
{file = "lxml-5.4.0-cp39-cp39-win_amd64.whl", hash = "sha256:a8c9b7f16b63e65bbba889acb436a1034a82d34fa09752d754f88d708eca80e1"},
|
||||
{file = "lxml-5.4.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:1b717b00a71b901b4667226bba282dd462c42ccf618ade12f9ba3674e1fabc55"},
|
||||
{file = "lxml-5.4.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:27a9ded0f0b52098ff89dd4c418325b987feed2ea5cc86e8860b0f844285d740"},
|
||||
{file = "lxml-5.4.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b7ce10634113651d6f383aa712a194179dcd496bd8c41e191cec2099fa09de5"},
|
||||
{file = "lxml-5.4.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:53370c26500d22b45182f98847243efb518d268374a9570409d2e2276232fd37"},
|
||||
{file = "lxml-5.4.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c6364038c519dffdbe07e3cf42e6a7f8b90c275d4d1617a69bb59734c1a2d571"},
|
||||
{file = "lxml-5.4.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:b12cb6527599808ada9eb2cd6e0e7d3d8f13fe7bbb01c6311255a15ded4c7ab4"},
|
||||
{file = "lxml-5.4.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:5f11a1526ebd0dee85e7b1e39e39a0cc0d9d03fb527f56d8457f6df48a10dc0c"},
|
||||
{file = "lxml-5.4.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:48b4afaf38bf79109bb060d9016fad014a9a48fb244e11b94f74ae366a64d252"},
|
||||
{file = "lxml-5.4.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de6f6bb8a7840c7bf216fb83eec4e2f79f7325eca8858167b68708b929ab2172"},
|
||||
{file = "lxml-5.4.0-pp37-pypy37_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5cca36a194a4eb4e2ed6be36923d3cffd03dcdf477515dea687185506583d4c9"},
|
||||
{file = "lxml-5.4.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b7c86884ad23d61b025989d99bfdd92a7351de956e01c61307cb87035960bcb1"},
|
||||
{file = "lxml-5.4.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:53d9469ab5460402c19553b56c3648746774ecd0681b1b27ea74d5d8a3ef5590"},
|
||||
{file = "lxml-5.4.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:56dbdbab0551532bb26c19c914848d7251d73edb507c3079d6805fa8bba5b706"},
|
||||
{file = "lxml-5.4.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:14479c2ad1cb08b62bb941ba8e0e05938524ee3c3114644df905d2331c76cd57"},
|
||||
{file = "lxml-5.4.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:32697d2ea994e0db19c1df9e40275ffe84973e4232b5c274f47e7c1ec9763cdd"},
|
||||
{file = "lxml-5.4.0-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:24f6df5f24fc3385f622c0c9d63fe34604893bc1a5bdbb2dbf5870f85f9a404a"},
|
||||
{file = "lxml-5.4.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:151d6c40bc9db11e960619d2bf2ec5829f0aaffb10b41dcf6ad2ce0f3c0b2325"},
|
||||
{file = "lxml-5.4.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:4025bf2884ac4370a3243c5aa8d66d3cb9e15d3ddd0af2d796eccc5f0244390e"},
|
||||
{file = "lxml-5.4.0-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:9459e6892f59ecea2e2584ee1058f5d8f629446eab52ba2305ae13a32a059530"},
|
||||
{file = "lxml-5.4.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:47fb24cc0f052f0576ea382872b3fc7e1f7e3028e53299ea751839418ade92a6"},
|
||||
{file = "lxml-5.4.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:50441c9de951a153c698b9b99992e806b71c1f36d14b154592580ff4a9d0d877"},
|
||||
{file = "lxml-5.4.0-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:ab339536aa798b1e17750733663d272038bf28069761d5be57cb4a9b0137b4f8"},
|
||||
{file = "lxml-5.4.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:9776af1aad5a4b4a1317242ee2bea51da54b2a7b7b48674be736d463c999f37d"},
|
||||
{file = "lxml-5.4.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:63e7968ff83da2eb6fdda967483a7a023aa497d85ad8f05c3ad9b1f2e8c84987"},
|
||||
{file = "lxml-5.4.0.tar.gz", hash = "sha256:d12832e1dbea4be280b22fd0ea7c9b87f0d8fc51ba06e92dc62d52f804f78ebd"},
]

[package.extras]
cssselect = ["cssselect (>=0.7)"]
html-clean = ["lxml_html_clean"]
html5 = ["html5lib"]
htmlsoup = ["BeautifulSoup4"]
source = ["Cython (>=3.0.11,<3.1.0)"]

[[package]]
name = "markdown-it-py"
version = "3.0.0"
@@ -3221,6 +3470,33 @@ rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]

[[package]]
name = "openai"
version = "1.82.0"
description = "The official Python library for the openai API"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "openai-1.82.0-py3-none-any.whl", hash = "sha256:8c40647fea1816516cb3de5189775b30b5f4812777e40b8768f361f232b61b30"},
    {file = "openai-1.82.0.tar.gz", hash = "sha256:b0a009b9a58662d598d07e91e4219ab4b1e3d8ba2db3f173896a92b9b874d1a7"},
]

[package.dependencies]
anyio = ">=3.5.0,<5"
distro = ">=1.7.0,<2"
httpx = ">=0.23.0,<1"
jiter = ">=0.4.0,<1"
pydantic = ">=1.9.0,<3"
sniffio = "*"
tqdm = ">4"
typing-extensions = ">=4.11,<5"

[package.extras]
datalib = ["numpy (>=1)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)"]
realtime = ["websockets (>=13,<16)"]
voice-helpers = ["numpy (>=2.0.2)", "sounddevice (>=0.5.1)"]

[[package]]
name = "opentelemetry-api"
version = "1.32.1"
@@ -3597,7 +3873,7 @@ files = [

[[package]]
name = "prowler"
version = "5.6.0"
description = "Prowler is an Open Source security tool to perform AWS, GCP and Azure security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness. It contains hundreds of controls covering CIS, NIST 800, NIST CSF, CISA, RBI, FedRAMP, PCI-DSS, GDPR, HIPAA, FFIEC, SOC2, GXP, AWS Well-Architected Framework Security Pillar, AWS Foundational Technical Review (FTR), ENS (Spanish National Security Scheme) and your custom security frameworks."
optional = false
python-versions = ">3.9.1,<3.13"
@@ -3645,7 +3921,6 @@ numpy = "2.0.2"
pandas = "2.2.3"
py-ocsf-models = "0.3.1"
pydantic = "1.10.21"
pygithub = "2.5.0"
python-dateutil = ">=2.9.0.post0,<3.0.0"
pytz = "2025.1"
schema = "0.7.7"
@@ -3657,8 +3932,8 @@ tzlocal = "5.3.1"
[package.source]
type = "git"
url = "https://github.com/prowler-cloud/prowler.git"
reference = "master"
resolved_reference = "9828824b737b8deda61f4a6646b54e0ad45033b9"

[[package]]
name = "psutil"
@@ -3835,11 +4110,11 @@ description = "C parser in Python"
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
markers = "platform_python_implementation != \"PyPy\""
files = [
    {file = "pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc"},
    {file = "pycparser-2.22.tar.gz", hash = "sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6"},
]
markers = {dev = "platform_python_implementation != \"PyPy\""}

[[package]]
name = "pycurl"
@@ -3950,26 +4225,6 @@ typing-extensions = ">=4.2.0"
dotenv = ["python-dotenv (>=0.10.4)"]
email = ["email-validator (>=1.0.3)"]

[[package]]
name = "pygithub"
version = "2.5.0"
description = "Use the full Github API v3"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
    {file = "PyGithub-2.5.0-py3-none-any.whl", hash = "sha256:b0b635999a658ab8e08720bdd3318893ff20e2275f6446fcf35bf3f44f2c0fd2"},
    {file = "pygithub-2.5.0.tar.gz", hash = "sha256:e1613ac508a9be710920d26eb18b1905ebd9926aa49398e88151c1b526aad3cf"},
]

[package.dependencies]
Deprecated = "*"
pyjwt = {version = ">=2.4.0", extras = ["crypto"]}
pynacl = ">=1.4.0"
requests = ">=2.14.0"
typing-extensions = ">=4.0.0"
urllib3 = ">=1.26.0"

[[package]]
name = "pygments"
version = "2.19.1"
@@ -4034,33 +4289,6 @@ tomlkit = ">=0.10.1"
spelling = ["pyenchant (>=3.2,<4.0)"]
testutils = ["gitpython (>3)"]

[[package]]
name = "pynacl"
version = "1.5.0"
description = "Python binding to the Networking and Cryptography (NaCl) library"
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
    {file = "PyNaCl-1.5.0-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:401002a4aaa07c9414132aaed7f6836ff98f59277a234704ff66878c2ee4a0d1"},
    {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:52cb72a79269189d4e0dc537556f4740f7f0a9ec41c1322598799b0bdad4ef92"},
    {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a36d4a9dda1f19ce6e03c9a784a2921a4b726b02e1c736600ca9c22029474394"},
    {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:0c84947a22519e013607c9be43706dd42513f9e6ae5d39d3613ca1e142fba44d"},
    {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06b8f6fa7f5de8d5d2f7573fe8c863c051225a27b61e6860fd047b1775807858"},
    {file = "PyNaCl-1.5.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:a422368fc821589c228f4c49438a368831cb5bbc0eab5ebe1d7fac9dded6567b"},
    {file = "PyNaCl-1.5.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:61f642bf2378713e2c2e1de73444a3778e5f0a38be6fee0fe532fe30060282ff"},
    {file = "PyNaCl-1.5.0-cp36-abi3-win32.whl", hash = "sha256:e46dae94e34b085175f8abb3b0aaa7da40767865ac82c928eeb9e57e1ea8a543"},
    {file = "PyNaCl-1.5.0-cp36-abi3-win_amd64.whl", hash = "sha256:20f42270d27e1b6a29f54032090b972d97f0a1b0948cc52392041ef7831fee93"},
    {file = "PyNaCl-1.5.0.tar.gz", hash = "sha256:8ac7448f09ab85811607bdd21ec2464495ac8b7c66d146bf545b0f08fb9220ba"},
]

[package.dependencies]
cffi = ">=1.4.1"

[package.extras]
docs = ["sphinx (>=1.6.5)", "sphinx-rtd-theme"]
tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"]

[[package]]
name = "pyparsing"
version = "3.2.3"
@@ -4284,6 +4512,27 @@ files = [
    {file = "python_memcached-1.62-py2.py3-none-any.whl", hash = "sha256:1bdd8d2393ff53e80cd5e9442d750e658e0b35c3eebb3211af137303e3b729d1"},
]

[[package]]
name = "python3-saml"
version = "1.16.0"
description = "Saml Python Toolkit. Add SAML support to your Python software using this library"
optional = false
python-versions = "*"
groups = ["main"]
files = [
    {file = "python3-saml-1.16.0.tar.gz", hash = "sha256:97c9669aecabc283c6e5fb4eb264f446b6e006f5267d01c9734f9d8bffdac133"},
    {file = "python3_saml-1.16.0-py2-none-any.whl", hash = "sha256:c49097863c278ff669a337a96c46dc1f25d16307b4bb2679d2d1733cc4f5176a"},
    {file = "python3_saml-1.16.0-py3-none-any.whl", hash = "sha256:20b97d11b04f01ee22e98f4a38242e2fea2e28fbc7fbc9bdd57cab5ac7fc2d0d"},
]

[package.dependencies]
isodate = ">=0.6.1"
lxml = ">=4.6.5,<4.7.0 || >4.7.0"
xmlsec = ">=1.3.9"

[package.extras]
test = ["coverage (>=4.5.2)", "flake8 (>=3.6.0,<=5.0.0)", "freezegun (>=0.3.11,<=1.1.0)", "pytest (>=4.6)"]

[[package]]
name = "pytz"
version = "2025.1"
@@ -4424,19 +4673,19 @@ typing-extensions = {version = ">=4.4.0", markers = "python_version < \"3.13\""}

[[package]]
name = "requests"
version = "2.32.4"
description = "Python HTTP for Humans."
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
    {file = "requests-2.32.4-py3-none-any.whl", hash = "sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c"},
    {file = "requests-2.32.4.tar.gz", hash = "sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422"},
]

[package.dependencies]
certifi = ">=2017.4.17"
charset_normalizer = ">=2,<4"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<3"

@@ -5103,7 +5352,7 @@ version = "4.67.1"
description = "Fast, Extensible Progress Meter"
optional = false
python-versions = ">=3.7"
groups = ["main", "dev"]
files = [
    {file = "tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2"},
    {file = "tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2"},
@@ -5394,6 +5643,42 @@ files = [
    {file = "xlsxwriter-3.2.3.tar.gz", hash = "sha256:ad6fd41bdcf1b885876b1f6b7087560aecc9ae5a9cc2ba97dcac7ab2e210d3d5"},
]

[[package]]
name = "xmlsec"
version = "1.3.15"
description = "Python bindings for the XML Security Library"
optional = false
python-versions = ">=3.5"
groups = ["main"]
files = [
    {file = "xmlsec-1.3.15-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:60209f82a254a1d6083397c4eeae131e7ac2f64bfddb97f2b0b240369f03c4df"},
    {file = "xmlsec-1.3.15-cp310-cp310-win32.whl", hash = "sha256:a62be0f8964bbec1efd2ca39b025c40da620a2ef9cb5440ff4ffa7e0c6906f70"},
    {file = "xmlsec-1.3.15-cp310-cp310-win_amd64.whl", hash = "sha256:685b92860bbf048e3b725bd5e9310bd4d3515f7eafcb2c284dda62078a1ce90c"},
    {file = "xmlsec-1.3.15-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c760230d4f77b7828857d076434e0810850eb2603775dc92fa9f760a98c2f694"},
    {file = "xmlsec-1.3.15-cp311-cp311-win32.whl", hash = "sha256:901458034b7476e1fd0881a85814e184d00eec2b5df33b1ceeb312681e8cb9e8"},
    {file = "xmlsec-1.3.15-cp311-cp311-win_amd64.whl", hash = "sha256:2ecbb65eea79a25769fbaa56c9e8bc4553aea63a9704795e962dfe06679b0191"},
    {file = "xmlsec-1.3.15-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0edff08e0442cdcc82bebf353ba4bcfd5a022f4b2751052ee1564afc5c78bef4"},
    {file = "xmlsec-1.3.15-cp312-cp312-win32.whl", hash = "sha256:e5c402e5633fd39f75fe124219d66d383a040ba04d0de54e024afeb7fe7d3e3a"},
    {file = "xmlsec-1.3.15-cp312-cp312-win_amd64.whl", hash = "sha256:0c47f2347e8dcc0a48648b9702af53179618c204414a8e36926a9f61214ebf0b"},
    {file = "xmlsec-1.3.15-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:6ac2154311d32a6571e22f224ed16356029e59bd5ca76edeb3922a809adfe89c"},
    {file = "xmlsec-1.3.15-cp313-cp313-win32.whl", hash = "sha256:5ed218129f89b0592926ad2be42c017bece469db9b7380dc41bc09b01ca26d5d"},
    {file = "xmlsec-1.3.15-cp313-cp313-win_amd64.whl", hash = "sha256:5fc29e69b064323317b3862751a3a8107670e0a17510ca4517bbdc1939a90b1a"},
    {file = "xmlsec-1.3.15-cp36-cp36m-win32.whl", hash = "sha256:d0404dd76097b1f6dcbeff404c46cf045442a8cf9500f60c46a26ae03130ab9c"},
    {file = "xmlsec-1.3.15-cp36-cp36m-win_amd64.whl", hash = "sha256:672bb43a12d6b8e2e4a392ef495ea731ded5acc1585f9358174295a6fb5df262"},
    {file = "xmlsec-1.3.15-cp37-cp37m-win32.whl", hash = "sha256:96e24b22e862f0c50840a5af23cb7df186e7a1547b311a67ebca5b1e43ea0d86"},
    {file = "xmlsec-1.3.15-cp37-cp37m-win_amd64.whl", hash = "sha256:bec066ce81a82a5a2b994b1e7be2af11715fd716a55754c645668acf9c5a64c0"},
    {file = "xmlsec-1.3.15-cp38-cp38-win32.whl", hash = "sha256:95e80981b2e0ea74a7040cbf66b40072f4424298d7b50c3e587a026a7dab34ad"},
    {file = "xmlsec-1.3.15-cp38-cp38-win_amd64.whl", hash = "sha256:c2a40f8549769ba5fdc223f0ae564d3b4d4ca52b6461d46bc508d3321267b2ad"},
    {file = "xmlsec-1.3.15-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a2d5692a683054dec769f4a1d6e8fade88ddcfc2cef89b20d0ecc1c75deb0dd6"},
    {file = "xmlsec-1.3.15-cp39-cp39-macosx_13_0_arm64.whl", hash = "sha256:f0115d3b4f156df2cfee8424d75dcb7f5ca2cb4870af18b713098830493d3cb0"},
    {file = "xmlsec-1.3.15-cp39-cp39-win32.whl", hash = "sha256:ffb32d3c5af289c8598d4f9215c9f8f6c208f1551e78f0180f525bc08c8a67d2"},
    {file = "xmlsec-1.3.15-cp39-cp39-win_amd64.whl", hash = "sha256:3211da05c11c7a0d2b913a7834bff59e649150f41127949b3322442bc3986b56"},
    {file = "xmlsec-1.3.15.tar.gz", hash = "sha256:baa856b83d0012e278e6f6cbec96ac8128de667ca9fa9a2eeb02c752e816f6d8"},
]

[package.dependencies]
lxml = ">=3.8"

[[package]]
name = "yarl"
|
||||
version = "1.20.0"
|
||||
@@ -5536,4 +5821,4 @@ type = ["pytest-mypy"]
|
||||
[metadata]
|
||||
lock-version = "2.1"
|
||||
python-versions = ">=3.11,<3.13"
|
||||
content-hash = "db1beb68c9757678759b79a515ff19a21b1201502c1e7c24f579ccc47aef8644"
|
||||
content-hash = "0750d4d8d4c0b020c87a5c6e3c459f1f5f445e6f1395f7e492adea9a901e2056"
|
||||
|
||||
+6 -5
@@ -7,8 +7,8 @@ authors = [{name = "Prowler Engineering", email = "engineering@prowler.com"}]
dependencies = [
    "celery[pytest] (>=5.4.0,<6.0.0)",
    "dj-rest-auth[with_social,jwt] (==7.0.1)",
-    "django==5.1.8",
-    "django-allauth==65.4.1",
+    "django==5.1.10",
+    "django-allauth[saml] (>=65.8.0,<66.0.0)",
    "django-celery-beat (>=2.7.0,<3.0.0)",
    "django-celery-results (>=2.5.1,<3.0.0)",
    "django-cors-headers==4.4.0",
@@ -23,11 +23,12 @@ dependencies = [
    "drf-spectacular==0.27.2",
    "drf-spectacular-jsonapi==0.5.1",
    "gunicorn==23.0.0",
-    "prowler @ git+https://github.com/prowler-cloud/prowler.git@v5.7",
+    "prowler @ git+https://github.com/prowler-cloud/prowler.git@master",
    "psycopg2-binary==2.9.9",
    "pytest-celery[redis] (>=1.0.1,<2.0.0)",
    "sentry-sdk[django] (>=2.20.0,<3.0.0)",
-    "uuid6==2024.7.10"
+    "uuid6==2024.7.10",
+    "openai (>=1.82.0,<2.0.0)"
]
description = "Prowler's API (Django/DRF)"
license = "Apache-2.0"
@@ -35,7 +36,7 @@ name = "prowler-api"
package-mode = false
# Needed for the SDK compatibility
requires-python = ">=3.11,<3.13"
-version = "1.8.3"
+version = "1.9.0"

[project.scripts]
celery = "src.backend.config.settings.celery"

+102 -26
@@ -3,7 +3,14 @@ from django.db import transaction

from api.db_router import MainRouter
from api.db_utils import rls_transaction
-from api.models import Membership, Role, Tenant, User, UserRoleRelationship
+from api.models import (
+    Membership,
+    Role,
+    SAMLConfiguration,
+    Tenant,
+    User,
+    UserRoleRelationship,
+)


class ProwlerSocialAccountAdapter(DefaultSocialAccountAdapter):
@@ -17,6 +24,8 @@ class ProwlerSocialAccountAdapter(DefaultSocialAccountAdapter):
    def pre_social_login(self, request, sociallogin):
        # Link existing accounts with the same email address
        email = sociallogin.account.extra_data.get("email")
+        if sociallogin.account.provider == "saml":
+            email = sociallogin.user.email
        if email:
            existing_user = self.get_user_by_email(email)
            if existing_user:
@@ -29,33 +38,100 @@ class ProwlerSocialAccountAdapter(DefaultSocialAccountAdapter):
        """
        with transaction.atomic(using=MainRouter.admin_db):
            user = super().save_user(request, sociallogin, form)
-            user.save(using=MainRouter.admin_db)
-            social_account_name = sociallogin.account.extra_data.get("name")
-            if social_account_name:
-                user.name = social_account_name
+            provider = sociallogin.account.provider
+            extra = sociallogin.account.extra_data

+            if provider == "saml":
+                # Handle SAML-specific logic
+                user.first_name = (
+                    extra.get("firstName", [""])[0] if extra.get("firstName") else ""
+                )
+                user.last_name = (
+                    extra.get("lastName", [""])[0] if extra.get("lastName") else ""
+                )
+                user.company_name = (
+                    extra.get("organization", [""])[0]
+                    if extra.get("organization")
+                    else ""
+                )
+                user.name = f"{user.first_name} {user.last_name}".strip()
+                if user.name == "":
+                    user.name = "N/A"
+                user.save(using=MainRouter.admin_db)

-            tenant = Tenant.objects.using(MainRouter.admin_db).create(
-                name=f"{user.email.split('@')[0]} default tenant"
-            )
-            with rls_transaction(str(tenant.id)):
-                Membership.objects.using(MainRouter.admin_db).create(
-                    user=user, tenant=tenant, role=Membership.RoleChoices.OWNER
+                email_domain = user.email.split("@")[-1]
+                tenant = (
+                    SAMLConfiguration.objects.using(MainRouter.admin_db)
+                    .get(email_domain=email_domain)
+                    .tenant
                )
-                role = Role.objects.using(MainRouter.admin_db).create(
-                    name="admin",
-                    tenant_id=tenant.id,
-                    manage_users=True,
-                    manage_account=True,
-                    manage_billing=True,
-                    manage_providers=True,
-                    manage_integrations=True,
-                    manage_scans=True,
-                    unlimited_visibility=True,
-                )
-                UserRoleRelationship.objects.using(MainRouter.admin_db).create(
-                    user=user,
-                    role=role,
-                    tenant_id=tenant.id,

+                with rls_transaction(str(tenant.id)):
+                    role_name = (
+                        extra.get("userType", ["saml_default_role"])[0].strip()
+                        if extra.get("userType")
+                        else "saml_default_role"
+                    )

+                    try:
+                        role = Role.objects.using(MainRouter.admin_db).get(
+                            name=role_name, tenant_id=tenant.id
+                        )
+                    except Role.DoesNotExist:
+                        role = Role.objects.using(MainRouter.admin_db).create(
+                            name=role_name,
+                            tenant_id=tenant.id,
+                            manage_users=False,
+                            manage_account=False,
+                            manage_billing=False,
+                            manage_providers=False,
+                            manage_integrations=False,
+                            manage_scans=False,
+                            unlimited_visibility=False,
+                        )

+                    Membership.objects.using(MainRouter.admin_db).create(
+                        user=user,
+                        tenant=tenant,
+                        role=Membership.RoleChoices.MEMBER,
+                    )

+                    UserRoleRelationship.objects.using(MainRouter.admin_db).create(
+                        user=user,
+                        role=role,
+                        tenant_id=tenant.id,
+                    )

+            else:
+                # Handle other providers (e.g., GitHub, Google)
+                user.save(using=MainRouter.admin_db)
+                social_account_name = extra.get("name")
+                if social_account_name:
+                    user.name = social_account_name
+                    user.save(using=MainRouter.admin_db)

+                tenant = Tenant.objects.using(MainRouter.admin_db).create(
+                    name=f"{user.email.split('@')[0]} default tenant"
+                )
+                with rls_transaction(str(tenant.id)):
+                    Membership.objects.using(MainRouter.admin_db).create(
+                        user=user, tenant=tenant, role=Membership.RoleChoices.OWNER
+                    )
+                    role = Role.objects.using(MainRouter.admin_db).create(
+                        name="admin",
+                        tenant_id=tenant.id,
+                        manage_users=True,
+                        manage_account=True,
+                        manage_billing=True,
+                        manage_providers=True,
+                        manage_integrations=True,
+                        manage_scans=True,
+                        unlimited_visibility=True,
+                    )
+                    UserRoleRelationship.objects.using(MainRouter.admin_db).create(
+                        user=user,
+                        role=role,
+                        tenant_id=tenant.id,
+                    )

            return user

@@ -47,9 +47,11 @@ class BaseViewSet(ModelViewSet):


class BaseRLSViewSet(BaseViewSet):
-    def initial(self, request, *args, **kwargs):
-        super().initial(request, *args, **kwargs)
+    def dispatch(self, request, *args, **kwargs):
+        with transaction.atomic():
+            return super().dispatch(request, *args, **kwargs)

+    def initial(self, request, *args, **kwargs):
        # Ideally, this logic would be in the `.setup()` method but DRF view sets don't call it
        # https://docs.djangoproject.com/en/5.1/ref/class-based-views/base/#django.views.generic.base.View.setup
        if request.auth is None:
@@ -59,19 +61,9 @@ class BaseRLSViewSet(BaseViewSet):
        if tenant_id is None:
            raise NotAuthenticated("Tenant ID is not present in token")

-        self.request.tenant_id = tenant_id

-        self._rls_cm = rls_transaction(tenant_id)
-        self._rls_cm.__enter__()

-    def finalize_response(self, request, response, *args, **kwargs):
-        response = super().finalize_response(request, response, *args, **kwargs)

-        if hasattr(self, "_rls_cm"):
-            self._rls_cm.__exit__(None, None, None)
-            del self._rls_cm

-        return response
+        with rls_transaction(tenant_id):
+            self.request.tenant_id = tenant_id
+            return super().initial(request, *args, **kwargs)

    def get_serializer_context(self):
        context = super().get_serializer_context()
@@ -117,8 +109,6 @@ class BaseTenantViewset(BaseViewSet):
            pass  # Tenant might not exist, handle gracefully

    def initial(self, request, *args, **kwargs):
-        super().initial(request, *args, **kwargs)

        if request.auth is None:
            raise NotAuthenticated

@@ -127,27 +117,19 @@ class BaseTenantViewset(BaseViewSet):
            raise NotAuthenticated("Tenant ID is not present in token")

        user_id = str(request.user.id)

-        self._rls_cm = rls_transaction(value=user_id, parameter=POSTGRES_USER_VAR)
-        self._rls_cm.__enter__()

-    def finalize_response(self, request, response, *args, **kwargs):
-        response = super().finalize_response(request, response, *args, **kwargs)

-        if hasattr(self, "_rls_cm"):
-            self._rls_cm.__exit__(None, None, None)
-            del self._rls_cm

-        return response
+        with rls_transaction(value=user_id, parameter=POSTGRES_USER_VAR):
+            return super().initial(request, *args, **kwargs)


class BaseUserViewset(BaseViewSet):
-    def initial(self, request, *args, **kwargs):
-        super().initial(request, *args, **kwargs)
+    def dispatch(self, request, *args, **kwargs):
+        with transaction.atomic():
+            return super().dispatch(request, *args, **kwargs)

+    def initial(self, request, *args, **kwargs):
        # TODO refactor after improving RLS on users
        if request.stream is not None and request.stream.method == "POST":
-            return
+            return super().initial(request, *args, **kwargs)
        if request.auth is None:
            raise NotAuthenticated

@@ -155,16 +137,6 @@ class BaseUserViewset(BaseViewSet):
        if tenant_id is None:
            raise NotAuthenticated("Tenant ID is not present in token")

-        self.request.tenant_id = tenant_id

-        self._rls_cm = rls_transaction(tenant_id)
-        self._rls_cm.__enter__()

-    def finalize_response(self, request, response, *args, **kwargs):
-        response = super().finalize_response(request, response, *args, **kwargs)

-        if hasattr(self, "_rls_cm"):
-            self._rls_cm.__exit__(None, None, None)
-            del self._rls_cm

-        return response
+        with rls_transaction(tenant_id):
+            self.request.tenant_id = tenant_id
+            return super().initial(request, *args, **kwargs)
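
# Editor's sketch (not part of the diff): the refactor above replaces manual
# context-manager bookkeeping (__enter__ in initial(), __exit__ in
# finalize_response()) with plain `with` blocks. A minimal model of the
# pattern, assuming rls_transaction sets a transaction-scoped Postgres
# setting via set_config():

from contextlib import contextmanager

from django.db import connection, transaction


@contextmanager
def rls_transaction_sketch(value: str, parameter: str = "api.tenant_id"):
    # set_config(..., is_local=TRUE) only lives until the enclosing
    # transaction ends. That is why dispatch() now wraps the whole request in
    # transaction.atomic(): the tenant setting applied in initial() must
    # survive into the handler that actually runs the queries.
    with transaction.atomic():
        with connection.cursor() as cursor:
            cursor.execute("SELECT set_config(%s, %s, TRUE)", [parameter, value])
        yield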

@@ -190,10 +190,16 @@ def generate_compliance_overview_template(prowler_compliance: dict):
            total_checks = len(requirement.Checks)
            checks_dict = {check: None for check in requirement.Checks}

+            req_status_val = "MANUAL" if total_checks == 0 else "PASS"

            # Build requirement dictionary
            requirement_dict = {
                "name": requirement.Name or requirement.Id,
                "description": requirement.Description,
+                "tactics": getattr(requirement, "Tactics", []),
+                "subtechniques": getattr(requirement, "SubTechniques", []),
+                "platforms": getattr(requirement, "Platforms", []),
+                "technique_url": getattr(requirement, "TechniqueURL", ""),
                "attributes": [
                    dict(attribute) for attribute in requirement.Attributes
                ],
@@ -204,20 +210,18 @@ def generate_compliance_overview_template(prowler_compliance: dict):
                    "manual": 0,
                    "total": total_checks,
                },
-                "status": "PASS",
+                "status": req_status_val,
            }

-            # Update requirements status
-            if total_checks == 0:
+            # Update requirements status counts for the framework
+            if req_status_val == "MANUAL":
                requirements_status["manual"] += 1
+            elif req_status_val == "PASS":
+                requirements_status["passed"] += 1

            # Add requirement to compliance requirements
            compliance_requirements[requirement.Id] = requirement_dict

-            # Calculate pending requirements
-            pending_requirements = total_requirements - requirements_status["manual"]
-            requirements_status["passed"] = pending_requirements

            # Build compliance dictionary
            compliance_dict = {
                "framework": compliance_data.Framework,

+124 -11
@@ -1,3 +1,4 @@
+import re
import secrets
import uuid
from contextlib import contextmanager
@@ -152,6 +153,28 @@ def delete_related_daily_task(provider_id: str):
    PeriodicTask.objects.filter(name=task_name).delete()


def create_objects_in_batches(
    tenant_id: str, model, objects: list, batch_size: int = 500
):
    """
    Bulk-create model instances in repeated, per-tenant RLS transactions.

    Each chunk executes in its own transaction, so no single transaction
    grows too large.

    Args:
        tenant_id (str): UUID string of the tenant under which to set RLS.
        model: Django model class whose `.objects.bulk_create()` will be called.
        objects (list): List of model instances (unsaved) to bulk-create.
        batch_size (int): Maximum number of objects per bulk_create call.
    """
    total = len(objects)
    for i in range(0, total, batch_size):
        chunk = objects[i : i + batch_size]
        with rls_transaction(value=tenant_id, parameter=POSTGRES_TENANT_VAR):
            model.objects.bulk_create(chunk, batch_size)
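
# Editor's sketch: hypothetical usage of the helper above; the model, field
# values, and tenant id are illustrative only.
#
#     findings = [Finding(tenant_id=tenant_id, ...) for _ in range(10_000)]
#     create_objects_in_batches(tenant_id, Finding, findings, batch_size=500)
#
# Each 500-row chunk commits in its own rls_transaction, so a failure midway
# leaves the earlier chunks committed instead of rolling back one huge
# transaction.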


# Postgres Enums


@@ -227,6 +250,72 @@ def register_enum(apps, schema_editor, enum_class): # noqa: F841
    register_adapter(enum_class, enum_adapter)


def _should_create_index_on_partition(
    partition_name: str, all_partitions: bool = False
) -> bool:
    """
    Determine if we should create an index on this partition.

    Args:
        partition_name: The name of the partition (e.g., "findings_2025_aug", "findings_default")
        all_partitions: If True, create on all partitions. If False, only current/future partitions.

    Returns:
        bool: True if index should be created on this partition, False otherwise.
    """
    if all_partitions:
        return True

    # Extract date from partition name if it follows the pattern
    # Partition names look like: findings_2025_aug, findings_2025_jul, etc.
    date_pattern = r"(\d{4})_([a-z]{3})$"
    match = re.search(date_pattern, partition_name)

    if not match:
        # If we can't parse the date, include it to be safe (e.g., default partition)
        return True

    try:
        year_str, month_abbr = match.groups()
        year = int(year_str)

        # Map month abbreviations to numbers
        month_map = {
            "jan": 1,
            "feb": 2,
            "mar": 3,
            "apr": 4,
            "may": 5,
            "jun": 6,
            "jul": 7,
            "aug": 8,
            "sep": 9,
            "oct": 10,
            "nov": 11,
            "dec": 12,
        }

        month = month_map.get(month_abbr.lower())
        if month is None:
            # Unknown month abbreviation, include it to be safe
            return True

        partition_date = datetime(year, month, 1, tzinfo=timezone.utc)

        # Get current month start
        now = datetime.now(timezone.utc)
        current_month_start = now.replace(
            day=1, hour=0, minute=0, second=0, microsecond=0
        )

        # Include current month and future partitions
        return partition_date >= current_month_start

    except (ValueError, TypeError):
        # If date parsing fails, include it to be safe
        return True
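
# Editor's sketch of how the gate above behaves (dates hypothetical, assuming
# "now" falls somewhere in 2025):
#
#     _should_create_index_on_partition("findings_2030_jan")        # True: future month
#     _should_create_index_on_partition("findings_2020_jan")        # False: past month
#     _should_create_index_on_partition("findings_default")         # True: unparseable name, kept to be safe
#     _should_create_index_on_partition("findings_2020_jan", True)  # True: all_partitions forces it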


def create_index_on_partitions(
    apps, # noqa: F841
    schema_editor,
@@ -235,16 +324,39 @@
    columns: str,
    method: str = "BTREE",
    where: str = "",
+    all_partitions: bool = True,
):
    """
-    Create an index on every existing partition of `parent_table`.
+    Create an index on existing partitions of `parent_table`.

    Args:
        parent_table: The name of the root table (e.g. "findings").
        index_name: A short name for the index (will be prefixed per-partition).
        columns: The parenthesized column list, e.g. "tenant_id, scan_id, status".
-        method: The index method—BTREE, GIN, etc. Defaults to BTREE.
-        where: Optional WHERE clause (without the leading "WHERE"), e.g. "status = 'FAIL'".
+        method: The index method—BTREE, GIN, etc. Defaults to BTREE.
+        where: Optional WHERE clause (without the leading "WHERE"), e.g. "status = 'FAIL'".
+        all_partitions: Whether to create indexes on all partitions or just current/future ones.
+                        Defaults to True; pass False to skip old partitions and avoid
+                        maintenance overhead where the index may not be needed.

    Examples:
        # Create index only on current and future partitions (recommended for new indexes)
        create_index_on_partitions(
            apps, schema_editor,
            parent_table="findings",
            index_name="new_performance_idx",
            columns="tenant_id, status, severity",
            all_partitions=False  # Skip historical partitions
        )

        # Create index on all partitions (use when migrating existing critical indexes)
        create_index_on_partitions(
            apps, schema_editor,
            parent_table="findings",
            index_name="critical_existing_idx",
            columns="tenant_id, scan_id",
            all_partitions=True
        )
    """
    with connection.cursor() as cursor:
        cursor.execute(
@@ -259,13 +371,14 @@

    where_sql = f" WHERE {where}" if where else ""
    for partition in partitions:
-        idx_name = f"{partition.replace('.', '_')}_{index_name}"
-        sql = (
-            f"CREATE INDEX CONCURRENTLY IF NOT EXISTS {idx_name} "
-            f"ON {partition} USING {method} ({columns})"
-            f"{where_sql};"
-        )
-        schema_editor.execute(sql)
+        if _should_create_index_on_partition(partition, all_partitions):
+            idx_name = f"{partition.replace('.', '_')}_{index_name}"
+            sql = (
+                f"CREATE INDEX CONCURRENTLY IF NOT EXISTS {idx_name} "
+                f"ON {partition} USING {method} ({columns})"
+                f"{where_sql};"
+            )
+            schema_editor.execute(sql)


def drop_index_on_partitions(
@@ -279,7 +392,7 @@

    Args:
        parent_table: The name of the root table (e.g. "findings").
-        index_name: The same short name used when creating them.
+        index_name: The same short name used when creating them.
    """
    with connection.cursor() as cursor:
        cursor.execute(

@@ -3,7 +3,7 @@ from rest_framework import status
from rest_framework.exceptions import APIException
from rest_framework_json_api.exceptions import exception_handler
from rest_framework_json_api.serializers import ValidationError
-from rest_framework_simplejwt.exceptions import TokenError, InvalidToken
+from rest_framework_simplejwt.exceptions import InvalidToken, TokenError


class ModelValidationError(ValidationError):
@@ -32,6 +32,31 @@ class InvitationTokenExpiredException(APIException):
    default_code = "token_expired"


# Task Management Exceptions (non-HTTP)
class TaskManagementError(Exception):
    """Base exception for task management errors."""

    def __init__(self, task=None):
        self.task = task
        super().__init__()


class TaskFailedException(TaskManagementError):
    """Raised when a task has failed."""


class TaskNotFoundException(TaskManagementError):
    """Raised when a task is not found."""


class TaskInProgressException(TaskManagementError):
    """Raised when a task is running but there's no related Task object to return."""

    def __init__(self, task_result=None):
        self.task_result = task_result
        super().__init__()
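
# Editor's sketch of the intended call pattern for these non-HTTP exceptions;
# the check_task_status helper named here is hypothetical:
#
#     try:
#         task = check_task_status(task_id)
#     except TaskInProgressException as exc:
#         ...  # exc.task_result carries the raw Celery result; respond 202
#     except TaskFailedException as exc:
#         ...  # exc.task is the failed Task row; surface its error state
#     except TaskNotFoundException:
#         ...  # respond 404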


def custom_exception_handler(exc, context):
    if isinstance(exc, django_validation_error):
        if hasattr(exc, "error_dict"):
@@ -39,7 +64,12 @@ def custom_exception_handler(exc, context):
        else:
            exc = ValidationError(detail=exc.messages[0], code=exc.code)
    elif isinstance(exc, (TokenError, InvalidToken)):
-        exc.detail["messages"] = [
-            message_item["message"] for message_item in exc.detail["messages"]
-        ]
+        if (
+            hasattr(exc, "detail")
+            and isinstance(exc.detail, dict)
+            and "messages" in exc.detail
+        ):
+            exc.detail["messages"] = [
+                message_item["message"] for message_item in exc.detail["messages"]
+            ]
    return exception_handler(exc, context)

@@ -22,7 +22,7 @@ from api.db_utils import (
    StatusEnumField,
)
from api.models import (
    ComplianceOverview,
+    ComplianceRequirementOverview,
    Finding,
    Integration,
    Invitation,
@@ -637,12 +637,11 @@ class RoleFilter(FilterSet):

class ComplianceOverviewFilter(FilterSet):
    inserted_at = DateFilter(field_name="inserted_at", lookup_expr="date")
    provider_type = ChoiceFilter(choices=Provider.ProviderChoices.choices)
    provider_type__in = ChoiceInFilter(choices=Provider.ProviderChoices.choices)
-    scan_id = UUIDFilter(field_name="scan__id")
+    scan_id = UUIDFilter(field_name="scan_id")
    region = CharFilter(field_name="region")

    class Meta:
-        model = ComplianceOverview
+        model = ComplianceRequirementOverview
        fields = {
            "inserted_at": ["date", "gte", "lte"],
            "compliance_id": ["exact", "icontains"],

+80
@@ -0,0 +1,80 @@
from django.contrib.sites.models import Site
from django.core.management.base import BaseCommand
from django.db import DEFAULT_DB_ALIAS, connection, connections, transaction
from django.db.migrations.recorder import MigrationRecorder


def table_exists(table_name):
    with connection.cursor() as cursor:
        cursor.execute(
            """
            SELECT EXISTS (
                SELECT 1 FROM information_schema.tables
                WHERE table_name = %s
            )
            """,
            [table_name],
        )
        return cursor.fetchone()[0]


class Command(BaseCommand):
    help = "Fix migration inconsistency between socialaccount and sites"

    def add_arguments(self, parser):
        parser.add_argument(
            "--database",
            default=DEFAULT_DB_ALIAS,
            help="Specifies the database to operate on.",
        )

    def handle(self, *args, **options):
        db = options["database"]
        connection = connections[db]
        recorder = MigrationRecorder(connection)

        applied = set(recorder.applied_migrations())

        has_social = ("socialaccount", "0001_initial") in applied

        with connection.cursor() as cursor:
            cursor.execute(
                """
                SELECT EXISTS (
                    SELECT FROM information_schema.tables
                    WHERE table_name = 'django_site'
                );
                """
            )
            site_table_exists = cursor.fetchone()[0]

        if has_social and not site_table_exists:
            self.stdout.write(
                f"Detected inconsistency in '{db}'. Creating 'django_site' table manually..."
            )

            with transaction.atomic(using=db):
                with connection.schema_editor() as schema_editor:
                    schema_editor.create_model(Site)

                recorder.record_applied("sites", "0001_initial")
                recorder.record_applied("sites", "0002_alter_domain_unique")

            self.stdout.write(
                "Fixed: 'django_site' table created and migrations registered."
            )

        # Ensure the relationship table also exists
        if not table_exists("socialaccount_socialapp_sites"):
            self.stdout.write(
                "Detected missing 'socialaccount_socialapp_sites' table. Creating manually..."
            )
            with connection.schema_editor() as schema_editor:
                from allauth.socialaccount.models import SocialApp

                schema_editor.create_model(
                    SocialApp._meta.get_field("sites").remote_field.through
                )
            self.stdout.write(
                "Fixed: 'socialaccount_socialapp_sites' table created."
            )
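
# Editor's note: this runs like any Django management command. The module
# name (and therefore the command name) is not visible in this diff excerpt,
# and the 'admin' database alias below is illustrative:
#
#     python manage.py <command_name> --database admin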

@@ -0,0 +1,14 @@
from django.db import migrations


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0025_findings_uid_index_parent"),
    ]

    operations = [
        migrations.RunSQL(
            "ALTER TYPE provider_secret_type ADD VALUE IF NOT EXISTS 'service_account';",
            reverse_sql=migrations.RunSQL.noop,
        ),
    ]
@@ -0,0 +1,124 @@
# Generated by Django 5.1.8 on 2025-05-21 11:37

import uuid

import django.db.models.deletion
from django.db import migrations, models

import api.db_utils
import api.rls
from api.rls import RowLevelSecurityConstraint


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0026_provider_secret_gcp_service_account"),
    ]

    operations = [
        migrations.CreateModel(
            name="ComplianceRequirementOverview",
            fields=[
                (
                    "id",
                    models.UUIDField(
                        default=uuid.uuid4,
                        editable=False,
                        primary_key=True,
                        serialize=False,
                    ),
                ),
                ("inserted_at", models.DateTimeField(auto_now_add=True)),
                ("compliance_id", models.TextField(blank=False)),
                ("framework", models.TextField(blank=False)),
                ("version", models.TextField(blank=True)),
                ("description", models.TextField(blank=True)),
                ("region", models.TextField(blank=False)),
                ("requirement_id", models.TextField(blank=False)),
                (
                    "requirement_status",
                    api.db_utils.StatusEnumField(
                        choices=[
                            ("FAIL", "Fail"),
                            ("PASS", "Pass"),
                            ("MANUAL", "Manual"),
                        ]
                    ),
                ),
                ("passed_checks", models.IntegerField(default=0)),
                ("failed_checks", models.IntegerField(default=0)),
                ("total_checks", models.IntegerField(default=0)),
                (
                    "scan",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="compliance_requirements_overviews",
                        related_query_name="compliance_requirements_overview",
                        to="api.scan",
                    ),
                ),
                (
                    "tenant",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
                    ),
                ),
            ],
            options={
                "db_table": "compliance_requirements_overviews",
                "abstract": False,
                "indexes": [
                    models.Index(
                        fields=["tenant_id", "scan_id"], name="cro_tenant_scan_idx"
                    ),
                    models.Index(
                        fields=["tenant_id", "scan_id", "compliance_id"],
                        name="cro_scan_comp_idx",
                    ),
                    models.Index(
                        fields=["tenant_id", "scan_id", "compliance_id", "region"],
                        name="cro_scan_comp_reg_idx",
                    ),
                    models.Index(
                        fields=[
                            "tenant_id",
                            "scan_id",
                            "compliance_id",
                            "requirement_id",
                        ],
                        name="cro_scan_comp_req_idx",
                    ),
                    models.Index(
                        fields=[
                            "tenant_id",
                            "scan_id",
                            "compliance_id",
                            "requirement_id",
                            "region",
                        ],
                        name="cro_scan_comp_req_reg_idx",
                    ),
                ],
                "constraints": [
                    models.UniqueConstraint(
                        fields=(
                            "tenant_id",
                            "scan_id",
                            "compliance_id",
                            "requirement_id",
                            "region",
                        ),
                        name="unique_tenant_compliance_requirement_overview",
                    )
                ],
            },
        ),
        migrations.AddConstraint(
            model_name="ComplianceRequirementOverview",
            constraint=RowLevelSecurityConstraint(
                "tenant_id",
                name="rls_on_compliancerequirementoverview",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ),
    ]
@@ -0,0 +1,29 @@
from functools import partial

from django.db import migrations

from api.db_utils import create_index_on_partitions, drop_index_on_partitions


class Migration(migrations.Migration):
    atomic = False

    dependencies = [
        ("api", "0027_compliance_requirement_overviews"),
    ]

    operations = [
        migrations.RunPython(
            partial(
                create_index_on_partitions,
                parent_table="findings",
                index_name="find_tenant_scan_check_idx",
                columns="tenant_id, scan_id, check_id",
            ),
            reverse_code=partial(
                drop_index_on_partitions,
                parent_table="findings",
                index_name="find_tenant_scan_check_idx",
            ),
        )
    ]
@@ -0,0 +1,17 @@
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0028_findings_check_index_partitions"),
    ]

    operations = [
        migrations.AddIndex(
            model_name="finding",
            index=models.Index(
                fields=["tenant_id", "scan_id", "check_id"],
                name="find_tenant_scan_check_idx",
            ),
        ),
    ]
@@ -0,0 +1,120 @@
# Generated by Django 5.1.8 on 2025-05-15 09:54

import uuid

import django.db.models.deletion
from django.db import migrations, models

import api.db_utils
import api.rls


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0029_findings_check_index_parent"),
    ]

    operations = [
        migrations.CreateModel(
            name="SAMLDomainIndex",
            fields=[
                (
                    "id",
                    models.BigAutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("email_domain", models.CharField(max_length=254, unique=True)),
                (
                    "tenant",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
                    ),
                ),
            ],
            options={
                "db_table": "saml_domain_index",
            },
        ),
        migrations.AddConstraint(
            model_name="samldomainindex",
            constraint=models.UniqueConstraint(
                fields=("email_domain", "tenant"),
                name="unique_resources_by_email_domain",
            ),
        ),
        migrations.AddConstraint(
            model_name="samldomainindex",
            constraint=api.rls.BaseSecurityConstraint(
                name="statements_on_samldomainindex",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ),
        migrations.CreateModel(
            name="SAMLConfiguration",
            fields=[
                (
                    "id",
                    models.UUIDField(
                        default=uuid.uuid4,
                        editable=False,
                        primary_key=True,
                        serialize=False,
                    ),
                ),
                (
                    "email_domain",
                    models.CharField(
                        help_text="Email domain used to identify the tenant, e.g. prowlerdemo.com",
                        max_length=254,
                        unique=True,
                    ),
                ),
                (
                    "metadata_xml",
                    models.TextField(
                        help_text="Raw IdP metadata XML to configure SingleSignOnService, certificates, etc."
                    ),
                ),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                (
                    "tenant",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
                    ),
                ),
            ],
            options={
                "db_table": "saml_configurations",
            },
        ),
        migrations.AddConstraint(
            model_name="samlconfiguration",
            constraint=api.rls.RowLevelSecurityConstraint(
                "tenant_id",
                name="rls_on_samlconfiguration",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ),
        migrations.AddConstraint(
            model_name="samlconfiguration",
            constraint=models.UniqueConstraint(
                fields=("tenant",), name="unique_samlconfig_per_tenant"
            ),
        ),
        migrations.AlterField(
            model_name="integration",
            name="integration_type",
            field=api.db_utils.IntegrationTypeEnumField(
                choices=[
                    ("amazon_s3", "Amazon S3"),
                    ("aws_security_hub", "AWS Security Hub"),
                    ("jira", "JIRA"),
                    ("slack", "Slack"),
                ]
            ),
        ),
    ]
@@ -0,0 +1,107 @@
# Generated by Django 5.1.10 on 2025-06-12 12:45

import uuid

import django.core.validators
import django.db.models.deletion
from django.db import migrations, models

import api.rls


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0030_samlconfigurations"),
    ]

    operations = [
        migrations.CreateModel(
            name="LighthouseConfiguration",
            fields=[
                (
                    "id",
                    models.UUIDField(
                        default=uuid.uuid4,
                        editable=False,
                        primary_key=True,
                        serialize=False,
                    ),
                ),
                ("inserted_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                (
                    "name",
                    models.CharField(
                        help_text="Name of the configuration",
                        max_length=100,
                        validators=[django.core.validators.MinLengthValidator(3)],
                    ),
                ),
                (
                    "api_key",
                    models.BinaryField(
                        help_text="Encrypted API key for the LLM service"
                    ),
                ),
                (
                    "model",
                    models.CharField(
                        choices=[
                            ("gpt-4o-2024-11-20", "GPT-4o v2024-11-20"),
                            ("gpt-4o-2024-08-06", "GPT-4o v2024-08-06"),
                            ("gpt-4o-2024-05-13", "GPT-4o v2024-05-13"),
                            ("gpt-4o", "GPT-4o Default"),
                            ("gpt-4o-mini-2024-07-18", "GPT-4o Mini v2024-07-18"),
                            ("gpt-4o-mini", "GPT-4o Mini Default"),
                        ],
                        default="gpt-4o-2024-08-06",
                        help_text="Must be one of the supported model names",
                        max_length=50,
                    ),
                ),
                (
                    "temperature",
                    models.FloatField(default=0, help_text="Must be between 0 and 1"),
                ),
                (
                    "max_tokens",
                    models.IntegerField(
                        default=4000, help_text="Must be between 500 and 5000"
                    ),
                ),
                (
                    "business_context",
                    models.TextField(
                        blank=True,
                        default="",
                        help_text="Additional business context for this AI model configuration",
                    ),
                ),
                ("is_active", models.BooleanField(default=True)),
                (
                    "tenant",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE, to="api.tenant"
                    ),
                ),
            ],
            options={
                "db_table": "lighthouse_configurations",
                "abstract": False,
                "constraints": [
                    models.UniqueConstraint(
                        fields=("tenant_id",),
                        name="unique_lighthouse_config_per_tenant",
                    ),
                ],
            },
        ),
        migrations.AddConstraint(
            model_name="lighthouseconfiguration",
            constraint=api.rls.RowLevelSecurityConstraint(
                "tenant_id",
                name="rls_on_lighthouseconfiguration",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ),
    ]
@@ -0,0 +1,24 @@
# Generated by Django 5.1.10 on 2025-06-23 10:04

import django.db.models.deletion
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("api", "0031_lighthouseconfiguration"),
        ("django_celery_beat", "0019_alter_periodictasks_options"),
    ]

    operations = [
        migrations.AlterField(
            model_name="scan",
            name="scheduler_task",
            field=models.ForeignKey(
                blank=True,
                null=True,
                on_delete=django.db.models.deletion.SET_NULL,
                to="django_celery_beat.periodictask",
            ),
        ),
    ]
@@ -1,13 +1,20 @@
import json
import logging
+import re
+import xml.etree.ElementTree as ET
from uuid import UUID, uuid4

-from cryptography.fernet import Fernet
+from allauth.socialaccount.models import SocialApp
+from config.custom_logging import BackendLogger
+from config.settings.social_login import SOCIALACCOUNT_PROVIDERS
+from cryptography.fernet import Fernet, InvalidToken
from django.conf import settings
from django.contrib.auth.models import AbstractBaseUser
from django.contrib.postgres.fields import ArrayField
from django.contrib.postgres.indexes import GinIndex
from django.contrib.postgres.search import SearchVector, SearchVectorField
+from django.contrib.sites.models import Site
from django.core.exceptions import ValidationError
+from django.core.validators import MinLengthValidator
from django.db import models
from django.db.models import Q
@@ -19,6 +26,7 @@ from psqlextra.models import PostgresPartitionedModel
from psqlextra.types import PostgresPartitioningMethod
from uuid6 import uuid7

+from api.db_router import MainRouter
from api.db_utils import (
    CustomUserManager,
    FindingDeltaEnumField,
@@ -49,6 +57,8 @@ fernet = Fernet(settings.SECRETS_ENCRYPTION_KEY.encode())
# Convert Prowler Severity enum to Django TextChoices
SeverityChoices = enum_to_choices(Severity)

+logger = logging.getLogger(BackendLogger.API)


class StatusChoices(models.TextChoices):
    """
@@ -427,7 +437,7 @@ class Scan(RowLevelSecurityProtectedModel):
    completed_at = models.DateTimeField(null=True, blank=True)
    next_scan_at = models.DateTimeField(null=True, blank=True)
    scheduler_task = models.ForeignKey(
-        PeriodicTask, on_delete=models.CASCADE, null=True, blank=True
+        PeriodicTask, on_delete=models.SET_NULL, null=True, blank=True
    )
    output_location = models.CharField(blank=True, null=True, max_length=200)

@@ -762,6 +772,10 @@ class Finding(PostgresPartitionedModel, RowLevelSecurityProtectedModel):
            GinIndex(fields=["resource_services"], name="gin_find_service_idx"),
            GinIndex(fields=["resource_regions"], name="gin_find_region_idx"),
            GinIndex(fields=["resource_types"], name="gin_find_rtype_idx"),
+            models.Index(
+                fields=["tenant_id", "scan_id", "check_id"],
+                name="find_tenant_scan_check_idx",
+            ),
        ]

    class JSONAPIMeta:
@@ -850,6 +864,7 @@ class ProviderSecret(RowLevelSecurityProtectedModel):
    class TypeChoices(models.TextChoices):
        STATIC = "static", _("Key-value pairs")
        ROLE = "role", _("Role assumption")
+        SERVICE_ACCOUNT = "service_account", _("GCP Service Account Key")

    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
@@ -1142,6 +1157,78 @@ class ComplianceOverview(RowLevelSecurityProtectedModel):
        resource_name = "compliance-overviews"


class ComplianceRequirementOverview(RowLevelSecurityProtectedModel):
    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
    compliance_id = models.TextField(blank=False)
    framework = models.TextField(blank=False)
    version = models.TextField(blank=True)
    description = models.TextField(blank=True)
    region = models.TextField(blank=False)

    requirement_id = models.TextField(blank=False)
    requirement_status = StatusEnumField(choices=StatusChoices)
    passed_checks = models.IntegerField(default=0)
    failed_checks = models.IntegerField(default=0)
    total_checks = models.IntegerField(default=0)

    scan = models.ForeignKey(
        Scan,
        on_delete=models.CASCADE,
        related_name="compliance_requirements_overviews",
        related_query_name="compliance_requirements_overview",
    )

    class Meta(RowLevelSecurityProtectedModel.Meta):
        db_table = "compliance_requirements_overviews"

        constraints = [
            models.UniqueConstraint(
                fields=(
                    "tenant_id",
                    "scan_id",
                    "compliance_id",
                    "requirement_id",
                    "region",
                ),
                name="unique_tenant_compliance_requirement_overview",
            ),
            RowLevelSecurityConstraint(
                field="tenant_id",
                name="rls_on_%(class)s",
                statements=["SELECT", "INSERT", "DELETE"],
            ),
        ]
        indexes = [
            models.Index(fields=["tenant_id", "scan_id"], name="cro_tenant_scan_idx"),
            models.Index(
                fields=["tenant_id", "scan_id", "compliance_id"],
                name="cro_scan_comp_idx",
            ),
            models.Index(
                fields=["tenant_id", "scan_id", "compliance_id", "region"],
                name="cro_scan_comp_reg_idx",
            ),
            models.Index(
                fields=["tenant_id", "scan_id", "compliance_id", "requirement_id"],
                name="cro_scan_comp_req_idx",
            ),
            models.Index(
                fields=[
                    "tenant_id",
                    "scan_id",
                    "compliance_id",
                    "requirement_id",
                    "region",
                ],
                name="cro_scan_comp_req_reg_idx",
            ),
        ]

    class JSONAPIMeta:
        resource_name = "compliance-requirements-overviews"


class ScanSummary(RowLevelSecurityProtectedModel):
    objects = ActiveProviderManager()
    all_objects = models.Manager()
@@ -1210,7 +1297,6 @@ class ScanSummary(RowLevelSecurityProtectedModel):
class Integration(RowLevelSecurityProtectedModel):
    class IntegrationChoices(models.TextChoices):
        S3 = "amazon_s3", _("Amazon S3")
-        SAML = "saml", _("SAML")
        AWS_SECURITY_HUB = "aws_security_hub", _("AWS Security Hub")
        JIRA = "jira", _("JIRA")
        SLACK = "slack", _("Slack")
@@ -1284,6 +1370,221 @@ class IntegrationProviderRelationship(RowLevelSecurityProtectedModel):
        ]


class SAMLDomainIndex(models.Model):
    """
    Public index of SAML domains. No RLS. Used for fast lookup in SAML login flow.
    """

    email_domain = models.CharField(max_length=254, unique=True)
    tenant = models.ForeignKey("Tenant", on_delete=models.CASCADE)

    class Meta:
        db_table = "saml_domain_index"

        constraints = [
            models.UniqueConstraint(
                fields=("email_domain", "tenant"),
                name="unique_resources_by_email_domain",
            ),
            BaseSecurityConstraint(
                name="statements_on_%(class)s",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ]


class SAMLConfiguration(RowLevelSecurityProtectedModel):
    """
    Stores per-tenant SAML settings, including email domain and IdP metadata.
    Automatically syncs to a SocialApp instance on save.

    Note:
        This model exists to provide a tenant-aware abstraction over SAML configuration.
        It supports row-level security, custom validation, and metadata parsing, enabling
        Prowler to expose a clean API and admin interface for managing SAML integrations.

        Although Django Allauth uses the SocialApp model to store provider configuration,
        it is not designed for multi-tenant use. SocialApp lacks support for tenant scoping,
        email domain mapping, and structured metadata handling.

        By managing SAMLConfiguration separately, we ensure:
        - Strong isolation between tenants via RLS.
        - Ownership of raw IdP metadata and its validation.
        - An explicit link between SAML config and business-level identifiers (e.g. email domain).
        - Programmatic transformation into the SocialApp format used by Allauth.

        In short, this model acts as a secure and user-friendly layer over Allauth's lower-level primitives.
    """

    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    email_domain = models.CharField(
        max_length=254,
        unique=True,
        help_text="Email domain used to identify the tenant, e.g. prowlerdemo.com",
    )
    metadata_xml = models.TextField(
        help_text="Raw IdP metadata XML to configure SingleSignOnService, certificates, etc."
    )
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    class JSONAPIMeta:
        resource_name = "saml-configurations"

    class Meta:
        db_table = "saml_configurations"

        constraints = [
            RowLevelSecurityConstraint(
                field="tenant_id",
                name="rls_on_%(class)s",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
            # 1 config per tenant
            models.UniqueConstraint(
                fields=["tenant"],
                name="unique_samlconfig_per_tenant",
            ),
        ]

    def clean(self, old_email_domain=None):
        # Domain must not contain @
        if "@" in self.email_domain:
            raise ValidationError({"email_domain": "Domain must not contain @"})

        # Enforce at most one config per tenant
        qs = SAMLConfiguration.objects.filter(tenant=self.tenant)
        # Exclude ourselves in case of update
        if self.pk:
            qs = qs.exclude(pk=self.pk)
        if qs.exists():
            raise ValidationError(
                {"tenant": "A SAML configuration already exists for this tenant."}
            )

        # The email domain must be unique in the entire system
        qs = SAMLConfiguration.objects.using(MainRouter.admin_db).filter(
            email_domain__iexact=self.email_domain
        )
        if qs.exists() and old_email_domain != self.email_domain:
            raise ValidationError(
                {"tenant": "There is a problem with your email domain."}
            )

    def save(self, *args, **kwargs):
        self.email_domain = self.email_domain.strip().lower()
        is_create = not SAMLConfiguration.objects.filter(pk=self.pk).exists()

        if not is_create:
            old = SAMLConfiguration.objects.get(pk=self.pk)
            old_email_domain = old.email_domain
            old_metadata_xml = old.metadata_xml
        else:
            old_email_domain = None
            old_metadata_xml = None

        self.clean(old_email_domain)
        super().save(*args, **kwargs)

        if is_create or (
            old_email_domain != self.email_domain
            or old_metadata_xml != self.metadata_xml
        ):
            self._sync_social_app(old_email_domain)

        # Sync the public index
        if not is_create and old_email_domain and old_email_domain != self.email_domain:
            SAMLDomainIndex.objects.filter(email_domain=old_email_domain).delete()

        # Create/update the new domain index
        SAMLDomainIndex.objects.update_or_create(
            email_domain=self.email_domain, defaults={"tenant": self.tenant}
        )

    def _parse_metadata(self):
        """
        Parse the raw IdP metadata XML and extract:
        - entity_id
        - sso_url
        - slo_url (may be None)
        - x509cert (required)
        """
        ns = {
            "md": "urn:oasis:names:tc:SAML:2.0:metadata",
            "ds": "http://www.w3.org/2000/09/xmldsig#",
        }
        try:
            root = ET.fromstring(self.metadata_xml)
        except ET.ParseError as e:
            raise ValidationError({"metadata_xml": f"Invalid XML: {e}"})

        # Entity ID
        entity_id = root.attrib.get("entityID")

        # SSO endpoint (must exist)
        sso = root.find(".//md:IDPSSODescriptor/md:SingleSignOnService", ns)
        if sso is None or "Location" not in sso.attrib:
            raise ValidationError(
                {"metadata_xml": "Missing SingleSignOnService in metadata."}
            )
        sso_url = sso.attrib["Location"]

        # SLO endpoint (optional)
        slo = root.find(".//md:IDPSSODescriptor/md:SingleLogoutService", ns)
        slo_url = slo.attrib.get("Location") if slo is not None else None

        # X.509 certificate (required)
        cert = root.find(
            './/md:KeyDescriptor[@use="signing"]/ds:KeyInfo/ds:X509Data/ds:X509Certificate',
            ns,
        )
        if cert is None or not cert.text or not cert.text.strip():
            raise ValidationError(
                {
                    "metadata_xml": 'Metadata must include a <ds:X509Certificate> under <KeyDescriptor use="signing">.'
                }
            )
        x509cert = cert.text.strip()

        return {
            "entity_id": entity_id,
            "sso_url": sso_url,
            "slo_url": slo_url,
            "x509cert": x509cert,
        }

    def _sync_social_app(self, previous_email_domain=None):
        """
        Create or update the corresponding SocialApp based on email_domain.
        If the domain changed, update the matching SocialApp.
        """
        idp_settings = self._parse_metadata()
        settings_dict = SOCIALACCOUNT_PROVIDERS["saml"].copy()
        settings_dict["idp"] = idp_settings

        current_site = Site.objects.get(id=settings.SITE_ID)

        social_app_qs = SocialApp.objects.filter(
            provider="saml", client_id=previous_email_domain or self.email_domain
        )

        if social_app_qs.exists():
            social_app = social_app_qs.first()
            social_app.client_id = self.email_domain
            social_app.name = f"{self.tenant.name} SAML ({self.email_domain})"
            social_app.settings = settings_dict
            social_app.save()
            social_app.sites.set([current_site])
        else:
            social_app = SocialApp.objects.create(
                provider="saml",
                client_id=self.email_domain,
                name=f"{self.tenant.name} SAML ({self.email_domain})",
                settings=settings_dict,
            )
            social_app.sites.set([current_site])
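
# Editor's sketch: the smallest IdP metadata document that _parse_metadata()
# above will accept. Entity id, endpoint, and certificate are placeholders.
SAMPLE_IDP_METADATA = """\
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
                     entityID="https://idp.example.com/metadata">
  <md:IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>MIIC...placeholder...</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://idp.example.com/sso"/>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>
"""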
|
||||
|
||||
|
||||
class ResourceScanSummary(RowLevelSecurityProtectedModel):
|
||||
scan_id = models.UUIDField(default=uuid7, db_index=True)
|
||||
resource_id = models.UUIDField(default=uuid4, db_index=True)
|
||||
@@ -1331,3 +1632,130 @@ class ResourceScanSummary(RowLevelSecurityProtectedModel):
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ]


class LighthouseConfiguration(RowLevelSecurityProtectedModel):
    """
    Stores configuration and API keys for LLM services.
    """

    class ModelChoices(models.TextChoices):
        GPT_4O_2024_11_20 = "gpt-4o-2024-11-20", _("GPT-4o v2024-11-20")
        GPT_4O_2024_08_06 = "gpt-4o-2024-08-06", _("GPT-4o v2024-08-06")
        GPT_4O_2024_05_13 = "gpt-4o-2024-05-13", _("GPT-4o v2024-05-13")
        GPT_4O = "gpt-4o", _("GPT-4o Default")
        GPT_4O_MINI_2024_07_18 = "gpt-4o-mini-2024-07-18", _("GPT-4o Mini v2024-07-18")
        GPT_4O_MINI = "gpt-4o-mini", _("GPT-4o Mini Default")

    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(auto_now=True, editable=False)

    name = models.CharField(
        max_length=100,
        validators=[MinLengthValidator(3)],
        blank=False,
        null=False,
        help_text="Name of the configuration",
    )
    api_key = models.BinaryField(
        blank=False, null=False, help_text="Encrypted API key for the LLM service"
    )
    model = models.CharField(
        max_length=50,
        choices=ModelChoices.choices,
        blank=False,
        null=False,
        default=ModelChoices.GPT_4O_2024_08_06,
        help_text="Must be one of the supported model names",
    )
    temperature = models.FloatField(default=0, help_text="Must be between 0 and 1")
    max_tokens = models.IntegerField(
        default=4000, help_text="Must be between 500 and 5000"
    )
    business_context = models.TextField(
        blank=True,
        null=False,
        default="",
        help_text="Additional business context for this AI model configuration",
    )
    is_active = models.BooleanField(default=True)

    def __str__(self):
        return self.name

    def clean(self):
        super().clean()

        # Validate temperature
        if not 0 <= self.temperature <= 1:
            raise ModelValidationError(
                detail="Temperature must be between 0 and 1",
                code="invalid_temperature",
                pointer="/data/attributes/temperature",
            )

        # Validate max_tokens
        if not 500 <= self.max_tokens <= 5000:
            raise ModelValidationError(
                detail="Max tokens must be between 500 and 5000",
                code="invalid_max_tokens",
                pointer="/data/attributes/max_tokens",
            )

    @property
    def api_key_decoded(self):
        """Return the decrypted API key, or None if unavailable or invalid."""
        if not self.api_key:
            return None

        try:
            decrypted_key = fernet.decrypt(bytes(self.api_key))
            return decrypted_key.decode()

        except InvalidToken:
            logger.warning("Invalid token while decrypting API key.")
        except Exception as e:
            logger.exception("Unexpected error while decrypting API key: %s", e)

    @api_key_decoded.setter
    def api_key_decoded(self, value):
        """Store the encrypted API key."""
        if not value:
            raise ModelValidationError(
                detail="API key is required",
                code="invalid_api_key",
                pointer="/data/attributes/api_key",
            )

        # Validate OpenAI API key format
        openai_key_pattern = r"^sk-[\w-]+T3BlbkFJ[\w-]+$"
        if not re.match(openai_key_pattern, value):
            raise ModelValidationError(
                detail="Invalid OpenAI API key format.",
                code="invalid_api_key",
                pointer="/data/attributes/api_key",
            )
        self.api_key = fernet.encrypt(value.encode())

    def save(self, *args, **kwargs):
        self.full_clean()
        super().save(*args, **kwargs)

    class Meta(RowLevelSecurityProtectedModel.Meta):
        db_table = "lighthouse_configurations"

        constraints = [
            RowLevelSecurityConstraint(
                field="tenant_id",
                name="rls_on_%(class)s",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
            # Unique constraint: only one Lighthouse configuration per tenant
            models.UniqueConstraint(
                fields=["tenant_id"], name="unique_lighthouse_config_per_tenant"
            ),
        ]

    class JSONAPIMeta:
        resource_name = "lighthouse-configurations"
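The `api_key` handling above leans on a module-level `fernet` object. A minimal sketch of that round-trip, assuming `fernet` is a `cryptography.fernet.Fernet` instance (the key source below is hypothetical; in the API it would come from server-side settings):

from cryptography.fernet import Fernet, InvalidToken

fernet = Fernet(Fernet.generate_key())  # assumption: real key comes from settings

token = fernet.encrypt(b"sk-example-T3BlbkFJ-example")  # bytes stored in the BinaryField
try:
    plaintext = fernet.decrypt(bytes(token)).decode()
except InvalidToken:
    plaintext = None  # mirrors the property's warn-and-return-None behavior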
@@ -1,4 +1,4 @@
-from rest_framework_json_api.pagination import JsonApiPageNumberPagination
+from drf_spectacular_jsonapi.schemas.pagination import JsonApiPageNumberPagination
 
 
 class ComplianceOverviewPagination(JsonApiPageNumberPagination):
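The class name stays the same; only the import source changes, presumably so the paginator's query parameters are described correctly in the generated OpenAPI schema. A hedged sketch of how such a pagination class is typically tuned (the attribute values are illustrative, not taken from this repo):

from drf_spectacular_jsonapi.schemas.pagination import JsonApiPageNumberPagination


class ExamplePagination(JsonApiPageNumberPagination):
    # Illustrative defaults, not Prowler's actual values
    page_size = 10
    max_page_size = 100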
+657 -292: File diff suppressed because it is too large
@@ -0,0 +1,90 @@
from unittest.mock import MagicMock

import pytest
from allauth.socialaccount.models import SocialLogin
from django.contrib.auth import get_user_model

from api.adapters import ProwlerSocialAccountAdapter
from api.db_router import MainRouter
from api.models import Membership, SAMLConfiguration, Tenant

User = get_user_model()


@pytest.mark.django_db
class TestProwlerSocialAccountAdapter:
    def test_get_user_by_email_returns_user(self, create_test_user):
        adapter = ProwlerSocialAccountAdapter()
        user = adapter.get_user_by_email(create_test_user.email)
        assert user == create_test_user

    def test_get_user_by_email_returns_none_for_unknown_email(self):
        adapter = ProwlerSocialAccountAdapter()
        assert adapter.get_user_by_email("notfound@example.com") is None

    def test_pre_social_login_links_existing_user(self, create_test_user, rf):
        adapter = ProwlerSocialAccountAdapter()

        sociallogin = MagicMock(spec=SocialLogin)
        sociallogin.account = MagicMock()
        sociallogin.account.provider = "saml"
        sociallogin.account.extra_data = {}
        sociallogin.user = create_test_user
        sociallogin.connect = MagicMock()

        adapter.pre_social_login(rf.get("/"), sociallogin)

        call_args = sociallogin.connect.call_args
        assert call_args is not None

        called_request, called_user = call_args[0]
        assert called_request.path == "/"
        assert called_user.email == create_test_user.email

    def test_pre_social_login_no_link_if_email_missing(self, rf):
        adapter = ProwlerSocialAccountAdapter()

        sociallogin = MagicMock(spec=SocialLogin)
        sociallogin.account = MagicMock()
        sociallogin.account.provider = "github"
        sociallogin.account.extra_data = {}
        sociallogin.connect = MagicMock()

        adapter.pre_social_login(rf.get("/"), sociallogin)

        sociallogin.connect.assert_not_called()

    def test_save_user_saml_flow(
        self,
        rf,
        saml_setup,
        saml_sociallogin,
    ):
        adapter = ProwlerSocialAccountAdapter()
        request = rf.get("/")
        saml_sociallogin.user.email = saml_setup["email"]
        saml_sociallogin.account.extra_data = {
            "firstName": [],
            "lastName": [],
            "organization": [],
            "userType": [],
        }

        tenant = Tenant.objects.using(MainRouter.admin_db).get(
            id=saml_setup["tenant_id"]
        )
        saml_config = SAMLConfiguration.objects.using(MainRouter.admin_db).get(
            tenant=tenant
        )
        assert saml_config.email_domain == saml_setup["domain"]

        user = adapter.save_user(request, saml_sociallogin)

        assert user.name == "N/A"
        assert user.company_name == ""
        assert user.email == saml_setup["email"]
        assert (
            Membership.objects.using(MainRouter.admin_db)
            .filter(user=user, tenant=tenant)
            .exists()
        )
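The adapter itself is not part of this diff; the tests above pin down its observable behavior. A minimal hypothetical sketch consistent with those assertions (the method bodies are assumptions, not the repository's implementation):

from allauth.socialaccount.adapter import DefaultSocialAccountAdapter
from django.contrib.auth import get_user_model


class SketchSocialAccountAdapter(DefaultSocialAccountAdapter):
    def get_user_by_email(self, email):
        # Return the existing user with this email, or None if unknown
        return get_user_model().objects.filter(email=email).first()

    def pre_social_login(self, request, sociallogin):
        # Link the social login to an existing local user with the same email
        email = getattr(sociallogin.user, "email", None)
        if not email:
            return
        user = self.get_user_by_email(email)
        if user is not None:
            sociallogin.connect(request, user)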
@@ -1,12 +1,12 @@
-from unittest.mock import patch, MagicMock
+from unittest.mock import MagicMock, patch
 
 from api.compliance import (
+    generate_compliance_overview_template,
+    generate_scan_compliance,
     get_prowler_provider_checks,
     get_prowler_provider_compliance,
-    load_prowler_compliance,
     load_prowler_checks,
-    generate_scan_compliance,
-    generate_compliance_overview_template,
+    load_prowler_compliance,
 )
 from api.models import Provider
@@ -69,7 +69,7 @@ class TestCompliance:
 
         load_prowler_compliance()
 
-        from api.compliance import PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE, PROWLER_CHECKS
+        from api.compliance import PROWLER_CHECKS, PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE
 
         assert PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE == {
             "template_key": "template_value"
@@ -218,6 +218,10 @@ class TestCompliance:
            Description="Description of requirement 1",
            Attributes=[],
            Checks=["check1", "check2"],
            Tactics=["tactic1"],
            SubTechniques=["subtechnique1"],
            Platforms=["platform1"],
            TechniqueURL="https://example.com",
        )
        requirement2 = MagicMock(
            Id="requirement2",
@@ -225,6 +229,10 @@ class TestCompliance:
            Description="Description of requirement 2",
            Attributes=[],
            Checks=[],
            Tactics=[],
            SubTechniques=[],
            Platforms=[],
            TechniqueURL="",
        )
        compliance1 = MagicMock(
            Requirements=[requirement1, requirement2],
@@ -247,6 +255,10 @@ class TestCompliance:
            "requirement1": {
                "name": "Requirement 1",
                "description": "Description of requirement 1",
                "tactics": ["tactic1"],
                "subtechniques": ["subtechnique1"],
                "platforms": ["platform1"],
                "technique_url": "https://example.com",
                "attributes": [],
                "checks": {"check1": None, "check2": None},
                "checks_status": {
@@ -260,6 +272,10 @@ class TestCompliance:
            "requirement2": {
                "name": "Requirement 2",
                "description": "Description of requirement 2",
                "tactics": [],
                "subtechniques": [],
                "platforms": [],
                "technique_url": "",
                "attributes": [],
                "checks": {},
                "checks_status": {
@@ -268,7 +284,7 @@ class TestCompliance:
                     "manual": 0,
                     "total": 0,
                 },
-                "status": "PASS",
+                "status": "MANUAL",
             },
         },
         "requirements_status": {
@@ -3,9 +3,13 @@ from enum import Enum
from unittest.mock import patch

import pytest
from django.conf import settings
from freezegun import freeze_time

from api.db_utils import (
    _should_create_index_on_partition,
    batch_delete,
    create_objects_in_batches,
    enum_to_choices,
    generate_random_token,
    one_week_from_now,
@@ -138,3 +142,88 @@ class TestBatchDelete:
        )
        assert Provider.objects.all().count() == 0
        assert summary == {"api.Provider": create_test_providers}


class TestShouldCreateIndexOnPartition:
    @freeze_time("2025-05-15 00:00:00Z")
    @pytest.mark.parametrize(
        "partition_name, all_partitions, expected",
        [
            ("any_name", True, True),
            ("findings_default", True, True),
            ("findings_2022_jan", True, True),
            ("foo_bar", False, True),
            ("findings_2025_MAY", False, True),
            ("findings_2025_may", False, True),
            ("findings_2025_jun", False, True),
            ("findings_2025_apr", False, False),
            ("findings_2025_xyz", False, True),
        ],
    )
    def test_partition_inclusion_logic(self, partition_name, all_partitions, expected):
        assert (
            _should_create_index_on_partition(partition_name, all_partitions)
            is expected
        )

    @freeze_time("2025-05-15 00:00:00Z")
    def test_invalid_date_components(self):
        # Even if the regex matches but int conversion fails, we fall back to True
        # (e.g. year too big, month number parse error)
        bad_name = "findings_99999_jan"
        assert _should_create_index_on_partition(bad_name, False) is True

        bad_name2 = "findings_2025_abc"
        # "abc" is not in month_map, so we fall back to True
        assert _should_create_index_on_partition(bad_name2, False) is True


@pytest.mark.django_db
class TestCreateObjectsInBatches:
    @pytest.fixture
    def tenant(self, tenants_fixture):
        return tenants_fixture[0]

    def make_provider_instances(self, tenant, count):
        """
        Return a list of `count` unsaved Provider instances for the given tenant.
        """
        base_uid = 1000
        return [
            Provider(
                tenant=tenant,
                uid=str(base_uid + i),
                provider=Provider.ProviderChoices.AWS,
            )
            for i in range(count)
        ]

    def test_exact_multiple_of_batch(self, tenant):
        total = 6
        batch_size = 3
        objs = self.make_provider_instances(tenant, total)

        create_objects_in_batches(str(tenant.id), Provider, objs, batch_size=batch_size)

        qs = Provider.objects.filter(tenant=tenant)
        assert qs.count() == total

    def test_non_multiple_of_batch(self, tenant):
        total = 7
        batch_size = 3
        objs = self.make_provider_instances(tenant, total)

        create_objects_in_batches(str(tenant.id), Provider, objs, batch_size=batch_size)

        qs = Provider.objects.filter(tenant=tenant)
        assert qs.count() == total

    def test_batch_size_default(self, tenant):
        default_size = settings.DJANGO_DELETION_BATCH_SIZE
        total = default_size + 2
        objs = self.make_provider_instances(tenant, total)

        create_objects_in_batches(str(tenant.id), Provider, objs)

        qs = Provider.objects.filter(tenant=tenant)
        assert qs.count() == total
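The implementation of `_should_create_index_on_partition` is not shown in this diff, but the parametrized cases above pin down its contract: everything passes when all_partitions is True, names that do not parse fall back to True, and dated partitions are included only from the current month onward. A hypothetical re-implementation that satisfies every case (the regex and month map are assumptions):

import re
from datetime import datetime, timezone

MONTH_MAP = {
    "jan": 1, "feb": 2, "mar": 3, "apr": 4, "may": 5, "jun": 6,
    "jul": 7, "aug": 8, "sep": 9, "oct": 10, "nov": 11, "dec": 12,
}


def should_create_index_on_partition(partition_name, all_partitions):
    # When indexing all partitions, every name qualifies
    if all_partitions:
        return True
    match = re.match(r"^findings_(\d{4})_([a-z]+)$", partition_name.lower())
    if not match:
        return True  # non-dated partition names fall back to True
    year_str, month_str = match.groups()
    month = MONTH_MAP.get(month_str)
    if month is None:
        return True  # unknown month names fall back to True
    try:
        partition_start = datetime(int(year_str), month, 1, tzinfo=timezone.utc)
    except (ValueError, OverflowError):
        return True  # unparsable date components fall back to True
    now = datetime.now(timezone.utc)
    current_month = datetime(now.year, now.month, 1, tzinfo=timezone.utc)
    # Only current and future months get the index
    return partition_start >= current_month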
@@ -0,0 +1,379 @@
import json
from uuid import uuid4

import pytest
from django_celery_results.models import TaskResult
from rest_framework import status
from rest_framework.response import Response

from api.exceptions import (
    TaskFailedException,
    TaskInProgressException,
    TaskNotFoundException,
)
from api.models import Task, User
from api.rls import Tenant
from api.v1.mixins import PaginateByPkMixin, TaskManagementMixin


@pytest.mark.django_db
class TestPaginateByPkMixin:
    @pytest.fixture
    def tenant(self):
        return Tenant.objects.create(name="Test Tenant")

    @pytest.fixture
    def users(self, tenant):
        # Create 5 users with a proper email field
        users = []
        for i in range(5):
            user = User.objects.create(email=f"user{i}@example.com", name=f"User {i}")
            users.append(user)
        return users

    class DummyView(PaginateByPkMixin):
        def __init__(self, page):
            self._page = page

        def paginate_queryset(self, qs):
            return self._page

        def get_serializer(self, queryset, many):
            class S:
                def __init__(self, data):
                    # Serialize to a list of ids
                    self.data = [obj.id for obj in data] if many else queryset.id

            return S(queryset)

        def get_paginated_response(self, data):
            return Response({"results": data}, status=status.HTTP_200_OK)

    def test_no_pagination(self, users):
        base_qs = User.objects.all().order_by("id")
        view = self.DummyView(page=None)
        resp = view.paginate_by_pk(
            request=None, base_queryset=base_qs, manager=User.objects
        )
        # With no pagination, all ids should be returned in order
        expected = [u.id for u in base_qs]
        assert isinstance(resp, Response)
        assert resp.data == expected

    def test_with_pagination(self, users):
        base_qs = User.objects.all().order_by("id")
        # Simulate a page containing two of the ids
        page = [base_qs[1].id, base_qs[3].id]
        view = self.DummyView(page=page)
        resp = view.paginate_by_pk(
            request=None, base_queryset=base_qs, manager=User.objects
        )
        # Only those two users should be fetched, in the same order as the page
        assert resp.status_code == status.HTTP_200_OK
        assert resp.data == {"results": page}


@pytest.mark.django_db
class TestTaskManagementMixin:
    class DummyView(TaskManagementMixin):
        pass

    @pytest.fixture
    def tenant(self):
        return Tenant.objects.create(name="Test Tenant")

    @pytest.fixture(autouse=True)
    def cleanup(self):
        Task.objects.all().delete()
        TaskResult.objects.all().delete()

    def test_no_task_and_no_taskresult_raises_not_found(self):
        view = self.DummyView()
        with pytest.raises(TaskNotFoundException):
            view.check_task_status("task_xyz", {"foo": "bar"})

    def test_no_task_and_no_taskresult_returns_none_when_not_raising(self):
        view = self.DummyView()
        result = view.check_task_status(
            "task_xyz", {"foo": "bar"}, raise_on_not_found=False
        )
        assert result is None

    def test_taskresult_pending_raises_in_progress(self):
        task_kwargs = {"foo": "bar"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="task_xyz",
            task_kwargs=json.dumps(task_kwargs),
            status="PENDING",
        )
        view = self.DummyView()
        with pytest.raises(TaskInProgressException) as excinfo:
            view.check_task_status("task_xyz", task_kwargs, raise_on_not_found=False)
        assert hasattr(excinfo.value, "task_result")
        assert excinfo.value.task_result == tr

    def test_taskresult_started_raises_in_progress(self):
        task_kwargs = {"foo": "bar"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="task_xyz",
            task_kwargs=json.dumps(task_kwargs),
            status="STARTED",
        )
        view = self.DummyView()
        with pytest.raises(TaskInProgressException) as excinfo:
            view.check_task_status("task_xyz", task_kwargs, raise_on_not_found=False)
        assert hasattr(excinfo.value, "task_result")
        assert excinfo.value.task_result == tr

    def test_taskresult_progress_raises_in_progress(self):
        task_kwargs = {"foo": "bar"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="task_xyz",
            task_kwargs=json.dumps(task_kwargs),
            status="PROGRESS",
        )
        view = self.DummyView()
        with pytest.raises(TaskInProgressException) as excinfo:
            view.check_task_status("task_xyz", task_kwargs, raise_on_not_found=False)
        assert hasattr(excinfo.value, "task_result")
        assert excinfo.value.task_result == tr

    def test_taskresult_failure_raises_failed(self):
        task_kwargs = {"a": 1}
        TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="task_fail",
            task_kwargs=json.dumps(task_kwargs),
            status="FAILURE",
        )
        view = self.DummyView()
        with pytest.raises(TaskFailedException):
            view.check_task_status("task_fail", task_kwargs, raise_on_not_found=False)

    def test_taskresult_failure_returns_none_when_not_raising(self):
        task_kwargs = {"a": 1}
        TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="task_fail",
            task_kwargs=json.dumps(task_kwargs),
            status="FAILURE",
        )
        view = self.DummyView()
        result = view.check_task_status(
            "task_fail", task_kwargs, raise_on_failed=False, raise_on_not_found=False
        )
        assert result is None

    def test_taskresult_success_returns_none(self):
        task_kwargs = {"x": 2}
        TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="task_ok",
            task_kwargs=json.dumps(task_kwargs),
            status="SUCCESS",
        )
        view = self.DummyView()
        # Should not raise, and returns None
        assert (
            view.check_task_status("task_ok", task_kwargs, raise_on_not_found=False)
            is None
        )

    def test_taskresult_revoked_returns_none(self):
        task_kwargs = {"x": 2}
        TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="task_revoked",
            task_kwargs=json.dumps(task_kwargs),
            status="REVOKED",
        )
        view = self.DummyView()
        # Should not raise, and returns None
        assert (
            view.check_task_status(
                "task_revoked", task_kwargs, raise_on_not_found=False
            )
            is None
        )

    def test_task_with_failed_status_raises_failed(self, tenant):
        task_kwargs = {"provider_id": "test"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="scan_task",
            task_kwargs=json.dumps(task_kwargs),
            status="FAILURE",
        )
        task = Task.objects.create(tenant=tenant, task_runner_task=tr)
        view = self.DummyView()
        with pytest.raises(TaskFailedException) as excinfo:
            view.check_task_status("scan_task", task_kwargs)
        # Check that the exception contains the expected task
        assert hasattr(excinfo.value, "task")
        assert excinfo.value.task == task

    def test_task_with_cancelled_status_raises_failed(self, tenant):
        task_kwargs = {"provider_id": "test"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="scan_task",
            task_kwargs=json.dumps(task_kwargs),
            status="REVOKED",
        )
        task = Task.objects.create(tenant=tenant, task_runner_task=tr)
        view = self.DummyView()
        with pytest.raises(TaskFailedException) as excinfo:
            view.check_task_status("scan_task", task_kwargs)
        # Check that the exception contains the expected task
        assert hasattr(excinfo.value, "task")
        assert excinfo.value.task == task

    def test_task_with_failed_status_returns_task_when_not_raising(self, tenant):
        task_kwargs = {"provider_id": "test"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="scan_task",
            task_kwargs=json.dumps(task_kwargs),
            status="FAILURE",
        )
        task = Task.objects.create(tenant=tenant, task_runner_task=tr)
        view = self.DummyView()
        result = view.check_task_status("scan_task", task_kwargs, raise_on_failed=False)
        assert result == task

    def test_task_with_completed_status_returns_none(self, tenant):
        task_kwargs = {"provider_id": "test"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="scan_task",
            task_kwargs=json.dumps(task_kwargs),
            status="SUCCESS",
        )
        Task.objects.create(tenant=tenant, task_runner_task=tr)
        view = self.DummyView()
        result = view.check_task_status("scan_task", task_kwargs)
        assert result is None

    def test_task_with_executing_status_returns_task(self, tenant):
        task_kwargs = {"provider_id": "test"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="scan_task",
            task_kwargs=json.dumps(task_kwargs),
            status="STARTED",
        )
        task = Task.objects.create(tenant=tenant, task_runner_task=tr)
        view = self.DummyView()
        result = view.check_task_status("scan_task", task_kwargs)
        assert result is not None
        assert result.pk == task.pk

    def test_task_with_pending_status_returns_task(self, tenant):
        task_kwargs = {"provider_id": "test"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="scan_task",
            task_kwargs=json.dumps(task_kwargs),
            status="PENDING",
        )
        task = Task.objects.create(tenant=tenant, task_runner_task=tr)
        view = self.DummyView()
        result = view.check_task_status("scan_task", task_kwargs)
        assert result is not None
        assert result.pk == task.pk

    def test_get_task_response_if_running_returns_none_for_completed_task(self, tenant):
        task_kwargs = {"provider_id": "test"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="scan_task",
            task_kwargs=json.dumps(task_kwargs),
            status="SUCCESS",
        )
        Task.objects.create(tenant=tenant, task_runner_task=tr)
        view = self.DummyView()
        result = view.get_task_response_if_running("scan_task", task_kwargs)
        assert result is None

    def test_get_task_response_if_running_returns_none_for_no_task(self):
        view = self.DummyView()
        result = view.get_task_response_if_running(
            "nonexistent", {"foo": "bar"}, raise_on_not_found=False
        )
        assert result is None

    def test_get_task_response_if_running_returns_202_for_executing_task(self, tenant):
        task_kwargs = {"provider_id": "test"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="scan_task",
            task_kwargs=json.dumps(task_kwargs),
            status="STARTED",
        )
        task = Task.objects.create(tenant=tenant, task_runner_task=tr)
        view = self.DummyView()
        result = view.get_task_response_if_running("scan_task", task_kwargs)

        assert isinstance(result, Response)
        assert result.status_code == status.HTTP_202_ACCEPTED
        assert "Content-Location" in result.headers
        # The response should contain the serialized task data
        assert result.data is not None
        assert "id" in result.data
        assert str(result.data["id"]) == str(task.id)

    def test_get_task_response_if_running_returns_none_for_available_task(self, tenant):
        task_kwargs = {"provider_id": "test"}
        tr = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="scan_task",
            task_kwargs=json.dumps(task_kwargs),
            status="PENDING",
        )
        Task.objects.create(tenant=tenant, task_runner_task=tr)
        view = self.DummyView()
        result = view.get_task_response_if_running("scan_task", task_kwargs)
        # PENDING maps to AVAILABLE, which is not EXECUTING, so should return None
        assert result is None

    def test_kwargs_filtering_works_correctly(self, tenant):
        # Create tasks with different kwargs
        task_kwargs_1 = {"provider_id": "test1", "scan_type": "full"}
        task_kwargs_2 = {"provider_id": "test2", "scan_type": "quick"}

        tr1 = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="scan_task",
            task_kwargs=json.dumps(task_kwargs_1),
            status="STARTED",
        )
        tr2 = TaskResult.objects.create(
            task_id=str(uuid4()),
            task_name="scan_task",
            task_kwargs=json.dumps(task_kwargs_2),
            status="STARTED",
        )

        task1 = Task.objects.create(tenant=tenant, task_runner_task=tr1)
        task2 = Task.objects.create(tenant=tenant, task_runner_task=tr2)

        view = self.DummyView()

        # Should find task1 when searching for its kwargs
        result1 = view.check_task_status("scan_task", {"provider_id": "test1"})
        assert result1 is not None
        assert result1.pk == task1.pk

        # Should find task2 when searching for its kwargs
        result2 = view.check_task_status("scan_task", {"provider_id": "test2"})
        assert result2 is not None
        assert result2.pk == task2.pk

        # Should not find anything when searching for non-existent kwargs
        result3 = view.check_task_status(
            "scan_task", {"provider_id": "test3"}, raise_on_not_found=False
        )
        assert result3 is None
@@ -1,6 +1,9 @@
 import pytest
+from allauth.socialaccount.models import SocialApp
+from django.core.exceptions import ValidationError
 
-from api.models import Resource, ResourceTag
+from api.db_router import MainRouter
+from api.models import Resource, ResourceTag, SAMLConfiguration, Tenant
 
 
 @pytest.mark.django_db
@@ -120,3 +123,149 @@ class TestResourceModel:
        #     compliance={},
        # )
        # assert Finding.objects.filter(uid=long_uid).exists()


@pytest.mark.django_db
class TestSAMLConfigurationModel:
    VALID_METADATA = """<?xml version='1.0' encoding='UTF-8'?>
    <md:EntityDescriptor entityID='TEST' xmlns:md='urn:oasis:names:tc:SAML:2.0:metadata'>
      <md:IDPSSODescriptor WantAuthnRequestsSigned='false' protocolSupportEnumeration='urn:oasis:names:tc:SAML:2.0:protocol'>
        <md:KeyDescriptor use='signing'>
          <ds:KeyInfo xmlns:ds='http://www.w3.org/2000/09/xmldsig#'>
            <ds:X509Data>
              <ds:X509Certificate>FAKECERTDATA</ds:X509Certificate>
            </ds:X509Data>
          </ds:KeyInfo>
        </md:KeyDescriptor>
        <md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST' Location='https://idp.test/sso'/>
      </md:IDPSSODescriptor>
    </md:EntityDescriptor>
    """

    def test_creates_valid_configuration(self):
        tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant A")
        config = SAMLConfiguration.objects.using(MainRouter.admin_db).create(
            email_domain="ssoexample.com",
            metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
            tenant=tenant,
        )

        assert config.email_domain == "ssoexample.com"
        assert SocialApp.objects.filter(client_id="ssoexample.com").exists()

    def test_email_domain_with_at_symbol_fails(self):
        tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant B")
        config = SAMLConfiguration(
            email_domain="invalid@domain.com",
            metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
            tenant=tenant,
        )

        with pytest.raises(ValidationError) as exc_info:
            config.clean()

        errors = exc_info.value.message_dict
        assert "email_domain" in errors
        assert "Domain must not contain @" in errors["email_domain"][0]

    def test_duplicate_email_domain_fails(self):
        tenant1 = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant C1")
        tenant2 = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant C2")

        SAMLConfiguration.objects.using(MainRouter.admin_db).create(
            email_domain="duplicate.com",
            metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
            tenant=tenant1,
        )

        config = SAMLConfiguration(
            email_domain="duplicate.com",
            metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
            tenant=tenant2,
        )

        with pytest.raises(ValidationError) as exc_info:
            config.clean()

        errors = exc_info.value.message_dict
        assert "tenant" in errors
        assert "There is a problem with your email domain." in errors["tenant"][0]

    def test_duplicate_tenant_config_fails(self):
        tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant D")

        SAMLConfiguration.objects.using(MainRouter.admin_db).create(
            email_domain="unique1.com",
            metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
            tenant=tenant,
        )

        config = SAMLConfiguration(
            email_domain="unique2.com",
            metadata_xml=TestSAMLConfigurationModel.VALID_METADATA,
            tenant=tenant,
        )

        with pytest.raises(ValidationError) as exc_info:
            config.clean()

        errors = exc_info.value.message_dict
        assert "tenant" in errors
        assert (
            "A SAML configuration already exists for this tenant."
            in errors["tenant"][0]
        )

    def test_invalid_metadata_xml_fails(self):
        tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant E")
        config = SAMLConfiguration(
            email_domain="brokenxml.com",
            metadata_xml="<bad<xml>",
            tenant=tenant,
        )

        with pytest.raises(ValidationError) as exc_info:
            config._parse_metadata()

        errors = exc_info.value.message_dict
        assert "metadata_xml" in errors
        assert "Invalid XML" in errors["metadata_xml"][0]
        assert "not well-formed" in errors["metadata_xml"][0]

    def test_metadata_missing_sso_fails(self):
        tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant F")
        xml = """<md:EntityDescriptor entityID="x" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
            <md:IDPSSODescriptor></md:IDPSSODescriptor>
        </md:EntityDescriptor>"""
        config = SAMLConfiguration(
            email_domain="nosso.com",
            metadata_xml=xml,
            tenant=tenant,
        )

        with pytest.raises(ValidationError) as exc_info:
            config._parse_metadata()

        errors = exc_info.value.message_dict
        assert "metadata_xml" in errors
        assert "Missing SingleSignOnService" in errors["metadata_xml"][0]

    def test_metadata_missing_certificate_fails(self):
        tenant = Tenant.objects.using(MainRouter.admin_db).create(name="Tenant G")
        xml = """<md:EntityDescriptor entityID="x" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
            <md:IDPSSODescriptor>
                <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://example.com/sso"/>
            </md:IDPSSODescriptor>
        </md:EntityDescriptor>"""
        config = SAMLConfiguration(
            email_domain="nocert.com",
            metadata_xml=xml,
            tenant=tenant,
        )

        with pytest.raises(ValidationError) as exc_info:
            config._parse_metadata()

        errors = exc_info.value.message_dict
        assert "metadata_xml" in errors
        assert "X509Certificate" in errors["metadata_xml"][0]
@@ -0,0 +1,80 @@
import logging
from unittest.mock import MagicMock

from config.settings.sentry import before_send


def test_before_send_ignores_log_with_ignored_exception():
    """Test that before_send ignores logs containing ignored exceptions."""
    log_record = MagicMock()
    log_record.msg = "Provider kubernetes is not connected"
    log_record.levelno = logging.ERROR  # 40

    hint = {"log_record": log_record}

    event = MagicMock()

    result = before_send(event, hint)

    # Assert that the event was dropped (None returned)
    assert result is None


def test_before_send_ignores_exception_with_ignored_exception():
    """Test that before_send ignores exceptions containing ignored exceptions."""
    exc_info = (Exception, Exception("Provider kubernetes is not connected"), None)

    hint = {"exc_info": exc_info}

    event = MagicMock()

    result = before_send(event, hint)

    # Assert that the event was dropped (None returned)
    assert result is None


def test_before_send_passes_through_non_ignored_log():
    """Test that before_send passes through logs that don't contain ignored exceptions."""
    log_record = MagicMock()
    log_record.msg = "Some other error message"
    log_record.levelno = logging.ERROR  # 40

    hint = {"log_record": log_record}

    event = MagicMock()

    result = before_send(event, hint)

    # Assert that the event was passed through
    assert result == event


def test_before_send_passes_through_non_ignored_exception():
    """Test that before_send passes through exceptions that don't contain ignored exceptions."""
    exc_info = (Exception, Exception("Some other error message"), None)

    hint = {"exc_info": exc_info}

    event = MagicMock()

    result = before_send(event, hint)

    # Assert that the event was passed through
    assert result == event


def test_before_send_handles_warning_level():
    """Test that before_send handles warning-level logs."""
    log_record = MagicMock()
    log_record.msg = "Provider kubernetes is not connected"
    log_record.levelno = logging.WARNING  # 30

    hint = {"log_record": log_record}

    event = MagicMock()

    result = before_send(event, hint)

    # Assert that the event was dropped (None returned)
    assert result is None
File diff suppressed because it is too large
@@ -1,5 +1,16 @@
from django.urls import reverse
from django_celery_results.models import TaskResult
from rest_framework import status
from rest_framework.response import Response

from api.exceptions import (
    TaskFailedException,
    TaskInProgressException,
    TaskNotFoundException,
)
from api.models import StateChoices, Task
from api.v1.serializers import TaskSerializer


class PaginateByPkMixin:
    """
@@ -31,3 +42,181 @@ class PaginateByPkMixin:

        serialized = self.get_serializer(queryset, many=True).data
        return self.get_paginated_response(serialized)


class TaskManagementMixin:
    """
    Mixin to manage task status checking.

    This mixin provides functionality to check if a task with specific parameters
    is running, completed, failed, or doesn't exist. It returns the task when running
    and raises specific exceptions for failed/not found scenarios that can be handled
    at the view level.
    """

    def check_task_status(
        self,
        task_name: str,
        task_kwargs: dict,
        raise_on_failed: bool = True,
        raise_on_not_found: bool = True,
    ) -> Task | None:
        """
        Check the status of a task with given name and kwargs.

        This method first checks for a related Task object, and if not found,
        checks TaskResult directly. If a TaskResult is found and running but
        there's no related Task, it raises TaskInProgressException.

        Args:
            task_name (str): The name of the task to check
            task_kwargs (dict): The kwargs to match against the task
            raise_on_failed (bool): Whether to raise an exception if the task failed
            raise_on_not_found (bool): Whether to raise an exception if the task is not found

        Returns:
            Task | None: The task instance if found (regardless of state), None if not found and raise_on_not_found=False

        Raises:
            TaskFailedException: If the task failed and raise_on_failed=True
            TaskNotFoundException: If the task was not found and raise_on_not_found=True
            TaskInProgressException: If the task is running but no related Task object exists
        """
        # First, try to find a Task object with a related TaskResult
        try:
            # Build the filter for task kwargs
            task_filter = {
                "task_runner_task__task_name": task_name,
            }

            # Add kwargs filters - we need to check if the task kwargs contain our parameters
            for key, value in task_kwargs.items():
                task_filter["task_runner_task__task_kwargs__contains"] = str(value)

            task = (
                Task.objects.filter(**task_filter)
                .select_related("task_runner_task")
                .order_by("-inserted_at")
                .first()
            )

            if task:
                # Get the task state using the same logic as TaskSerializer
                task_state_mapping = {
                    "PENDING": StateChoices.AVAILABLE,
                    "STARTED": StateChoices.EXECUTING,
                    "PROGRESS": StateChoices.EXECUTING,
                    "SUCCESS": StateChoices.COMPLETED,
                    "FAILURE": StateChoices.FAILED,
                    "REVOKED": StateChoices.CANCELLED,
                }

                celery_status = (
                    task.task_runner_task.status if task.task_runner_task else None
                )
                task_state = task_state_mapping.get(
                    celery_status or "", StateChoices.AVAILABLE
                )

                # Check the task state and raise exceptions accordingly
                if task_state in (StateChoices.FAILED, StateChoices.CANCELLED):
                    if raise_on_failed:
                        raise TaskFailedException(task=task)
                    return task
                elif task_state == StateChoices.COMPLETED:
                    return None

                return task

        except Task.DoesNotExist:
            pass

        # If no Task found, check TaskResult directly
        try:
            # Build the filter for TaskResult
            task_result_filter = {
                "task_name": task_name,
            }

            # Add kwargs filters - check if the task kwargs contain our parameters
            for key, value in task_kwargs.items():
                task_result_filter["task_kwargs__contains"] = str(value)

            task_result = (
                TaskResult.objects.filter(**task_result_filter)
                .order_by("-date_created")
                .first()
            )

            if task_result:
                # Check if the TaskResult indicates a running task
                if task_result.status in ["PENDING", "STARTED", "PROGRESS"]:
                    # Task is running but no related Task object exists
                    raise TaskInProgressException(task_result=task_result)
                elif task_result.status == "FAILURE":
                    if raise_on_failed:
                        raise TaskFailedException(task=None)
                # For other statuses (SUCCESS, REVOKED), we don't have a Task to return,
                # so we treat it as not found

        except TaskResult.DoesNotExist:
            pass

        # No task found at all
        if raise_on_not_found:
            raise TaskNotFoundException()
        return None

    def get_task_response_if_running(
        self,
        task_name: str,
        task_kwargs: dict,
        raise_on_failed: bool = True,
        raise_on_not_found: bool = True,
    ) -> Response | None:
        """
        Get a 202 response with task details if the task is currently running.

        This method is useful for endpoints that should return task status when
        a background task is in progress, similar to the compliance overview endpoints.

        Args:
            task_name (str): The name of the task to check
            task_kwargs (dict): The kwargs to match against the task

        Returns:
            Response | None: 202 response with task details if running, None otherwise
        """
        task = self.check_task_status(
            task_name=task_name,
            task_kwargs=task_kwargs,
            raise_on_failed=raise_on_failed,
            raise_on_not_found=raise_on_not_found,
        )

        if not task:
            return None

        # Get the task state
        task_state_mapping = {
            "PENDING": StateChoices.AVAILABLE,
            "STARTED": StateChoices.EXECUTING,
            "PROGRESS": StateChoices.EXECUTING,
            "SUCCESS": StateChoices.COMPLETED,
            "FAILURE": StateChoices.FAILED,
            "REVOKED": StateChoices.CANCELLED,
        }

        celery_status = task.task_runner_task.status if task.task_runner_task else None
        task_state = task_state_mapping.get(celery_status or "", StateChoices.AVAILABLE)

        if task_state == StateChoices.EXECUTING:
            self.response_serializer_class = TaskSerializer
            serializer = TaskSerializer(task)
            return Response(
                data=serializer.data,
                status=status.HTTP_202_ACCEPTED,
                headers={
                    "Content-Location": reverse("task-detail", kwargs={"pk": task.id})
                },
            )
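A sketch of how a view might use the mixin before kicking off new work; the viewset, task name, kwargs, and `request.tenant_id` attribute here are hypothetical, not taken from this diff:

from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView


class ExampleReportView(TaskManagementMixin, APIView):  # hypothetical view
    def post(self, request, *args, **kwargs):
        # Return 202 with the in-flight task if a matching one is already running
        running = self.get_task_response_if_running(
            task_name="generate_report_task",  # hypothetical task name
            task_kwargs={"tenant_id": str(request.tenant_id)},  # hypothetical kwargs
            raise_on_not_found=False,
        )
        if running is not None:
            return running
        # Otherwise enqueue a new Celery task here and return its representation
        return Response(status=status.HTTP_202_ACCEPTED)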
@@ -154,6 +154,17 @@ from rest_framework_json_api import serializers
            },
            "required": ["client_id", "client_secret", "refresh_token"],
        },
        {
            "type": "object",
            "title": "GCP Service Account Key",
            "properties": {
                "service_account_key": {
                    "type": "object",
                    "description": "The service account key for GCP.",
                }
            },
            "required": ["service_account_key"],
        },
        {
            "type": "object",
            "title": "Kubernetes Static Credentials",
@@ -14,12 +14,12 @@ from rest_framework_simplejwt.serializers import TokenObtainPairSerializer
from rest_framework_simplejwt.tokens import RefreshToken

from api.models import (
    ComplianceOverview,
    Finding,
    Integration,
    IntegrationProviderRelationship,
    Invitation,
    InvitationRoleRelationship,
    LighthouseConfiguration,
    Membership,
    Provider,
    ProviderGroup,
@@ -29,8 +29,10 @@ from api.models import (
    ResourceTag,
    Role,
    RoleProviderGroupRelationship,
    SAMLConfiguration,
    Scan,
    StateChoices,
    StatusChoices,
    Task,
    User,
    UserRoleRelationship,
@@ -1159,6 +1161,8 @@ class BaseWriteProviderSecretSerializer(BaseWriteSerializer):
            )
        elif secret_type == ProviderSecret.TypeChoices.ROLE:
            serializer = AWSRoleAssumptionProviderSecret(data=secret)
        elif secret_type == ProviderSecret.TypeChoices.SERVICE_ACCOUNT:
            serializer = GCPServiceAccountProviderSecret(data=secret)
        else:
            raise serializers.ValidationError(
                {"secret_type": f"Secret type not supported: {secret_type}"}
@@ -1212,6 +1216,13 @@ class GCPProviderSecret(serializers.Serializer):
        resource_name = "provider-secrets"


class GCPServiceAccountProviderSecret(serializers.Serializer):
    service_account_key = serializers.JSONField()

    class Meta:
        resource_name = "provider-secrets"


class KubernetesProviderSecret(serializers.Serializer):
    kubeconfig_content = serializers.CharField()
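For reference, `service_account_key` would carry the JSON key object Google issues for a service account. A hedged sketch of validating such a payload; the key-file fields are the standard GCP ones with placeholder values, an assumption about shape rather than something stated in this diff:

# Hypothetical payload for GCPServiceAccountProviderSecret
payload = {
    "service_account_key": {
        "type": "service_account",
        "project_id": "example-project",
        "private_key_id": "abc123",
        "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
        "client_email": "prowler@example-project.iam.gserviceaccount.com",
        "token_uri": "https://oauth2.googleapis.com/token",
    }
}
serializer = GCPServiceAccountProviderSecret(data=payload)
assert serializer.is_valid()  # JSONField accepts the dict as-is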
@@ -1670,130 +1681,63 @@ class RoleProviderGroupRelationshipSerializer(RLSSerializer, BaseWriteSerializer
 # Compliance overview
 
 
-class ComplianceOverviewSerializer(RLSSerializer):
+class ComplianceOverviewSerializer(serializers.Serializer):
     """
-    Serializer for the ComplianceOverview model.
+    Serializer for compliance requirement status aggregated by compliance framework.
+
+    This serializer is used to format aggregated compliance framework data,
+    providing counts of passed, failed, and manual requirements along with
+    an overall global status for each framework.
     """
 
-    requirements_status = serializers.SerializerMethodField(
-        read_only=True, method_name="get_requirements_status"
-    )
-    provider_type = serializers.SerializerMethodField(read_only=True)
+    # Add ID field which will be used for resource identification
+    id = serializers.CharField()
+    framework = serializers.CharField()
+    version = serializers.CharField()
+    requirements_passed = serializers.IntegerField()
+    requirements_failed = serializers.IntegerField()
+    requirements_manual = serializers.IntegerField()
+    total_requirements = serializers.IntegerField()
 
-    class Meta:
-        model = ComplianceOverview
-        fields = [
-            "id",
-            "inserted_at",
-            "compliance_id",
-            "framework",
-            "version",
-            "requirements_status",
-            "region",
-            "provider_type",
-            "scan",
-            "url",
-        ]
-
-    @extend_schema_field(
-        {
-            "type": "object",
-            "properties": {
-                "passed": {"type": "integer"},
-                "failed": {"type": "integer"},
-                "manual": {"type": "integer"},
-                "total": {"type": "integer"},
-            },
-        }
-    )
-    def get_requirements_status(self, obj):
-        return {
-            "passed": obj.requirements_passed,
-            "failed": obj.requirements_failed,
-            "manual": obj.requirements_manual,
-            "total": obj.total_requirements,
-        }
-
-    @extend_schema_field(serializers.CharField(allow_null=True))
-    def get_provider_type(self, obj):
-        """
-        Retrieves the provider_type from scan.provider.provider_type.
-        """
-        try:
-            return obj.scan.provider.provider
-        except AttributeError:
-            return None
+    class JSONAPIMeta:
+        resource_name = "compliance-overviews"
 
 
-class ComplianceOverviewFullSerializer(ComplianceOverviewSerializer):
-    requirements = serializers.SerializerMethodField(read_only=True)
+class ComplianceOverviewDetailSerializer(serializers.Serializer):
+    """
+    Serializer for detailed compliance requirement information.
 
-    class Meta(ComplianceOverviewSerializer.Meta):
-        fields = ComplianceOverviewSerializer.Meta.fields + [
-            "description",
-            "requirements",
-        ]
+    This serializer formats the aggregated requirement data, showing detailed status
+    and counts for each requirement across all regions.
+    """
 
-    @extend_schema_field(
-        {
-            "type": "object",
-            "properties": {
-                "requirement_id": {
-                    "type": "object",
-                    "properties": {
-                        "name": {"type": "string"},
-                        "checks": {
-                            "type": "object",
-                            "properties": {
-                                "check_name": {
-                                    "type": "object",
-                                    "properties": {
-                                        "status": {
-                                            "type": "string",
-                                            "enum": ["PASS", "FAIL", None],
-                                        },
-                                    },
-                                }
-                            },
-                            "description": "Each key in the 'checks' object is a check name, with values as "
-                            "'PASS', 'FAIL', or null.",
-                        },
-                        "status": {
-                            "type": "string",
-                            "enum": ["PASS", "FAIL", "MANUAL"],
-                        },
-                        "attributes": {
-                            "type": "array",
-                            "items": {
-                                "type": "object",
-                            },
-                        },
-                        "description": {"type": "string"},
-                        "checks_status": {
-                            "type": "object",
-                            "properties": {
-                                "total": {"type": "integer"},
-                                "pass": {"type": "integer"},
-                                "fail": {"type": "integer"},
-                                "manual": {"type": "integer"},
-                            },
-                        },
-                    },
-                }
-            },
-        }
-    )
-    def get_requirements(self, obj):
-        """
-        Returns the detailed structure of requirements.
-        """
-        return obj.requirements
+    id = serializers.CharField()
+    framework = serializers.CharField()
+    version = serializers.CharField()
+    description = serializers.CharField()
+    status = serializers.ChoiceField(choices=StatusChoices.choices)
+
+    class JSONAPIMeta:
+        resource_name = "compliance-requirements-details"
+
+
+class ComplianceOverviewAttributesSerializer(serializers.Serializer):
+    id = serializers.CharField()
+    framework_description = serializers.CharField()
+    name = serializers.CharField()
+    framework = serializers.CharField()
+    version = serializers.CharField()
+    description = serializers.CharField()
+    attributes = serializers.JSONField()
+
+    class JSONAPIMeta:
+        resource_name = "compliance-requirements-attributes"
 
 
 class ComplianceOverviewMetadataSerializer(serializers.Serializer):
     regions = serializers.ListField(child=serializers.CharField(), allow_empty=True)
 
-    class Meta:
+    class JSONAPIMeta:
         resource_name = "compliance-overviews-metadata"
@@ -2119,3 +2063,156 @@ class IntegrationUpdateSerializer(BaseWriteIntegrationSerializer):
            IntegrationProviderRelationship.objects.bulk_create(new_relationships)

        return super().update(instance, validated_data)


# SSO


class SamlInitiateSerializer(serializers.Serializer):
    email_domain = serializers.CharField()

    class JSONAPIMeta:
        resource_name = "saml-initiate"


class SamlMetadataSerializer(serializers.Serializer):
    class JSONAPIMeta:
        resource_name = "saml-meta"


class SAMLConfigurationSerializer(RLSSerializer):
    class Meta:
        model = SAMLConfiguration
        fields = ["id", "email_domain", "metadata_xml", "created_at", "updated_at"]
        read_only_fields = ["id", "created_at", "updated_at"]


class LighthouseConfigSerializer(RLSSerializer):
    """
    Serializer for the LighthouseConfig model.
    """

    api_key = serializers.CharField(required=False)

    class Meta:
        model = LighthouseConfiguration
        fields = [
            "id",
            "name",
            "api_key",
            "model",
            "temperature",
            "max_tokens",
            "business_context",
            "is_active",
            "inserted_at",
            "updated_at",
            "url",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
            "is_active": {"read_only": True},
            "inserted_at": {"read_only": True},
            "updated_at": {"read_only": True},
        }

    def to_representation(self, instance):
        data = super().to_representation(instance)
        # Check if api_key is specifically requested in the fields param
        fields_param = self.context.get("request", None) and self.context[
            "request"
        ].query_params.get("fields[lighthouse-config]", "")
        if fields_param == "api_key":
            # Return the decrypted key if specifically requested
            data["api_key"] = instance.api_key_decoded if instance.api_key else None
        else:
            # Return a masked key for general requests
            data["api_key"] = "*" * len(instance.api_key) if instance.api_key else None
        return data


class LighthouseConfigCreateSerializer(RLSSerializer, BaseWriteSerializer):
    """Serializer for creating new Lighthouse configurations."""

    api_key = serializers.CharField(write_only=True, required=True)

    class Meta:
        model = LighthouseConfiguration
        fields = [
            "id",
            "name",
            "api_key",
            "model",
            "temperature",
            "max_tokens",
            "business_context",
            "is_active",
            "inserted_at",
            "updated_at",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
            "is_active": {"read_only": True},
            "inserted_at": {"read_only": True},
            "updated_at": {"read_only": True},
        }

    def validate(self, attrs):
        tenant_id = self.context.get("request").tenant_id
        if LighthouseConfiguration.objects.filter(tenant_id=tenant_id).exists():
            raise serializers.ValidationError(
                {
                    "tenant_id": "Lighthouse configuration already exists for this tenant."
                }
            )
        return super().validate(attrs)

    def create(self, validated_data):
        api_key = validated_data.pop("api_key")
        instance = super().create(validated_data)
        instance.api_key_decoded = api_key
        instance.save()
        return instance

    def to_representation(self, instance):
        data = super().to_representation(instance)
        # Always mask the API key in the response
        data["api_key"] = "*" * len(instance.api_key) if instance.api_key else None
        return data


class LighthouseConfigUpdateSerializer(BaseWriteSerializer):
    """
    Serializer for updating LighthouseConfig instances.
    """

    api_key = serializers.CharField(write_only=True, required=False)

    class Meta:
        model = LighthouseConfiguration
        fields = [
            "id",
            "name",
            "api_key",
            "model",
            "temperature",
            "max_tokens",
            "business_context",
            "is_active",
        ]
        extra_kwargs = {
            "id": {"read_only": True},
            "is_active": {"read_only": True},
            "name": {"required": False},
            "model": {"required": False},
            "temperature": {"required": False},
            "max_tokens": {"required": False},
        }

    def update(self, instance, validated_data):
        api_key = validated_data.pop("api_key", None)
        instance = super().update(instance, validated_data)
        if api_key:
            instance.api_key_decoded = api_key
            instance.save()
        return instance
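Given the `to_representation` logic above, the decrypted key is returned only when the JSON:API sparse fieldset selects exactly `api_key`; otherwise the key is masked. A hedged request sketch, where the host, URL prefix, and auth header are assumptions:

import requests

# Masked key: a plain list request
resp = requests.get(
    "https://api.example.com/api/v1/lighthouse-configurations",
    headers={"Authorization": "Bearer <token>"},
)

# Decrypted key: select only api_key via a JSON:API sparse fieldset
resp = requests.get(
    "https://api.example.com/api/v1/lighthouse-configurations",
    params={"fields[lighthouse-config]": "api_key"},
    headers={"Authorization": "Bearer <token>"},
)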
@@ -13,6 +13,7 @@ from api.v1.views import (
    IntegrationViewSet,
    InvitationAcceptViewSet,
    InvitationViewSet,
    LighthouseConfigViewSet,
    MembershipViewSet,
    OverviewViewSet,
    ProviderGroupProvidersRelationshipView,
@@ -22,10 +23,13 @@ from api.v1.views import (
    ResourceViewSet,
    RoleProviderGroupRelationshipView,
    RoleViewSet,
    SAMLConfigurationViewSet,
    SAMLInitiateAPIView,
    ScanViewSet,
    ScheduleViewSet,
    SchemaView,
    TaskViewSet,
    TenantFinishACSView,
    TenantMembersViewSet,
    TenantViewSet,
    UserRoleRelationshipView,
@@ -49,6 +53,12 @@ router.register(
router.register(r"overviews", OverviewViewSet, basename="overview")
router.register(r"schedules", ScheduleViewSet, basename="schedule")
router.register(r"integrations", IntegrationViewSet, basename="integration")
router.register(r"saml-config", SAMLConfigurationViewSet, basename="saml-config")
router.register(
    r"lighthouse-configurations",
    LighthouseConfigViewSet,
    basename="lighthouseconfiguration",
)

tenants_router = routers.NestedSimpleRouter(router, r"tenants", lookup="tenant")
tenants_router.register(
@@ -112,6 +122,17 @@ urlpatterns = [
        ),
        name="provider_group-providers-relationship",
    ),
    # API endpoint to start the SAML SSO flow
    path(
        "auth/saml/initiate/", SAMLInitiateAPIView.as_view(), name="api_saml_initiate"
    ),
    # Allauth SAML endpoints for tenants
    path("accounts/", include("allauth.urls")),
    path(
        "api/v1/accounts/saml/<organization_slug>/acs/finish/",
        TenantFinishACSView.as_view(),
        name="saml_finish_acs",
    ),
    path("tokens/google", GoogleSocialLoginView.as_view(), name="token-google"),
    path("tokens/github", GithubSocialLoginView.as_view(), name="token-github"),
    path("", include(router.urls)),
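A hedged sketch of kicking off SSO against the new initiate endpoint; the payload attribute follows SamlInitiateSerializer, while the host, URL prefix, and JSON:API envelope are assumptions:

import requests

resp = requests.post(
    "https://api.example.com/api/v1/auth/saml/initiate/",
    json={
        "data": {
            "type": "saml-initiate",
            "attributes": {"email_domain": "ssoexample.com"},  # tenant's SSO domain
        }
    },
)
# On success the API would direct the client to the tenant's IdP SSO URL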
+675 -68
@@ -3,8 +3,10 @@ import os
from datetime import datetime, timedelta, timezone

import sentry_sdk
from allauth.socialaccount.models import SocialAccount, SocialApp
from allauth.socialaccount.providers.github.views import GitHubOAuth2Adapter
from allauth.socialaccount.providers.google.views import GoogleOAuth2Adapter
from allauth.socialaccount.providers.saml.views import FinishACSView
from botocore.exceptions import ClientError, NoCredentialsError, ParamValidationError
from celery.result import AsyncResult
from config.env import env
@@ -17,19 +19,20 @@ from django.conf import settings as django_settings
from django.contrib.postgres.aggregates import ArrayAgg
from django.contrib.postgres.search import SearchQuery
from django.db import transaction
from django.db.models import Count, Exists, F, OuterRef, Prefetch, Q, Subquery, Sum
from django.db.models import Count, Exists, F, OuterRef, Prefetch, Q, Sum
from django.db.models.functions import Coalesce
from django.http import HttpResponse
from django.http import HttpResponse, JsonResponse
from django.shortcuts import redirect
from django.urls import reverse
from django.utils.dateparse import parse_date
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_control
from django_celery_beat.models import PeriodicTask
from drf_spectacular.settings import spectacular_settings
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import (
    OpenApiParameter,
    OpenApiResponse,
    OpenApiTypes,
    extend_schema,
    extend_schema_view,
)
@@ -51,6 +54,7 @@ from tasks.beat import schedule_provider_scan
from tasks.jobs.export import get_s3_client
from tasks.tasks import (
    backfill_scan_resource_summaries_task,
    check_lighthouse_connection_task,
    check_provider_connection_task,
    delete_provider_task,
    delete_tenant_task,
@@ -58,8 +62,13 @@ from tasks.tasks import (
)

from api.base_views import BaseRLSViewSet, BaseTenantViewset, BaseUserViewset
from api.compliance import get_compliance_frameworks
from api.compliance import (
    PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE,
    get_compliance_frameworks,
)
from api.db_router import MainRouter
from api.db_utils import rls_transaction
from api.exceptions import TaskFailedException
from api.filters import (
    ComplianceOverviewFilter,
    FindingFilter,
@@ -81,9 +90,11 @@ from api.filters import (
)
from api.models import (
    ComplianceOverview,
    ComplianceRequirementOverview,
    Finding,
    Integration,
    Invitation,
    LighthouseConfiguration,
    Membership,
    Provider,
    ProviderGroup,
@@ -94,6 +105,8 @@ from api.models import (
    ResourceScanSummary,
    Role,
    RoleProviderGroupRelationship,
    SAMLConfiguration,
    SAMLDomainIndex,
    Scan,
    ScanSummary,
    SeverityChoices,
@@ -111,9 +124,10 @@ from api.utils import (
    validate_invitation,
)
from api.uuid_utils import datetime_to_uuid7, uuid7_start
from api.v1.mixins import PaginateByPkMixin
from api.v1.mixins import PaginateByPkMixin, TaskManagementMixin
from api.v1.serializers import (
    ComplianceOverviewFullSerializer,
    ComplianceOverviewAttributesSerializer,
    ComplianceOverviewDetailSerializer,
    ComplianceOverviewMetadataSerializer,
    ComplianceOverviewSerializer,
    FindingDynamicFilterSerializer,
@@ -126,6 +140,9 @@ from api.v1.serializers import (
    InvitationCreateSerializer,
    InvitationSerializer,
    InvitationUpdateSerializer,
    LighthouseConfigCreateSerializer,
    LighthouseConfigSerializer,
    LighthouseConfigUpdateSerializer,
    MembershipSerializer,
    OverviewFindingSerializer,
    OverviewProviderSerializer,
@@ -146,6 +163,8 @@ from api.v1.serializers import (
    RoleProviderGroupRelationshipSerializer,
    RoleSerializer,
    RoleUpdateSerializer,
    SAMLConfigurationSerializer,
    SamlInitiateSerializer,
    ScanComplianceReportSerializer,
    ScanCreateSerializer,
    ScanReportSerializer,
@@ -260,7 +279,7 @@ class SchemaView(SpectacularAPIView):

    def get(self, request, *args, **kwargs):
        spectacular_settings.TITLE = "Prowler API"
        spectacular_settings.VERSION = "1.8.3"
        spectacular_settings.VERSION = "1.9.0"
        spectacular_settings.DESCRIPTION = (
            "Prowler API specification.\n\nThis file is auto-generated."
        )
@@ -321,6 +340,11 @@ class SchemaView(SpectacularAPIView):
                "description": "Endpoints for managing third-party integrations, including registration, configuration,"
                " retrieval, and deletion of integrations such as S3, JIRA, or other services.",
            },
            {
                "name": "Lighthouse",
                "description": "Endpoints for managing Lighthouse configurations, including creation, retrieval, "
                "updating, and deletion of configurations such as OpenAI keys, models, and business context.",
            },
        ]
        return super().get(request, *args, **kwargs)

@@ -377,6 +401,173 @@ class GithubSocialLoginView(SocialLoginView):
        return original_response


@extend_schema(exclude=True)
class SAMLInitiateAPIView(GenericAPIView):
    serializer_class = SamlInitiateSerializer
    permission_classes = []

    def post(self, request, *args, **kwargs):
        serializer = self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        email = serializer.validated_data["email_domain"]
        domain = email.split("@", 1)[-1].lower()

        try:
            check = SAMLDomainIndex.objects.get(email_domain=domain)
            with rls_transaction(str(check.tenant_id)):
                config = SAMLConfiguration.objects.get(tenant_id=str(check.tenant_id))
        except (SAMLDomainIndex.DoesNotExist, SAMLConfiguration.DoesNotExist):
            return Response(
                {"detail": "Unauthorized domain."}, status=status.HTTP_403_FORBIDDEN
            )

        # Check certificates are not empty
        saml_public_cert = os.getenv("SAML_PUBLIC_CERT", "").strip()
        saml_private_key = os.getenv("SAML_PRIVATE_KEY", "").strip()

        if not saml_public_cert or not saml_private_key:
            return Response(
                {"detail": "SAML configuration is invalid: missing certificates."},
                status=status.HTTP_403_FORBIDDEN,
            )

        saml_login_url = reverse(
            "saml_login", kwargs={"organization_slug": config.email_domain}
        )
        return redirect(f"{saml_login_url}?email={email}")


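# A minimal sketch (not part of this module) of how a client could exercise the
# endpoint above; the base URL is an assumption, and the view answers with a 302
# to the tenant's allauth SAML login URL when the email domain is registered.
import requests

resp = requests.post(
    "http://localhost:8080/api/v1/auth/saml/initiate/",  # assumed base URL
    json={"email_domain": "user@example.com"},
    allow_redirects=False,  # inspect the 302 instead of following it
)
print(resp.status_code)              # 302 on success, 403 for unknown domains
print(resp.headers.get("Location"))  # the allauth SAML login URL
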
@extend_schema_view(
    list=extend_schema(
        tags=["SAML"],
        summary="List all SSO configurations",
        description="Returns all the SAML-based SSO configurations associated with the current tenant.",
    ),
    retrieve=extend_schema(
        tags=["SAML"],
        summary="Retrieve SSO configuration details",
        description="Returns the details of a specific SAML configuration belonging to the current tenant.",
    ),
    create=extend_schema(
        tags=["SAML"],
        summary="Create the SSO configuration",
        description="Creates a new SAML SSO configuration for the current tenant, including email domain and metadata XML.",
    ),
    partial_update=extend_schema(
        tags=["SAML"],
        summary="Update the SSO configuration",
        description="Partially updates an existing SAML SSO configuration. Supports changes to email domain and metadata XML.",
    ),
    destroy=extend_schema(
        tags=["SAML"],
        summary="Delete the SSO configuration",
        description="Deletes an existing SAML SSO configuration associated with the current tenant.",
    ),
)
@method_decorator(CACHE_DECORATOR, name="retrieve")
@method_decorator(CACHE_DECORATOR, name="list")
class SAMLConfigurationViewSet(BaseRLSViewSet):
    """
    ViewSet for managing SAML SSO configurations per tenant.

    This endpoint allows authorized users to perform CRUD operations on SAMLConfiguration,
    which define how a tenant integrates with an external SAML Identity Provider (IdP).

    Typical use cases include:
    - Listing all existing configurations for auditing or UI display.
    - Retrieving a single configuration to show setup details.
    - Creating or updating a configuration to onboard or modify SAML integration.
    - Deleting a configuration when deactivating SAML for a tenant.
    """

    serializer_class = SAMLConfigurationSerializer
    required_permissions = [Permissions.MANAGE_INTEGRATIONS]
    queryset = SAMLConfiguration.objects.all()

    def get_queryset(self):
        # If called during schema generation, return an empty queryset
        if getattr(self, "swagger_fake_view", False):
            return SAMLConfiguration.objects.none()
        return SAMLConfiguration.objects.filter(tenant=self.request.tenant_id)


class TenantFinishACSView(FinishACSView):
    def dispatch(self, request, organization_slug):
        response = super().dispatch(request, organization_slug)
        user = getattr(request, "user", None)
        if not user or not user.is_authenticated:
            return response

        try:
            social_app = SocialApp.objects.get(
                provider="saml", client_id=organization_slug
            )
            social_account = SocialAccount.objects.get(
                user=user, provider=social_app.provider
            )
        except (SocialApp.DoesNotExist, SocialAccount.DoesNotExist):
            return response

        extra = social_account.extra_data
        user.first_name = (
            extra.get("firstName", [""])[0] if extra.get("firstName") else ""
        )
        user.last_name = extra.get("lastName", [""])[0] if extra.get("lastName") else ""
        user.company_name = (
            extra.get("organization", [""])[0] if extra.get("organization") else ""
        )
        user.name = f"{user.first_name} {user.last_name}".strip()
        if user.name == "":
            user.name = "N/A"
        user.save()

        email_domain = user.email.split("@")[-1]
        tenant = (
            SAMLConfiguration.objects.using(MainRouter.admin_db)
            .get(email_domain=email_domain)
            .tenant
        )
        role_name = (
            extra.get("userType", ["saml_default_role"])[0].strip()
            if extra.get("userType")
            else "saml_default_role"
        )
        try:
            role = Role.objects.using(MainRouter.admin_db).get(
                name=role_name, tenant=tenant
            )
        except Role.DoesNotExist:
            role = Role.objects.using(MainRouter.admin_db).create(
                name=role_name,
                tenant=tenant,
                manage_users=False,
                manage_account=False,
                manage_billing=False,
                manage_providers=False,
                manage_integrations=False,
                manage_scans=False,
                unlimited_visibility=False,
            )
        UserRoleRelationship.objects.using(MainRouter.admin_db).filter(
            user=user,
            tenant_id=tenant.id,
        ).delete()
        UserRoleRelationship.objects.using(MainRouter.admin_db).create(
            user=user,
            role=role,
            tenant_id=tenant.id,
        )

        serializer = TokenSocialLoginSerializer(data={"email": user.email})
        serializer.is_valid(raise_exception=True)
        return JsonResponse(
            {
                "type": "saml-social-tokens",
                "attributes": serializer.validated_data,
            }
        )


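# A quick illustration (hypothetical values) of the list-valued SAML attribute
# mapping used in dispatch() above: present attributes contribute their first
# element, while empty or missing ones fall back to "".
extra = {"firstName": ["Ada"], "lastName": [], "organization": ["Prowler"]}
first_name = extra.get("firstName", [""])[0] if extra.get("firstName") else ""
last_name = extra.get("lastName", [""])[0] if extra.get("lastName") else ""
print(repr(first_name), repr(last_name))  # 'Ada' ''
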
@extend_schema_view(
    list=extend_schema(
        tags=["User"],
@@ -1909,6 +2100,8 @@ class FindingViewSet(PaginateByPkMixin, BaseRLSViewSet):
        )
        resource_types = list(
            queryset.values_list("resource_type", flat=True)
            .exclude(resource_type__isnull=True)
            .exclude(resource_type__exact="")
            .distinct()
            .order_by("resource_type")
        )
@@ -2010,6 +2203,8 @@ class FindingViewSet(PaginateByPkMixin, BaseRLSViewSet):
        )
        resource_types = list(
            queryset.values_list("resource_type", flat=True)
            .exclude(resource_type__isnull=True)
            .exclude(resource_type__exact="")
            .distinct()
            .order_by("resource_type")
        )
@@ -2391,8 +2586,7 @@ class RoleProviderGroupRelationshipView(RelationshipView, BaseRLSViewSet):
    list=extend_schema(
        tags=["Compliance Overview"],
        summary="List compliance overviews for a scan",
        description="Retrieve an overview of all the compliance in a given scan. If no region filters are provided, the"
        " region with the most fails will be returned by default.",
        description="Retrieve an overview of all the compliance in a given scan.",
        parameters=[
            OpenApiParameter(
                name="filter[scan_id]",
@@ -2402,12 +2596,18 @@ class RoleProviderGroupRelationshipView(RelationshipView, BaseRLSViewSet):
                description="Related scan ID.",
            ),
        ],
    ),
    retrieve=extend_schema(
        tags=["Compliance Overview"],
        summary="Retrieve data from a specific compliance overview",
        description="Fetch detailed information about a specific compliance overview by its ID, including detailed "
"requirement information and check's status.",
|
||||
        responses={
            200: OpenApiResponse(
                description="Compliance overviews obtained successfully",
                response=ComplianceOverviewSerializer(many=True),
            ),
            202: OpenApiResponse(
                description="The task is in progress", response=TaskSerializer
            ),
            500: OpenApiResponse(
                description="Compliance overviews generation task failed"
            ),
        },
    ),
    metadata=extend_schema(
        tags=["Compliance Overview"],
@@ -2423,19 +2623,84 @@ class RoleProviderGroupRelationshipView(RelationshipView, BaseRLSViewSet):
                description="Related scan ID.",
            ),
        ],
        responses={
            200: OpenApiResponse(
                description="Compliance overviews metadata obtained successfully",
                response=ComplianceOverviewMetadataSerializer,
            ),
            202: OpenApiResponse(description="The task is in progress"),
            500: OpenApiResponse(
                description="Compliance overviews generation task failed"
            ),
        },
    ),
    requirements=extend_schema(
        tags=["Compliance Overview"],
        summary="List compliance requirements overview for a scan",
        description="Retrieve a detailed overview of compliance requirements in a given scan, grouped by compliance "
        "framework. This endpoint provides requirement-level details and aggregates status across regions.",
        parameters=[
            OpenApiParameter(
                name="filter[scan_id]",
                required=True,
                type=OpenApiTypes.UUID,
                location=OpenApiParameter.QUERY,
                description="Related scan ID.",
            ),
            OpenApiParameter(
                name="filter[compliance_id]",
                required=True,
                type=OpenApiTypes.STR,
                location=OpenApiParameter.QUERY,
                description="Compliance ID.",
            ),
        ],
        responses={
            200: OpenApiResponse(
                description="Compliance requirement details obtained successfully",
                response=ComplianceOverviewDetailSerializer(many=True),
            ),
            202: OpenApiResponse(description="The task is in progress"),
            500: OpenApiResponse(
                description="Compliance overviews generation task failed"
            ),
        },
        filters=True,
    ),
    attributes=extend_schema(
        tags=["Compliance Overview"],
        summary="Get compliance requirement attributes",
        description="Retrieve detailed attribute information for all requirements in a specific compliance framework "
        "along with the associated check IDs for each requirement.",
        parameters=[
            OpenApiParameter(
                name="filter[compliance_id]",
                required=True,
                type=str,
                location=OpenApiParameter.QUERY,
                description="Compliance framework ID to get attributes for.",
            ),
        ],
        responses={
            200: OpenApiResponse(
                description="Compliance attributes obtained successfully",
                response=ComplianceOverviewAttributesSerializer(many=True),
            ),
        },
    ),
)
@method_decorator(CACHE_DECORATOR, name="list")
@method_decorator(CACHE_DECORATOR, name="retrieve")
class ComplianceOverviewViewSet(BaseRLSViewSet):
@method_decorator(CACHE_DECORATOR, name="requirements")
@method_decorator(CACHE_DECORATOR, name="attributes")
class ComplianceOverviewViewSet(BaseRLSViewSet, TaskManagementMixin):
    pagination_class = ComplianceOverviewPagination
    queryset = ComplianceOverview.objects.all()
    queryset = ComplianceRequirementOverview.objects.all()
    serializer_class = ComplianceOverviewSerializer
    filterset_class = ComplianceOverviewFilter
    http_method_names = ["get"]
    search_fields = ["compliance_id"]
    ordering = ["compliance_id"]
    ordering_fields = ["inserted_at", "compliance_id", "framework", "region"]
    ordering_fields = ["compliance_id"]
    # RBAC required permissions (implicit -> MANAGE_PROVIDERS enables unlimited visibility or checks the visibility of
    # the provider through the provider group)
    required_permissions = []
@@ -2446,51 +2711,44 @@ class ComplianceOverviewViewSet(BaseRLSViewSet):
            role, Permissions.UNLIMITED_VISIBILITY.value, False
        )

        if self.action == "retrieve":
            if unlimited_visibility:
                # User has unlimited visibility, return all compliance
                return ComplianceOverview.objects.filter(
                    tenant_id=self.request.tenant_id
                )

            providers = get_providers(role)
            return ComplianceOverview.objects.filter(
                tenant_id=self.request.tenant_id, scan__provider__in=providers
            )

        if unlimited_visibility:
            base_queryset = self.filter_queryset(
                ComplianceOverview.objects.filter(tenant_id=self.request.tenant_id)
                ComplianceRequirementOverview.objects.filter(
                    tenant_id=self.request.tenant_id
                )
            )
        else:
            providers = Provider.objects.filter(
                provider_groups__in=role.provider_groups.all()
            ).distinct()
            base_queryset = self.filter_queryset(
                ComplianceOverview.objects.filter(
                ComplianceRequirementOverview.objects.filter(
                    tenant_id=self.request.tenant_id, scan__provider__in=providers
                )
            )

        max_failed_ids = (
            base_queryset.filter(compliance_id=OuterRef("compliance_id"))
            .order_by("-requirements_failed")
            .values("id")[:1]
        )

        return base_queryset.filter(id__in=Subquery(max_failed_ids)).order_by(
            "compliance_id"
        )
        return base_queryset

    def get_serializer_class(self):
        if self.action == "retrieve":
            return ComplianceOverviewFullSerializer
        if hasattr(self, "response_serializer_class"):
            return self.response_serializer_class
        elif self.action == "list":
            return ComplianceOverviewSerializer
        elif self.action == "metadata":
            return ComplianceOverviewMetadataSerializer
        elif self.action == "attributes":
            return ComplianceOverviewAttributesSerializer
        elif self.action == "requirements":
            return ComplianceOverviewDetailSerializer
        return super().get_serializer_class()

    @extend_schema(exclude=True)
    def retrieve(self, request, *args, **kwargs):
        raise MethodNotAllowed(method="GET")

    def list(self, request, *args, **kwargs):
        if not request.query_params.get("filter[scan_id]"):
        scan_id = request.query_params.get("filter[scan_id]")
        if not scan_id:
            raise ValidationError(
                [
                    {
@@ -2501,7 +2759,82 @@ class ComplianceOverviewViewSet(BaseRLSViewSet):
                    }
                ]
            )
        return super().list(request, *args, **kwargs)
        try:
            if task := self.get_task_response_if_running(
                task_name="scan-compliance-overviews",
                task_kwargs={"tenant_id": self.request.tenant_id, "scan_id": scan_id},
                raise_on_not_found=False,
            ):
                return task
        except TaskFailedException:
            return Response(
                {"detail": "Task failed to generate compliance overview data."},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR,
            )
        queryset = self.filter_queryset(self.get_queryset())

        requirement_status_subquery = queryset.values(
            "compliance_id", "requirement_id"
        ).annotate(
            fail_count=Count("id", filter=Q(requirement_status="FAIL")),
            pass_count=Count("id", filter=Q(requirement_status="PASS")),
            total_count=Count("id"),
        )

        compliance_data = {}
        framework_info = {}

        for item in queryset.values("compliance_id", "framework", "version").distinct():
            framework_info[item["compliance_id"]] = {
                "framework": item["framework"],
                "version": item["version"],
            }

        for item in requirement_status_subquery:
            compliance_id = item["compliance_id"]

            if item["fail_count"] > 0:
                req_status = "FAIL"
            elif item["pass_count"] == item["total_count"]:
                req_status = "PASS"
            else:
                req_status = "MANUAL"

            if compliance_id not in compliance_data:
                compliance_data[compliance_id] = {
                    "total_requirements": 0,
                    "requirements_passed": 0,
                    "requirements_failed": 0,
                    "requirements_manual": 0,
                }

            compliance_data[compliance_id]["total_requirements"] += 1
            if req_status == "PASS":
                compliance_data[compliance_id]["requirements_passed"] += 1
            elif req_status == "FAIL":
                compliance_data[compliance_id]["requirements_failed"] += 1
            else:
                compliance_data[compliance_id]["requirements_manual"] += 1

        response_data = []
        for compliance_id, data in compliance_data.items():
            framework = framework_info.get(compliance_id, {})

            response_data.append(
                {
                    "id": compliance_id,
                    "compliance_id": compliance_id,
                    "framework": framework.get("framework", ""),
                    "version": framework.get("version", ""),
                    "requirements_passed": data["requirements_passed"],
                    "requirements_failed": data["requirements_failed"],
                    "requirements_manual": data["requirements_manual"],
                    "total_requirements": data["total_requirements"],
                }
            )

        serializer = self.get_serializer(response_data, many=True)
        return Response(serializer.data)

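    # A hypothetical helper (not in the original module) that captures the
    # rollup rule used in list() above, handy for unit testing: any FAIL wins,
    # PASS requires every instance to pass, anything else counts as MANUAL.
    @staticmethod
    def _requirement_rollup(fail_count: int, pass_count: int, total_count: int) -> str:
        if fail_count > 0:
            return "FAIL"
        if pass_count == total_count:
            return "PASS"
        return "MANUAL"

    # _requirement_rollup(1, 5, 6) -> "FAIL"
    # _requirement_rollup(0, 6, 6) -> "PASS"
    # _requirement_rollup(0, 4, 6) -> "MANUAL"
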
    @action(detail=False, methods=["get"], url_name="metadata")
    def metadata(self, request):
@@ -2517,11 +2850,21 @@ class ComplianceOverviewViewSet(BaseRLSViewSet):
                    }
                ]
            )

        tenant_id = self.request.tenant_id

        try:
            if task := self.get_task_response_if_running(
                task_name="scan-compliance-overviews",
                task_kwargs={"tenant_id": self.request.tenant_id, "scan_id": scan_id},
                raise_on_not_found=False,
            ):
                return task
        except TaskFailedException:
            return Response(
                {"detail": "Task failed to generate compliance overview data."},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR,
            )
        regions = list(
            ComplianceOverview.objects.filter(tenant_id=tenant_id, scan_id=scan_id)
            self.get_queryset()
            .filter(scan_id=scan_id)
            .values_list("region", flat=True)
            .order_by("region")
            .distinct()
@@ -2532,10 +2875,182 @@ class ComplianceOverviewViewSet(BaseRLSViewSet):
        serializer.is_valid(raise_exception=True)
        return Response(serializer.data, status=status.HTTP_200_OK)

    @action(detail=False, methods=["get"], url_name="requirements")
    def requirements(self, request):
        scan_id = request.query_params.get("filter[scan_id]")
        compliance_id = request.query_params.get("filter[compliance_id]")

        if not scan_id:
            raise ValidationError(
                [
                    {
                        "detail": "This query parameter is required.",
                        "status": 400,
                        "source": {"pointer": "filter[scan_id]"},
                        "code": "required",
                    }
                ]
            )

        if not compliance_id:
            raise ValidationError(
                [
                    {
                        "detail": "This query parameter is required.",
                        "status": 400,
                        "source": {"pointer": "filter[compliance_id]"},
                        "code": "required",
                    }
                ]
            )
        try:
            if task := self.get_task_response_if_running(
                task_name="scan-compliance-overviews",
                task_kwargs={"tenant_id": self.request.tenant_id, "scan_id": scan_id},
                raise_on_not_found=False,
            ):
                return task
        except TaskFailedException:
            return Response(
                {"detail": "Task failed to generate compliance overview data."},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR,
            )
        filtered_queryset = self.filter_queryset(self.get_queryset())

        all_requirements = (
            filtered_queryset.values(
                "requirement_id", "framework", "version", "description"
            )
            .distinct()
            .annotate(
                total_instances=Count("id"),
                manual_count=Count("id", filter=Q(requirement_status="MANUAL")),
            )
        )

        passed_instances = (
            filtered_queryset.filter(requirement_status="PASS")
            .values("requirement_id")
            .annotate(pass_count=Count("id"))
        )

        passed_counts = {
            item["requirement_id"]: item["pass_count"] for item in passed_instances
        }

        requirements_summary = []
        for requirement in all_requirements:
            requirement_id = requirement["requirement_id"]
            total_instances = requirement["total_instances"]
            passed_count = passed_counts.get(requirement_id, 0)
            is_manual = requirement["manual_count"] == total_instances
            if is_manual:
                requirement_status = "MANUAL"
            elif passed_count == total_instances:
                requirement_status = "PASS"
            else:
                requirement_status = "FAIL"

            requirements_summary.append(
                {
                    "id": requirement_id,
                    "framework": requirement["framework"],
                    "version": requirement["version"],
                    "description": requirement["description"],
                    "status": requirement_status,
                }
            )

        serializer = self.get_serializer(requirements_summary, many=True)
        return Response(serializer.data, status=status.HTTP_200_OK)

    @action(detail=False, methods=["get"], url_name="attributes")
    def attributes(self, request):
        compliance_id = request.query_params.get("filter[compliance_id]")
        if not compliance_id:
            raise ValidationError(
                [
                    {
                        "detail": "This query parameter is required.",
                        "status": 400,
                        "source": {"pointer": "filter[compliance_id]"},
                        "code": "required",
                    }
                ]
            )

        provider_type = None
        try:
            sample_requirement = (
                self.get_queryset().filter(compliance_id=compliance_id).first()
            )

            if sample_requirement:
                provider_type = sample_requirement.scan.provider.provider
        except Exception:
            pass

        # If we couldn't determine from database, try each provider type
        if not provider_type:
            for pt in Provider.ProviderChoices.values:
                if compliance_id in PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE.get(pt, {}):
                    provider_type = pt
                    break

        if not provider_type:
            raise NotFound(detail=f"Compliance framework '{compliance_id}' not found.")

        compliance_template = PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE.get(
            provider_type, {}
        )
        compliance_framework = compliance_template.get(compliance_id)

        if not compliance_framework:
            raise NotFound(detail=f"Compliance framework '{compliance_id}' not found.")

        attribute_data = []
        for requirement_id, requirement in compliance_framework.get(
            "requirements", {}
        ).items():
            check_ids = list(requirement.get("checks", {}).keys())

            metadata = requirement.get("attributes", [])

            base_attributes = {
                "metadata": metadata,
                "check_ids": check_ids,
            }

            # Add technique details for MITRE-ATTACK framework
            if "mitre_attack" in compliance_id:
                base_attributes["technique_details"] = {
                    "tactics": requirement.get("tactics", []),
                    "subtechniques": requirement.get("subtechniques", []),
                    "platforms": requirement.get("platforms", []),
                    "technique_url": requirement.get("technique_url", ""),
                }

            attribute_data.append(
                {
                    "id": requirement_id,
                    "framework_description": compliance_framework.get(
                        "description", ""
                    ),
                    "name": requirement.get("name", ""),
                    "framework": compliance_framework.get("framework", ""),
                    "version": compliance_framework.get("version", ""),
                    "description": requirement.get("description", ""),
                    "attributes": base_attributes,
                }
            )

        serializer = self.get_serializer(attribute_data, many=True)
        return Response(serializer.data, status=status.HTTP_200_OK)


@extend_schema(tags=["Overview"])
@extend_schema_view(
    providers=extend_schema(
    list=extend_schema(
        summary="Get aggregated provider data",
        description=(
            "Retrieve an aggregated overview of findings and resources grouped by providers. "
@@ -2578,7 +3093,7 @@ class ComplianceOverviewViewSet(BaseRLSViewSet):
class OverviewViewSet(BaseRLSViewSet):
    queryset = ComplianceOverview.objects.all()
    http_method_names = ["get"]
    ordering = ["-id"]
    ordering = ["-inserted_at"]
    # RBAC required permissions (implicit -> MANAGE_PROVIDERS enables unlimited visibility or checks the visibility of
    # the provider through the provider group)
    required_permissions = []
@@ -2640,33 +3155,48 @@ class OverviewViewSet(BaseRLSViewSet):
            .values_list("id", flat=True)
        )

        resource_count_queryset = (
            Resource.all_objects.filter(
                tenant_id=tenant_id,
                provider_id=OuterRef("scan__provider_id"),
            )
            .order_by()
            .values("provider_id")
            .annotate(cnt=Count("id"))
            .values("cnt")
        )

        overview_queryset = (
        findings_aggregated = (
            ScanSummary.all_objects.filter(
                tenant_id=tenant_id, scan_id__in=latest_scan_ids
            )
            .values(provider=F("scan__provider__provider"))
            .values(
                "scan__provider_id",
                provider=F("scan__provider__provider"),
            )
            .annotate(
                findings_passed=Coalesce(Sum("_pass"), 0),
                findings_failed=Coalesce(Sum("fail"), 0),
                findings_muted=Coalesce(Sum("muted"), 0),
                total_findings=Coalesce(Sum("total"), 0),
                total_resources=Coalesce(Subquery(resource_count_queryset), 0),
            )
        )

        serializer = OverviewProviderSerializer(overview_queryset, many=True)
        return Response(serializer.data, status=status.HTTP_200_OK)
        resources_aggregated = (
            Resource.all_objects.filter(tenant_id=tenant_id)
            .values("provider_id")
            .annotate(total_resources=Count("id"))
        )
        resource_map = {
            row["provider_id"]: row["total_resources"] for row in resources_aggregated
        }

        overview = []
        for row in findings_aggregated:
            overview.append(
                {
                    "provider": row["provider"],
                    "total_resources": resource_map.get(row["scan__provider_id"], 0),
                    "total_findings": row["total_findings"],
                    "findings_passed": row["findings_passed"],
                    "findings_failed": row["findings_failed"],
                    "findings_muted": row["findings_muted"],
                }
            )

        return Response(
            OverviewProviderSerializer(overview, many=True).data,
            status=status.HTTP_200_OK,
        )

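    # A hypothetical standalone version (not in the original module) of the
    # two-query merge above: findings and resource counts are aggregated
    # separately, then joined in Python instead of via a correlated subquery.
    @staticmethod
    def _merge_overview(findings_rows, resource_map):
        return [
            {**row, "total_resources": resource_map.get(row["scan__provider_id"], 0)}
            for row in findings_rows
        ]
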
    @action(detail=False, methods=["get"], url_name="findings")
    def findings(self, request):
@@ -2885,3 +3415,80 @@ class IntegrationViewSet(BaseRLSViewSet):
        context = super().get_serializer_context()
        context["allowed_providers"] = self.allowed_providers
        return context


@extend_schema_view(
    list=extend_schema(
        tags=["Lighthouse"],
        summary="List all Lighthouse configurations",
        description="Retrieve a list of all Lighthouse configurations.",
    ),
    create=extend_schema(
        tags=["Lighthouse"],
        summary="Create a new Lighthouse configuration",
        description="Create a new Lighthouse configuration with the specified details.",
    ),
    partial_update=extend_schema(
        tags=["Lighthouse"],
        summary="Partially update a Lighthouse configuration",
        description="Update certain fields of an existing Lighthouse configuration.",
    ),
    destroy=extend_schema(
        tags=["Lighthouse"],
        summary="Delete a Lighthouse configuration",
        description="Remove a Lighthouse configuration by its ID.",
    ),
    connection=extend_schema(
        tags=["Lighthouse"],
        summary="Check the connection to the OpenAI API",
        description="Verify the connection to the OpenAI API for a specific Lighthouse configuration.",
        request=None,
        responses={202: OpenApiResponse(response=TaskSerializer)},
    ),
)
class LighthouseConfigViewSet(BaseRLSViewSet):
    """
    API endpoint for managing Lighthouse configuration.
    """

    serializer_class = LighthouseConfigSerializer
    ordering_fields = ["name", "inserted_at", "updated_at", "is_active"]
    ordering = ["-inserted_at"]

    def get_queryset(self):
        return LighthouseConfiguration.objects.filter(tenant_id=self.request.tenant_id)

    def get_serializer_class(self):
        if self.action == "create":
            return LighthouseConfigCreateSerializer
        elif self.action == "partial_update":
            return LighthouseConfigUpdateSerializer
        elif self.action == "connection":
            return TaskSerializer
        return super().get_serializer_class()

    @extend_schema(exclude=True)
    def retrieve(self, request, *args, **kwargs):
        raise MethodNotAllowed(method="GET")

    @action(detail=True, methods=["post"], url_name="connection")
    def connection(self, request, pk=None):
        """
        Check the connection to the OpenAI API asynchronously.
        """
        instance = self.get_object()
        with transaction.atomic():
            task = check_lighthouse_connection_task.delay(
                lighthouse_config_id=str(instance.id), tenant_id=self.request.tenant_id
            )
        prowler_task = Task.objects.get(id=task.id)
        serializer = TaskSerializer(prowler_task)
        return Response(
            data=serializer.data,
            status=status.HTTP_202_ACCEPTED,
            headers={
                "Content-Location": reverse(
                    "task-detail", kwargs={"pk": prowler_task.id}
                )
            },
        )

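# A sketch of how a client might consume the 202 response above: trigger the
# check, then poll the task URL from the Content-Location header until the
# task settles. Base URL, token, and the exact state names are assumptions.
import time

import requests

BASE = "http://localhost:8080/api/v1"  # assumed
HEADERS = {"Authorization": "Bearer <access_token>"}  # assumed

resp = requests.post(f"{BASE}/lighthouse-configurations/<id>/connection", headers=HEADERS)
task_url = resp.headers["Content-Location"]
while True:
    attributes = requests.get(task_url, headers=HEADERS).json()["data"]["attributes"]
    if attributes["state"] not in ("available", "scheduled", "executing"):
        break
    time.sleep(2)
print(attributes["state"])
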
@@ -1,6 +1,13 @@
import warnings

from celery import Celery, Task
from config.env import env

# Suppress specific warnings from django-rest-auth: https://github.com/iMerica/dj-rest-auth/issues/684
warnings.filterwarnings(
    "ignore", category=UserWarning, module="dj_rest_auth.registration.serializers"
)

BROKER_VISIBILITY_TIMEOUT = env.int("DJANGO_BROKER_VISIBILITY_TIMEOUT", default=86400)

celery_app = Celery("tasks")

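# A sketch of where the visibility timeout above typically lands; the
# "visibility_timeout" key is the standard Redis-transport option in Celery,
# though wiring it exactly this way here is an assumption.
celery_app.conf.broker_transport_options = {
    "visibility_timeout": BROKER_VISIBILITY_TIMEOUT,  # seconds before an unacked task is redelivered
}
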
@@ -10,6 +10,7 @@ from config.settings.social_login import *  # noqa
SECRET_KEY = env("SECRET_KEY", default="secret")
DEBUG = env.bool("DJANGO_DEBUG", default=False)
ALLOWED_HOSTS = ["localhost", "127.0.0.1"]
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")

# Application definition

@@ -26,16 +27,19 @@ INSTALLED_APPS = [
    "rest_framework",
    "corsheaders",
    "drf_spectacular",
    "drf_spectacular_jsonapi",
    "django_guid",
    "rest_framework_json_api",
    "django_celery_results",
    "django_celery_beat",
    "rest_framework_simplejwt.token_blacklist",
    "allauth",
    "django.contrib.sites",
    "allauth.account",
    "allauth.socialaccount",
    "allauth.socialaccount.providers.google",
    "allauth.socialaccount.providers.github",
    "allauth.socialaccount.providers.saml",
    "dj_rest_auth.registration",
    "rest_framework.authtoken",
]
@@ -127,7 +131,6 @@ DJANGO_GUID = {
}

DATABASE_ROUTERS = ["api.db_router.MainRouter"]
POSTGRES_EXTRA_DB_BACKEND_BASE = "database_backend"


# Password validation
@@ -239,8 +242,9 @@ DJANGO_OUTPUT_S3_AWS_SECRET_ACCESS_KEY = env.str(
DJANGO_OUTPUT_S3_AWS_SESSION_TOKEN = env.str("DJANGO_OUTPUT_S3_AWS_SESSION_TOKEN", "")
DJANGO_OUTPUT_S3_AWS_DEFAULT_REGION = env.str("DJANGO_OUTPUT_S3_AWS_DEFAULT_REGION", "")

DJANGO_DELETION_BATCH_SIZE = env.int("DJANGO_DELETION_BATCH_SIZE", 5000)
# HTTP Security Headers
SECURE_CONTENT_TYPE_NOSNIFF = True
X_FRAME_OPTIONS = "DENY"
SECURE_REFERRER_POLICY = "strict-origin-when-cross-origin"

DJANGO_DELETION_BATCH_SIZE = env.int("DJANGO_DELETION_BATCH_SIZE", 5000)

@@ -79,9 +79,16 @@ def before_send(event, hint):
    log_msg = hint["log_record"].msg
    log_lvl = hint["log_record"].levelno

    # Handle Error events and discard the rest
    if log_lvl == 40 and any(ignored in log_msg for ignored in IGNORED_EXCEPTIONS):
        return
    # Handle Error and Critical events and discard the rest
    if log_lvl >= 40 and any(ignored in log_msg for ignored in IGNORED_EXCEPTIONS):
        return None  # Explicitly return None to drop the event

    # Ignore exceptions with the ignored_exceptions
    if "exc_info" in hint and hint["exc_info"]:
        exc_value = str(hint["exc_info"][1])
        if any(ignored in exc_value for ignored in IGNORED_EXCEPTIONS):
            return None  # Explicitly return None to drop the event

    return event



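# A minimal sketch of how a before_send hook like the one above is registered;
# the DSN is a placeholder and IGNORED_EXCEPTIONS is assumed to be a list of
# substrings identifying noise that should never reach Sentry.
import sentry_sdk

IGNORED_EXCEPTIONS = ["Connection reset by peer"]  # assumed contents

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    before_send=before_send,  # the hook defined above
)
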
@@ -11,8 +11,7 @@ GITHUB_OAUTH_CALLBACK_URL = env("SOCIAL_GITHUB_OAUTH_CALLBACK_URL", default="")

# Allauth settings
ACCOUNT_LOGIN_METHODS = {"email"}  # Use Email / Password authentication
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_SIGNUP_FIELDS = ["email*", "password1*", "password2*"]
ACCOUNT_EMAIL_VERIFICATION = "none"  # Do not require email confirmation
ACCOUNT_USER_MODEL_USERNAME_FIELD = None
REST_AUTH = {
@@ -25,6 +24,11 @@ SOCIALACCOUNT_EMAIL_AUTHENTICATION = True
# Connect local account and social account if local account with that email address already exists
SOCIALACCOUNT_EMAIL_AUTHENTICATION_AUTO_CONNECT = True
SOCIALACCOUNT_ADAPTER = "api.adapters.ProwlerSocialAccountAdapter"

# SAML keys
SAML_PUBLIC_CERT = env("SAML_PUBLIC_CERT", default="")
SAML_PRIVATE_KEY = env("SAML_PRIVATE_KEY", default="")

SOCIALACCOUNT_PROVIDERS = {
    "google": {
        "APP": {
@@ -50,4 +54,18 @@ SOCIALACCOUNT_PROVIDERS = {
            "read:org",
        ],
    },
    "saml": {
        "use_nameid_for_email": True,
        "sp": {
            "entity_id": "urn:prowler.com:sp",
        },
        "advanced": {
            "x509cert": SAML_PUBLIC_CERT,
            "private_key": SAML_PRIVATE_KEY,
            "name_id_format": "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress",
            "authn_request_signed": True,
            "want_assertion_signed": True,
            "want_message_signed": True,
        },
    },
}

+204 -1
@@ -1,8 +1,9 @@
import logging
from datetime import datetime, timedelta, timezone
from unittest.mock import patch
from unittest.mock import MagicMock, patch

import pytest
from allauth.socialaccount.models import SocialLogin
from django.conf import settings
from django.db import connection as django_connection
from django.db import connections as django_connections
@@ -15,10 +16,12 @@ from tasks.jobs.backfill import backfill_resource_scan_summaries
from api.db_utils import rls_transaction
from api.models import (
    ComplianceOverview,
    ComplianceRequirementOverview,
    Finding,
    Integration,
    IntegrationProviderRelationship,
    Invitation,
    LighthouseConfiguration,
    Membership,
    Provider,
    ProviderGroup,
@@ -26,9 +29,12 @@ from api.models import (
    Resource,
    ResourceTag,
    Role,
    SAMLConfiguration,
    SAMLDomainIndex,
    Scan,
    ScanSummary,
    StateChoices,
    StatusChoices,
    Task,
    User,
    UserRoleRelationship,
@@ -777,6 +783,131 @@ def compliance_overviews_fixture(scans_fixture, tenants_fixture):
    return compliance_overview1, compliance_overview2


@pytest.fixture
def compliance_requirements_overviews_fixture(scans_fixture, tenants_fixture):
    """Fixture for ComplianceRequirementOverview objects used by the new ComplianceOverviewViewSet."""
    tenant = tenants_fixture[0]
    scan1, scan2, scan3 = scans_fixture

    # Create ComplianceRequirementOverview objects for scan1
    requirement_overview1 = ComplianceRequirementOverview.objects.create(
        tenant=tenant,
        scan=scan1,
        compliance_id="aws_account_security_onboarding_aws",
        framework="AWS-Account-Security-Onboarding",
        version="1.0",
        description="Description for AWS Account Security Onboarding",
        region="eu-west-1",
        requirement_id="requirement1",
        requirement_status=StatusChoices.PASS,
        passed_checks=2,
        failed_checks=0,
        total_checks=2,
    )

    requirement_overview2 = ComplianceRequirementOverview.objects.create(
        tenant=tenant,
        scan=scan1,
        compliance_id="aws_account_security_onboarding_aws",
        framework="AWS-Account-Security-Onboarding",
        version="1.0",
        description="Description for AWS Account Security Onboarding",
        region="eu-west-1",
        requirement_id="requirement2",
        requirement_status=StatusChoices.PASS,
        passed_checks=2,
        failed_checks=0,
        total_checks=2,
    )

    requirement_overview3 = ComplianceRequirementOverview.objects.create(
        tenant=tenant,
        scan=scan1,
        compliance_id="aws_account_security_onboarding_aws",
        framework="AWS-Account-Security-Onboarding",
        version="1.0",
        description="Description for AWS Account Security Onboarding",
        region="eu-west-2",
        requirement_id="requirement1",
        requirement_status=StatusChoices.PASS,
        passed_checks=2,
        failed_checks=0,
        total_checks=2,
    )

    requirement_overview4 = ComplianceRequirementOverview.objects.create(
        tenant=tenant,
        scan=scan1,
        compliance_id="aws_account_security_onboarding_aws",
        framework="AWS-Account-Security-Onboarding",
        version="1.0",
        description="Description for AWS Account Security Onboarding",
        region="eu-west-2",
        requirement_id="requirement2",
        requirement_status=StatusChoices.FAIL,
        passed_checks=1,
        failed_checks=1,
        total_checks=2,
    )

    requirement_overview5 = ComplianceRequirementOverview.objects.create(
        tenant=tenant,
        scan=scan1,
        compliance_id="aws_account_security_onboarding_aws",
        framework="AWS-Account-Security-Onboarding",
        version="1.0",
        description="Description for AWS Account Security Onboarding (MANUAL)",
        region="eu-west-2",
        requirement_id="requirement3",
        requirement_status=StatusChoices.MANUAL,
        passed_checks=0,
        failed_checks=0,
        total_checks=0,
    )

    # Create a different compliance framework for testing
    requirement_overview6 = ComplianceRequirementOverview.objects.create(
        tenant=tenant,
        scan=scan1,
        compliance_id="cis_1.4_aws",
        framework="CIS-1.4-AWS",
        version="1.4",
        description="CIS AWS Foundations Benchmark v1.4.0",
        region="eu-west-1",
        requirement_id="cis_requirement1",
        requirement_status=StatusChoices.FAIL,
        passed_checks=0,
        failed_checks=3,
        total_checks=3,
    )

    # Create another compliance framework for testing MITRE ATT&CK
    requirement_overview7 = ComplianceRequirementOverview.objects.create(
        tenant=tenant,
        scan=scan1,
        compliance_id="mitre_attack_aws",
        framework="MITRE-ATTACK",
        version="1.0",
        description="MITRE ATT&CK",
        region="eu-west-1",
        requirement_id="mitre_requirement1",
        requirement_status=StatusChoices.FAIL,
        passed_checks=0,
        failed_checks=0,
        total_checks=0,
    )

    return (
        requirement_overview1,
        requirement_overview2,
        requirement_overview3,
        requirement_overview4,
        requirement_overview5,
        requirement_overview6,
        requirement_overview7,
    )


def get_api_tokens(
    api_client, user_email: str, user_password: str, tenant_id: str = None
) -> tuple[str, str]:
@@ -929,6 +1060,20 @@ def backfill_scan_metadata_fixture(scans_fixture, findings_fixture):
    backfill_resource_scan_summaries(tenant_id=tenant_id, scan_id=scan_id)


@pytest.fixture
def lighthouse_config_fixture(authenticated_client, tenants_fixture):
    return LighthouseConfiguration.objects.create(
        tenant_id=tenants_fixture[0].id,
        name="OpenAI",
        api_key_decoded="sk-test1234567890T3BlbkFJtest1234567890",
        model="gpt-4o",
        temperature=0,
        max_tokens=4000,
        business_context="Test business context",
        is_active=True,
    )


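# A sketch of a test built on the fixture above; it only asserts values the
# fixture itself sets, so it should hold as written.
@pytest.mark.django_db
def test_lighthouse_config_defaults(lighthouse_config_fixture):
    assert lighthouse_config_fixture.model == "gpt-4o"
    assert lighthouse_config_fixture.is_active is True
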
@pytest.fixture(scope="function")
def latest_scan_finding(authenticated_client, providers_fixture, resources_fixture):
    provider = providers_fixture[0]
@@ -970,6 +1115,64 @@ def latest_scan_finding(authenticated_client, providers_fixture, resources_fixture):
    return finding


@pytest.fixture
def saml_setup(tenants_fixture):
    tenant_id = tenants_fixture[0].id
    domain = "example.com"

    SAMLDomainIndex.objects.create(email_domain=domain, tenant_id=tenant_id)

    metadata_xml = """<?xml version='1.0' encoding='UTF-8'?>
<md:EntityDescriptor entityID='TEST' xmlns:md='urn:oasis:names:tc:SAML:2.0:metadata'>
  <md:IDPSSODescriptor WantAuthnRequestsSigned='false' protocolSupportEnumeration='urn:oasis:names:tc:SAML:2.0:protocol'>
    <md:KeyDescriptor use='signing'>
      <ds:KeyInfo xmlns:ds='http://www.w3.org/2000/09/xmldsig#'>
        <ds:X509Data>
          <ds:X509Certificate>TEST</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
    <md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST' Location='https://TEST/sso/saml'/>
    <md:SingleSignOnService Binding='urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect' Location='https://TEST/sso/saml'/>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>
"""
    SAMLConfiguration.objects.create(
        tenant_id=str(tenant_id),
        email_domain=domain,
        metadata_xml=metadata_xml,
    )

    return {
        "email": f"user@{domain}",
        "domain": domain,
        "tenant_id": tenant_id,
    }


@pytest.fixture
def saml_sociallogin(users_fixture):
    user = users_fixture[0]
    user.email = "samlsso@acme.com"
    extra_data = {
        "firstName": ["Test"],
        "lastName": ["User"],
        "organization": ["Prowler"],
        "userType": ["member"],
    }

    account = MagicMock()
    account.provider = "saml"
    account.extra_data = extra_data

    sociallogin = MagicMock(spec=SocialLogin)
    sociallogin.account = account
    sociallogin.user = user

    return sociallogin


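# A sketch of how saml_setup might be combined with the API client in a test;
# the client fixture name and the expected status codes are assumptions.
def test_saml_initiate_known_domain(client, saml_setup):
    response = client.post(
        "/api/v1/auth/saml/initiate/",
        data={"email_domain": saml_setup["email"]},
        content_type="application/json",
    )
    # 302 when SAML certificates are configured, 403 when they are missing
    assert response.status_code in (302, 403)
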
def get_authorization_header(access_token: str) -> dict:
    return {"Authorization": f"Bearer {access_token}"}


@@ -1,15 +0,0 @@
import django.db
from django.db.backends.postgresql.base import (
    DatabaseWrapper as BuiltinPostgresDatabaseWrapper,
)
from psycopg2 import InterfaceError


class DatabaseWrapper(BuiltinPostgresDatabaseWrapper):
    def create_cursor(self, name=None):
        try:
            return super().create_cursor(name=name)
        except InterfaceError:
            django.db.close_old_connections()
            django.db.connection.connect()
            return super().create_cursor(name=name)
@@ -3,6 +3,12 @@

import os
import sys
import warnings

# Suppress specific warnings from django-rest-auth: https://github.com/iMerica/dj-rest-auth/issues/684
warnings.filterwarnings(
    "ignore", category=UserWarning, module="dj_rest_auth.registration.serializers"
)


def main():

@@ -1,8 +1,9 @@
from datetime import datetime, timezone

import openai
from celery.utils.log import get_task_logger

from api.models import Provider
from api.models import LighthouseConfiguration, Provider
from api.utils import prowler_provider_connection_test

logger = get_task_logger(__name__)
@@ -39,3 +40,46 @@ def check_provider_connection(provider_id: str):

    connection_error = f"{connection_result.error}" if connection_result.error else None
    return {"connected": connection_result.is_connected, "error": connection_error}


def check_lighthouse_connection(lighthouse_config_id: str):
    """
    Business logic to check the connection status of a Lighthouse configuration.

    Args:
        lighthouse_config_id (str): The primary key of the LighthouseConfiguration instance to check.

    Returns:
        dict: A dictionary containing:
            - 'connected' (bool): Indicates whether the connection is successful.
            - 'error' (str or None): The error message if the connection failed, otherwise `None`.
            - 'available_models' (list): List of available models if connection is successful.

    Raises:
        Model.DoesNotExist: If the lighthouse configuration does not exist.
    """
    lighthouse_config = LighthouseConfiguration.objects.get(pk=lighthouse_config_id)

    if not lighthouse_config.api_key_decoded:
        lighthouse_config.is_active = False
        lighthouse_config.save()
        return {
            "connected": False,
            "error": "API key is invalid or missing.",
            "available_models": [],
        }

    try:
        client = openai.OpenAI(api_key=lighthouse_config.api_key_decoded)
        models = client.models.list()
        lighthouse_config.is_active = True
        lighthouse_config.save()
        return {
            "connected": True,
            "error": None,
            "available_models": [model.id for model in models.data],
        }
    except Exception as e:
        lighthouse_config.is_active = False
        lighthouse_config.save()
        return {"connected": False, "error": str(e), "available_models": []}

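# A quick usage sketch for the helper above; the UUID is a placeholder for an
# existing LighthouseConfiguration primary key.
result = check_lighthouse_connection("11111111-2222-3333-4444-555555555555")
if result["connected"]:
    print("Connected; models available:", len(result["available_models"]))
else:
    print("Connection failed:", result["error"])
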
@@ -1,4 +1,5 @@
import os
import re
import zipfile

import boto3
@@ -238,15 +239,18 @@ def _generate_output_directory(
        '/tmp/tenant-1234/aws/scan-5678/prowler-output-2023-02-15T12:34:56',
        '/tmp/tenant-1234/aws/scan-5678/compliance/prowler-output-2023-02-15T12:34:56'
    """
    # Sanitize the prowler provider name to ensure it is a valid directory name
    prowler_provider_sanitized = re.sub(r"[^\w\-]", "-", prowler_provider)

    path = (
        f"{output_directory}/{tenant_id}/{scan_id}/prowler-output-"
        f"{prowler_provider}-{output_file_timestamp}"
        f"{prowler_provider_sanitized}-{output_file_timestamp}"
    )
    os.makedirs("/".join(path.split("/")[:-1]), exist_ok=True)

    compliance_path = (
        f"{output_directory}/{tenant_id}/{scan_id}/compliance/prowler-output-"
        f"{prowler_provider}-{output_file_timestamp}"
        f"{prowler_provider_sanitized}-{output_file_timestamp}"
    )
    os.makedirs("/".join(compliance_path.split("/")[:-1]), exist_ok=True)


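# A quick illustration of the sanitization above: anything that is not a word
# character or a hyphen is replaced, so a provider UID cannot smuggle path
# separators into the output directory.
import re

for provider_uid in ("aws", "kubernetes:context/prod", "../../etc"):
    print(re.sub(r"[^\w\-]", "-", provider_uid))
# aws
# kubernetes-context-prod
# ------etc
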
@@ -13,9 +13,9 @@ from api.compliance import (
    PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE,
    generate_scan_compliance,
)
from api.db_utils import rls_transaction
from api.db_utils import create_objects_in_batches, rls_transaction
from api.models import (
    ComplianceOverview,
    ComplianceRequirementOverview,
    Finding,
    Provider,
    Resource,
@@ -119,7 +119,6 @@ def perform_prowler_scan(
        ValueError: If the provider cannot be connected.

    """
    check_status_by_region = {}
    exception = None
    unique_resources = set()
    scan_resource_cache: set[tuple[str, str, str, str]] = set()
@@ -293,16 +292,6 @@ def perform_prowler_scan(
                    )
                    finding_instance.add_resources([resource_instance])

                # Update compliance data if applicable
                if finding.status.value == "MUTED":
                    continue

                region_dict = check_status_by_region.setdefault(finding.region, {})
                current_status = region_dict.get(finding.check_id)
                if current_status == "FAIL":
                    continue
                region_dict[finding.check_id] = finding.status.value

                # Update scan resource summaries
                scan_resource_cache.add(
                    (
@@ -335,63 +324,6 @@ def perform_prowler_scan(
    if exception is not None:
        raise exception

    try:
        regions = prowler_provider.get_regions()
    except AttributeError:
        regions = set()

    compliance_template = PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE[
        provider_instance.provider
    ]
    compliance_overview_by_region = {
        region: deepcopy(compliance_template) for region in regions
    }

    for region, check_status in check_status_by_region.items():
        compliance_data = compliance_overview_by_region.setdefault(
            region, deepcopy(compliance_template)
        )
        for check_name, status in check_status.items():
            generate_scan_compliance(
                compliance_data,
                provider_instance.provider,
                check_name,
                status,
            )

    # Prepare compliance overview objects
    compliance_overview_objects = []
    for region, compliance_data in compliance_overview_by_region.items():
        for compliance_id, compliance in compliance_data.items():
            compliance_overview_objects.append(
                ComplianceOverview(
                    tenant_id=tenant_id,
                    scan=scan_instance,
                    region=region,
                    compliance_id=compliance_id,
                    framework=compliance["framework"],
                    version=compliance["version"],
                    description=compliance["description"],
                    requirements=compliance["requirements"],
                    requirements_passed=compliance["requirements_status"]["passed"],
                    requirements_failed=compliance["requirements_status"]["failed"],
                    requirements_manual=compliance["requirements_status"]["manual"],
                    total_requirements=compliance["total_requirements"],
                )
            )
    try:
        with rls_transaction(tenant_id):
            ComplianceOverview.objects.bulk_create(
                compliance_overview_objects, batch_size=500
            )
    except Exception as overview_exception:
        import sentry_sdk

        sentry_sdk.capture_exception(overview_exception)
        logger.error(
            f"Error storing compliance overview for scan {scan_id}: {overview_exception}"
        )

    try:
        resource_scan_summaries = [
            ResourceScanSummary(
@@ -570,3 +502,114 @@ def aggregate_findings(tenant_id: str, scan_id: str):
        for agg in aggregation
    }
    ScanSummary.objects.bulk_create(scan_aggregations, batch_size=3000)


def create_compliance_requirements(tenant_id: str, scan_id: str):
    """
    Create detailed compliance requirement overview records for a scan.

    This function processes the compliance data collected during a scan and creates
    individual records for each compliance requirement in each region. These detailed
    records provide a granular view of compliance status.

    Args:
        tenant_id (str): The ID of the tenant for which to create records.
        scan_id (str): The ID of the scan for which to create records.

    Returns:
        dict: A dictionary containing the number of requirements created and the regions processed.

    Raises:
        ValidationError: If tenant_id is not a valid UUID.
    """
    try:
        with rls_transaction(tenant_id):
            scan_instance = Scan.objects.get(pk=scan_id)
            provider_instance = scan_instance.provider
            prowler_provider = initialize_prowler_provider(provider_instance)

        # Get check status data by region from findings
        check_status_by_region = {}
        with rls_transaction(tenant_id):
            findings = Finding.objects.filter(scan_id=scan_id, muted=False)
            for finding in findings:
                # Get region from resources
                for resource in finding.resources.all():
                    region = resource.region
                    region_dict = check_status_by_region.setdefault(region, {})
                    current_status = region_dict.get(finding.check_id)
                    if current_status == "FAIL":
                        continue
                    region_dict[finding.check_id] = finding.status

        try:
            # Try to get regions from provider
            regions = prowler_provider.get_regions()
except (AttributeError, Exception):
|
||||
# If not available, use regions from findings
|
||||
regions = set(check_status_by_region.keys())
|
||||
|
||||
# Get compliance template for the provider
|
||||
compliance_template = PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE[
|
||||
provider_instance.provider
|
||||
]
|
||||
|
||||
# Create compliance data by region
|
||||
compliance_overview_by_region = {
|
||||
region: deepcopy(compliance_template) for region in regions
|
||||
}
|
||||
|
||||
# Apply check statuses to compliance data
|
||||
for region, check_status in check_status_by_region.items():
|
||||
compliance_data = compliance_overview_by_region.setdefault(
|
||||
region, deepcopy(compliance_template)
|
||||
)
|
||||
for check_name, status in check_status.items():
|
||||
generate_scan_compliance(
|
||||
compliance_data,
|
||||
provider_instance.provider,
|
||||
check_name,
|
||||
status,
|
||||
)
|
||||
|
||||
# Prepare compliance requirement objects
|
||||
compliance_requirement_objects = []
|
||||
for region, compliance_data in compliance_overview_by_region.items():
|
||||
for compliance_id, compliance in compliance_data.items():
|
||||
# Create an overview record for each requirement within each compliance framework
|
||||
for requirement_id, requirement in compliance["requirements"].items():
|
||||
compliance_requirement_objects.append(
|
||||
ComplianceRequirementOverview(
|
||||
tenant_id=tenant_id,
|
||||
scan=scan_instance,
|
||||
region=region,
|
||||
compliance_id=compliance_id,
|
||||
framework=compliance["framework"],
|
||||
version=compliance["version"],
|
||||
requirement_id=requirement_id,
|
||||
description=requirement["description"],
|
||||
passed_checks=requirement["checks_status"]["pass"],
|
||||
failed_checks=requirement["checks_status"]["fail"],
|
||||
total_checks=requirement["checks_status"]["total"],
|
||||
requirement_status=requirement["status"],
|
||||
)
|
||||
)
|
||||
|
||||
# Bulk create requirement records
|
||||
create_objects_in_batches(
|
||||
tenant_id, ComplianceRequirementOverview, compliance_requirement_objects
|
||||
)
|
||||
|
||||
return {
|
||||
"requirements_created": len(compliance_requirement_objects),
|
||||
"regions_processed": list(regions),
|
||||
"compliance_frameworks": (
|
||||
list(compliance_overview_by_region.get(list(regions)[0], {}).keys())
|
||||
if regions
|
||||
else []
|
||||
),
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error creating compliance requirements for scan {scan_id}: {e}")
|
||||
raise e
|
||||
|
||||
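The job returns the summary dict described in its docstring, so it can be exercised directly from a Django shell. A minimal usage sketch (not part of the diff above; both UUIDs are placeholders):

```python
# Sketch, assuming a completed scan exists; both UUIDs are placeholders.
from tasks.jobs.scan import create_compliance_requirements

summary = create_compliance_requirements(
    tenant_id="3fa85f64-5717-4562-b3fc-2c963f66afa6",  # hypothetical tenant UUID
    scan_id="7c9e6679-7425-40de-944b-e07fc1f90ae7",  # hypothetical scan UUID
)
print(summary["requirements_created"], summary["regions_processed"])
```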
@@ -8,7 +8,7 @@ from config.celery import RLSTask
from config.django.base import DJANGO_FINDINGS_BATCH_SIZE, DJANGO_TMP_OUTPUT_DIRECTORY
from django_celery_beat.models import PeriodicTask
from tasks.jobs.backfill import backfill_resource_scan_summaries
from tasks.jobs.connection import check_provider_connection
from tasks.jobs.connection import check_lighthouse_connection, check_provider_connection
from tasks.jobs.deletion import delete_provider, delete_tenant
from tasks.jobs.export import (
    COMPLIANCE_CLASS_MAP,
@@ -17,7 +17,11 @@ from tasks.jobs.export import (
    _generate_output_directory,
    _upload_to_s3,
)
from tasks.jobs.scan import aggregate_findings, perform_prowler_scan
from tasks.jobs.scan import (
    aggregate_findings,
    create_compliance_requirements,
    perform_prowler_scan,
)
from tasks.utils import batched, get_next_execution_datetime

from api.compliance import get_compliance_frameworks
@@ -101,6 +105,7 @@ def perform_scan_task(

    chain(
        perform_scan_summary_task.si(tenant_id, scan_id),
        create_compliance_requirements_task.si(tenant_id=tenant_id, scan_id=scan_id),
        generate_outputs.si(
            scan_id=scan_id, provider_id=provider_id, tenant_id=tenant_id
        ),
@@ -211,6 +216,9 @@ def perform_scheduled_scan_task(self, tenant_id: str, provider_id: str):

    chain(
        perform_scan_summary_task.si(tenant_id, scan_instance.id),
        create_compliance_requirements_task.si(
            tenant_id=tenant_id, scan_id=str(scan_instance.id)
        ),
        generate_outputs.si(
            scan_id=str(scan_instance.id), provider_id=provider_id, tenant_id=tenant_id
        ),
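Both call sites use Celery immutable signatures (`.si`), so each step in the chain receives only its explicit arguments and ignores the previous task's return value. A minimal sketch of the pattern, with placeholder variables:

```python
# Sketch of the chaining pattern above; tenant_id and scan_id are placeholders.
from celery import chain

chain(
    perform_scan_summary_task.si(tenant_id, scan_id),  # .si() = immutable signature
    create_compliance_requirements_task.si(tenant_id=tenant_id, scan_id=scan_id),
).apply_async()
```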
@@ -371,3 +379,38 @@ def backfill_scan_resource_summaries_task(tenant_id: str, scan_id: str):
        scan_id (str): The scan identifier.
    """
    return backfill_resource_scan_summaries(tenant_id=tenant_id, scan_id=scan_id)


@shared_task(base=RLSTask, name="scan-compliance-overviews")
def create_compliance_requirements_task(tenant_id: str, scan_id: str):
    """
    Creates detailed compliance requirement records for a scan.

    This task processes the compliance data collected during a scan and creates
    individual records for each compliance requirement in each region. These detailed
    records provide a granular view of compliance status.

    Args:
        tenant_id (str): The tenant ID for which to create records.
        scan_id (str): The ID of the scan for which to create records.
    """
    return create_compliance_requirements(tenant_id=tenant_id, scan_id=scan_id)


@shared_task(base=RLSTask, name="lighthouse-connection-check")
@set_tenant
def check_lighthouse_connection_task(lighthouse_config_id: str, tenant_id: str = None):
    """
    Task to check the connection status of a Lighthouse configuration.

    Args:
        lighthouse_config_id (str): The primary key of the LighthouseConfiguration instance to check.
        tenant_id (str): The tenant ID for the task.

    Returns:
        dict: A dictionary containing:
            - 'connected' (bool): Indicates whether the connection is successful.
            - 'error' (str or None): The error message if the connection failed, otherwise `None`.
            - 'available_models' (list): List of available models if connection is successful.
    """
    return check_lighthouse_connection(lighthouse_config_id=lighthouse_config_id)
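A hedged usage sketch for the task above (the config and tenant IDs are placeholders); the result carries the three documented keys:

```python
# Sketch: dispatch the connection check and read the documented result keys.
async_result = check_lighthouse_connection_task.apply_async(
    kwargs={
        "lighthouse_config_id": "a-lighthouse-config-uuid",  # placeholder
        "tenant_id": "a-tenant-uuid",  # placeholder
    }
)
result = async_result.get()
print(result["connected"], result["error"], result["available_models"])
```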
@@ -55,3 +55,22 @@ class TestScheduleProviderScan:
        assert "There is already a scheduled scan for this provider." in str(
            exc_info.value
        )

    def test_remove_periodic_task(self, providers_fixture):
        provider_instance = providers_fixture[0]

        assert Scan.objects.count() == 0
        with patch("tasks.tasks.perform_scheduled_scan_task.apply_async"):
            schedule_provider_scan(provider_instance)

        assert Scan.objects.count() == 1
        scan = Scan.objects.first()
        periodic_task = scan.scheduler_task
        assert periodic_task is not None

        periodic_task.delete()

        scan.refresh_from_db()
        # Assert the scan still exists but its scheduler_task is set to None
        # Otherwise, Scan.DoesNotExist would be raised
        assert Scan.objects.get(id=scan.id).scheduler_task is None

@@ -1,10 +1,10 @@
from datetime import datetime, timezone
from unittest.mock import patch, MagicMock
from unittest.mock import MagicMock, patch

import pytest
from tasks.jobs.connection import check_lighthouse_connection, check_provider_connection

from api.models import Provider
from tasks.jobs.connection import check_provider_connection
from api.models import LighthouseConfiguration, Provider

@pytest.mark.parametrize(
|
||||
@@ -70,3 +70,60 @@ def test_check_provider_connection_exception(
|
||||
|
||||
mock_provider_instance.save.assert_called_once()
|
||||
assert mock_provider_instance.connected is False
|
||||
|
||||
|
||||
@pytest.mark.parametrize(
|
||||
"lighthouse_data",
|
||||
[
|
||||
{
|
||||
"name": "OpenAI",
|
||||
"api_key_decoded": "sk-test1234567890T3BlbkFJtest1234567890",
|
||||
"model": "gpt-4o",
|
||||
"temperature": 0,
|
||||
"max_tokens": 4000,
|
||||
"business_context": "Test business context",
|
||||
"is_active": True,
|
||||
},
|
||||
],
|
||||
)
|
||||
@patch("tasks.jobs.connection.openai.OpenAI")
|
||||
@pytest.mark.django_db
|
||||
def test_check_lighthouse_connection(
|
||||
mock_openai_client, tenants_fixture, lighthouse_data
|
||||
):
|
||||
lighthouse_config = LighthouseConfiguration.objects.create(
|
||||
**lighthouse_data, tenant_id=tenants_fixture[0].id
|
||||
)
|
||||
|
||||
mock_models = MagicMock()
|
||||
mock_models.data = [MagicMock(id="gpt-4o"), MagicMock(id="gpt-4o-mini")]
|
||||
mock_openai_client.return_value.models.list.return_value = mock_models
|
||||
|
||||
result = check_lighthouse_connection(
|
||||
lighthouse_config_id=str(lighthouse_config.id),
|
||||
)
|
||||
lighthouse_config.refresh_from_db()
|
||||
|
||||
mock_openai_client.assert_called_once_with(
|
||||
api_key=lighthouse_data["api_key_decoded"]
|
||||
)
|
||||
assert lighthouse_config.is_active is True
|
||||
assert result["connected"] is True
|
||||
assert result["error"] is None
|
||||
assert result["available_models"] == ["gpt-4o", "gpt-4o-mini"]
|
||||
|
||||
|
||||
@patch("tasks.jobs.connection.LighthouseConfiguration.objects.get")
|
||||
@pytest.mark.django_db
|
||||
def test_check_lighthouse_connection_missing_api_key(mock_lighthouse_get):
|
||||
mock_lighthouse_instance = MagicMock()
|
||||
mock_lighthouse_instance.api_key_decoded = None
|
||||
mock_lighthouse_get.return_value = mock_lighthouse_instance
|
||||
|
||||
result = check_lighthouse_connection("lighthouse_config_id")
|
||||
|
||||
assert result["connected"] is False
|
||||
assert result["error"] == "API key is invalid or missing."
|
||||
assert result["available_models"] == []
|
||||
assert mock_lighthouse_instance.is_active is False
|
||||
mock_lighthouse_instance.save.assert_called_once()
|
||||
|
||||
@@ -145,3 +145,22 @@ class TestOutputs:
|
||||
|
||||
assert path.endswith(f"{provider}-{output_file_timestamp}")
|
||||
assert compliance.endswith(f"{provider}-{output_file_timestamp}")
|
||||
|
||||
def test_generate_output_directory_invalid_character(self, tmpdir):
|
||||
from prowler.config.config import output_file_timestamp
|
||||
|
||||
base_tmp = Path(str(tmpdir.mkdir("generate_output")))
|
||||
base_dir = str(base_tmp)
|
||||
tenant_id = "t1"
|
||||
scan_id = "s1"
|
||||
provider = "aws/test@check"
|
||||
|
||||
path, compliance = _generate_output_directory(
|
||||
base_dir, provider, tenant_id, scan_id
|
||||
)
|
||||
|
||||
assert os.path.isdir(os.path.dirname(path))
|
||||
assert os.path.isdir(os.path.dirname(compliance))
|
||||
|
||||
assert path.endswith(f"aws-test-check-{output_file_timestamp}")
|
||||
assert compliance.endswith(f"aws-test-check-{output_file_timestamp}")
|
||||
|
||||
@@ -7,11 +7,13 @@ import pytest
|
||||
from tasks.jobs.scan import (
|
||||
_create_finding_delta,
|
||||
_store_resources,
|
||||
create_compliance_requirements,
|
||||
perform_prowler_scan,
|
||||
)
|
||||
from tasks.utils import CustomEncoder
|
||||
|
||||
from api.models import (
|
||||
ComplianceRequirementOverview,
|
||||
Finding,
|
||||
Provider,
|
||||
Resource,
|
||||
@@ -235,7 +237,7 @@ class TestPerformScan:
|
||||
):
|
||||
tenant_id = uuid.uuid4()
|
||||
provider_instance = MagicMock()
|
||||
provider_instance.id = "provider456"
|
||||
provider_instance.id = "provider123"
|
||||
|
||||
finding = MagicMock()
|
||||
finding.resource_uid = "resource_uid_123"
|
||||
@@ -250,15 +252,16 @@ class TestPerformScan:
|
||||
resource_instance.region = finding.region
|
||||
|
||||
mock_get_or_create_resource.return_value = (resource_instance, True)
|
||||
|
||||
tag_instance = MagicMock()
|
||||
mock_get_or_create_tag.return_value = (tag_instance, True)
|
||||
|
||||
resource, resource_uid_tuple = _store_resources(
|
||||
finding, tenant_id, provider_instance
|
||||
finding, str(tenant_id), provider_instance
|
||||
)
|
||||
|
||||
mock_get_or_create_resource.assert_called_once_with(
|
||||
tenant_id=tenant_id,
|
||||
tenant_id=str(tenant_id),
|
||||
provider=provider_instance,
|
||||
uid=finding.resource_uid,
|
||||
defaults={
|
||||
@@ -305,11 +308,11 @@ class TestPerformScan:
|
||||
mock_get_or_create_tag.return_value = (tag_instance, True)
|
||||
|
||||
resource, resource_uid_tuple = _store_resources(
|
||||
finding, tenant_id, provider_instance
|
||||
finding, str(tenant_id), provider_instance
|
||||
)
|
||||
|
||||
mock_get_or_create_resource.assert_called_once_with(
|
||||
tenant_id=tenant_id,
|
||||
tenant_id=str(tenant_id),
|
||||
provider=provider_instance,
|
||||
uid=finding.resource_uid,
|
||||
defaults={
|
||||
@@ -363,14 +366,14 @@ class TestPerformScan:
|
||||
]
|
||||
|
||||
resource, resource_uid_tuple = _store_resources(
|
||||
finding, tenant_id, provider_instance
|
||||
finding, str(tenant_id), provider_instance
|
||||
)
|
||||
|
||||
mock_get_or_create_tag.assert_any_call(
|
||||
tenant_id=tenant_id, key="tag1", value="value1"
|
||||
tenant_id=str(tenant_id), key="tag1", value="value1"
|
||||
)
|
||||
mock_get_or_create_tag.assert_any_call(
|
||||
tenant_id=tenant_id, key="tag2", value="value2"
|
||||
tenant_id=str(tenant_id), key="tag2", value="value2"
|
||||
)
|
||||
resource_instance.upsert_or_delete_tags.assert_called_once()
|
||||
tags_passed = resource_instance.upsert_or_delete_tags.call_args[1]["tags"]
|
||||
@@ -382,3 +385,808 @@ class TestPerformScan:
|
||||
|
||||
|
||||
# TODO Add tests for aggregations
|
||||
|
||||
|
||||
@pytest.mark.django_db
|
||||
class TestCreateComplianceRequirements:
|
||||
def test_create_compliance_requirements_success(
|
||||
self,
|
||||
tenants_fixture,
|
||||
scans_fixture,
|
||||
providers_fixture,
|
||||
findings_fixture,
|
||||
resources_fixture,
|
||||
):
|
||||
with (
|
||||
patch("api.db_utils.rls_transaction"),
|
||||
patch(
|
||||
"tasks.jobs.scan.initialize_prowler_provider"
|
||||
) as mock_initialize_prowler_provider,
|
||||
patch(
|
||||
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
|
||||
) as mock_compliance_template,
|
||||
patch("tasks.jobs.scan.generate_scan_compliance"),
|
||||
patch("tasks.jobs.scan.create_objects_in_batches") as mock_create_objects,
|
||||
patch("api.models.Finding.objects.filter") as mock_findings_filter,
|
||||
):
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
provider = providers_fixture[0]
|
||||
|
||||
provider.provider = Provider.ProviderChoices.AWS
|
||||
provider.save()
|
||||
|
||||
scan.provider = provider
|
||||
scan.save()
|
||||
|
||||
tenant_id = str(tenant.id)
|
||||
scan_id = str(scan.id)
|
||||
|
||||
mock_prowler_provider_instance = MagicMock()
|
||||
mock_prowler_provider_instance.get_regions.return_value = [
|
||||
"us-east-1",
|
||||
"us-west-2",
|
||||
]
|
||||
mock_initialize_prowler_provider.return_value = (
|
||||
mock_prowler_provider_instance
|
||||
)
|
||||
|
||||
mock_compliance_template.__getitem__.return_value = {
|
||||
"cis_1.4_aws": {
|
||||
"framework": "CIS AWS Foundations Benchmark",
|
||||
"version": "1.4.0",
|
||||
"requirements": {
|
||||
"1.1": {
|
||||
"description": "Ensure root access key does not exist",
|
||||
"checks_status": {
|
||||
"pass": 0,
|
||||
"fail": 0,
|
||||
"manual": 0,
|
||||
"total": 1,
|
||||
},
|
||||
"status": "PASS",
|
||||
},
|
||||
"1.2": {
|
||||
"description": "Ensure MFA is enabled for root account",
|
||||
"checks_status": {
|
||||
"pass": 0,
|
||||
"fail": 1,
|
||||
"manual": 0,
|
||||
"total": 1,
|
||||
},
|
||||
"status": "FAIL",
|
||||
},
|
||||
},
|
||||
},
|
||||
"aws_account_security_onboarding_aws": {
|
||||
"framework": "AWS Account Security Onboarding",
|
||||
"version": "1.0",
|
||||
"requirements": {
|
||||
"requirement1": {
|
||||
"description": "Basic security requirement",
|
||||
"checks_status": {
|
||||
"pass": 1,
|
||||
"fail": 0,
|
||||
"manual": 0,
|
||||
"total": 1,
|
||||
},
|
||||
"status": "PASS",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
mock_findings_filter.return_value = []
|
||||
|
||||
result = create_compliance_requirements(tenant_id, scan_id)
|
||||
|
||||
assert "requirements_created" in result
|
||||
assert "regions_processed" in result
|
||||
assert "compliance_frameworks" in result
|
||||
assert result["regions_processed"] == ["us-east-1", "us-west-2"]
|
||||
assert result["requirements_created"] == 6
|
||||
assert len(result["compliance_frameworks"]) == 2
|
||||
|
||||
mock_create_objects.assert_called_once()
|
||||
call_args = mock_create_objects.call_args[0]
|
||||
assert call_args[0] == tenant_id
|
||||
assert call_args[1] == ComplianceRequirementOverview
|
||||
assert len(call_args[2]) == 6
|
||||
|
||||
compliance_objects = call_args[2]
|
||||
for obj in compliance_objects:
|
||||
assert isinstance(obj, ComplianceRequirementOverview)
|
||||
assert obj.tenant.id == tenant.id
|
||||
assert obj.scan == scan
|
||||
assert obj.region in ["us-east-1", "us-west-2"]
|
||||
assert obj.compliance_id in [
|
||||
"cis_1.4_aws",
|
||||
"aws_account_security_onboarding_aws",
|
||||
]
|
||||
|
||||
def test_create_compliance_requirements_with_findings(
|
||||
self,
|
||||
tenants_fixture,
|
||||
scans_fixture,
|
||||
providers_fixture,
|
||||
):
|
||||
with (
|
||||
patch("api.db_utils.rls_transaction"),
|
||||
patch(
|
||||
"tasks.jobs.scan.initialize_prowler_provider"
|
||||
) as mock_initialize_prowler_provider,
|
||||
patch(
|
||||
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
|
||||
) as mock_compliance_template,
|
||||
patch(
|
||||
"tasks.jobs.scan.generate_scan_compliance"
|
||||
) as mock_generate_compliance,
|
||||
patch("tasks.jobs.scan.create_objects_in_batches"),
|
||||
patch("api.models.Finding.objects.filter") as mock_findings_filter,
|
||||
):
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
provider = providers_fixture[0]
|
||||
|
||||
provider.provider = Provider.ProviderChoices.AWS
|
||||
provider.save()
|
||||
scan.provider = provider
|
||||
scan.save()
|
||||
|
||||
tenant_id = str(tenant.id)
|
||||
scan_id = str(scan.id)
|
||||
|
||||
mock_finding1 = MagicMock()
|
||||
mock_finding1.check_id = "check1"
|
||||
mock_finding1.status = "PASS"
|
||||
mock_resource1 = MagicMock()
|
||||
mock_resource1.region = "us-east-1"
|
||||
mock_finding1.resources.all.return_value = [mock_resource1]
|
||||
|
||||
mock_finding2 = MagicMock()
|
||||
mock_finding2.check_id = "check2"
|
||||
mock_finding2.status = "FAIL"
|
||||
mock_resource2 = MagicMock()
|
||||
mock_resource2.region = "us-west-2"
|
||||
mock_finding2.resources.all.return_value = [mock_resource2]
|
||||
|
||||
mock_findings_filter.return_value = [mock_finding1, mock_finding2]
|
||||
|
||||
mock_prowler_provider_instance = MagicMock()
|
||||
mock_prowler_provider_instance.get_regions.return_value = [
|
||||
"us-east-1",
|
||||
"us-west-2",
|
||||
]
|
||||
mock_initialize_prowler_provider.return_value = (
|
||||
mock_prowler_provider_instance
|
||||
)
|
||||
|
||||
mock_compliance_template.__getitem__.return_value = {
|
||||
"test_compliance": {
|
||||
"framework": "Test Framework",
|
||||
"version": "1.0",
|
||||
"requirements": {
|
||||
"req_1": {
|
||||
"description": "Test Requirement 1",
|
||||
"checks": {"check_1": None},
|
||||
"checks_status": {
|
||||
"pass": 2,
|
||||
"fail": 1,
|
||||
"manual": 0,
|
||||
"total": 3,
|
||||
},
|
||||
"status": "FAIL",
|
||||
},
|
||||
"req_2": {
|
||||
"description": "Test Requirement 2",
|
||||
"checks": {"check_2": None},
|
||||
"checks_status": {
|
||||
"pass": 2,
|
||||
"fail": 0,
|
||||
"manual": 0,
|
||||
"total": 2,
|
||||
},
|
||||
"status": "PASS",
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
result = create_compliance_requirements(tenant_id, scan_id)
|
||||
|
||||
mock_findings_filter.assert_called_once_with(scan_id=scan_id, muted=False)
|
||||
assert mock_generate_compliance.call_count == 2
|
||||
assert result["requirements_created"] == 4
|
||||
assert set(result["regions_processed"]) == {"us-east-1", "us-west-2"}
|
||||
|
||||
def test_create_compliance_requirements_no_provider_regions(
|
||||
self,
|
||||
tenants_fixture,
|
||||
scans_fixture,
|
||||
providers_fixture,
|
||||
):
|
||||
with (
|
||||
patch("api.db_utils.rls_transaction"),
|
||||
patch(
|
||||
"tasks.jobs.scan.initialize_prowler_provider"
|
||||
) as mock_initialize_prowler_provider,
|
||||
patch(
|
||||
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
|
||||
) as mock_compliance_template,
|
||||
patch("tasks.jobs.scan.generate_scan_compliance"),
|
||||
patch("tasks.jobs.scan.create_objects_in_batches"),
|
||||
patch("api.models.Finding.objects.filter") as mock_findings_filter,
|
||||
):
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
provider = providers_fixture[0]
|
||||
|
||||
provider.provider = Provider.ProviderChoices.KUBERNETES
|
||||
provider.save()
|
||||
scan.provider = provider
|
||||
scan.save()
|
||||
|
||||
tenant_id = str(tenant.id)
|
||||
scan_id = str(scan.id)
|
||||
|
||||
mock_finding = MagicMock()
|
||||
mock_finding.check_id = "check1"
|
||||
mock_finding.status = "PASS"
|
||||
mock_resource = MagicMock()
|
||||
mock_resource.region = "default"
|
||||
mock_finding.resources.all.return_value = [mock_resource]
|
||||
mock_findings_filter.return_value = [mock_finding]
|
||||
|
||||
mock_prowler_provider_instance = MagicMock()
|
||||
mock_prowler_provider_instance.get_regions.side_effect = AttributeError(
|
||||
"No get_regions method"
|
||||
)
|
||||
mock_initialize_prowler_provider.return_value = (
|
||||
mock_prowler_provider_instance
|
||||
)
|
||||
|
||||
mock_compliance_template.__getitem__.return_value = {
|
||||
"kubernetes_cis": {
|
||||
"framework": "CIS Kubernetes Benchmark",
|
||||
"version": "1.6.0",
|
||||
"requirements": {
|
||||
"1.1": {
|
||||
"description": "Test requirement",
|
||||
"checks_status": {
|
||||
"pass": 0,
|
||||
"fail": 0,
|
||||
"manual": 0,
|
||||
"total": 1,
|
||||
},
|
||||
"status": "PASS",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
result = create_compliance_requirements(tenant_id, scan_id)
|
||||
|
||||
assert result["regions_processed"] == ["default"]
|
||||
|
||||
def test_create_compliance_requirements_empty_findings(
|
||||
self,
|
||||
tenants_fixture,
|
||||
scans_fixture,
|
||||
providers_fixture,
|
||||
):
|
||||
with (
|
||||
patch("api.db_utils.rls_transaction"),
|
||||
patch(
|
||||
"tasks.jobs.scan.initialize_prowler_provider"
|
||||
) as mock_initialize_prowler_provider,
|
||||
patch(
|
||||
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
|
||||
) as mock_compliance_template,
|
||||
patch(
|
||||
"tasks.jobs.scan.generate_scan_compliance"
|
||||
) as mock_generate_compliance,
|
||||
patch("tasks.jobs.scan.create_objects_in_batches"),
|
||||
patch("api.models.Finding.objects.filter") as mock_findings_filter,
|
||||
):
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
provider = providers_fixture[0]
|
||||
|
||||
provider.provider = Provider.ProviderChoices.AWS
|
||||
provider.save()
|
||||
scan.provider = provider
|
||||
scan.save()
|
||||
|
||||
tenant_id = str(tenant.id)
|
||||
scan_id = str(scan.id)
|
||||
|
||||
mock_findings_filter.return_value = []
|
||||
|
||||
mock_prowler_provider_instance = MagicMock()
|
||||
mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
|
||||
mock_initialize_prowler_provider.return_value = (
|
||||
mock_prowler_provider_instance
|
||||
)
|
||||
|
||||
mock_compliance_template.__getitem__.return_value = {
|
||||
"cis_1.4_aws": {
|
||||
"framework": "CIS AWS Foundations Benchmark",
|
||||
"version": "1.4.0",
|
||||
"requirements": {
|
||||
"1.1": {
|
||||
"description": "Test requirement",
|
||||
"checks_status": {
|
||||
"pass": 0,
|
||||
"fail": 0,
|
||||
"manual": 0,
|
||||
"total": 1,
|
||||
},
|
||||
"status": "PASS",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
mock_findings_filter.return_value = []
|
||||
|
||||
result = create_compliance_requirements(tenant_id, scan_id)
|
||||
|
||||
assert result["regions_processed"] == ["us-east-1"]
|
||||
assert result["requirements_created"] == 1
|
||||
mock_generate_compliance.assert_not_called()
|
||||
|
||||
def test_create_compliance_requirements_error_handling(
|
||||
self,
|
||||
tenants_fixture,
|
||||
scans_fixture,
|
||||
providers_fixture,
|
||||
):
|
||||
with (
|
||||
patch("api.db_utils.rls_transaction"),
|
||||
patch(
|
||||
"tasks.jobs.scan.initialize_prowler_provider"
|
||||
) as mock_initialize_prowler_provider,
|
||||
):
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
provider = providers_fixture[0]
|
||||
|
||||
provider.provider = Provider.ProviderChoices.AWS
|
||||
provider.save()
|
||||
scan.provider = provider
|
||||
scan.save()
|
||||
|
||||
tenant_id = str(tenant.id)
|
||||
scan_id = str(scan.id)
|
||||
|
||||
mock_initialize_prowler_provider.side_effect = Exception(
|
||||
"Provider initialization failed"
|
||||
)
|
||||
|
||||
with pytest.raises(Exception, match="Provider initialization failed"):
|
||||
create_compliance_requirements(tenant_id, scan_id)
|
||||
|
||||
def test_create_compliance_requirements_muted_findings_excluded(
|
||||
self,
|
||||
tenants_fixture,
|
||||
scans_fixture,
|
||||
providers_fixture,
|
||||
):
|
||||
with (
|
||||
patch("api.db_utils.rls_transaction"),
|
||||
patch(
|
||||
"tasks.jobs.scan.initialize_prowler_provider"
|
||||
) as mock_initialize_prowler_provider,
|
||||
patch(
|
||||
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
|
||||
) as mock_compliance_template,
|
||||
patch("tasks.jobs.scan.generate_scan_compliance"),
|
||||
patch("tasks.jobs.scan.create_objects_in_batches"),
|
||||
patch("api.models.Finding.objects.filter") as mock_findings_filter,
|
||||
):
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
provider = providers_fixture[0]
|
||||
|
||||
provider.provider = Provider.ProviderChoices.AWS
|
||||
provider.save()
|
||||
scan.provider = provider
|
||||
scan.save()
|
||||
|
||||
tenant_id = str(tenant.id)
|
||||
scan_id = str(scan.id)
|
||||
|
||||
mock_findings_filter.return_value = []
|
||||
|
||||
mock_prowler_provider_instance = MagicMock()
|
||||
mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
|
||||
mock_initialize_prowler_provider.return_value = (
|
||||
mock_prowler_provider_instance
|
||||
)
|
||||
|
||||
mock_compliance_template.__getitem__.return_value = {}
|
||||
|
||||
mock_findings_filter.return_value = []
|
||||
|
||||
create_compliance_requirements(tenant_id, scan_id)
|
||||
|
||||
mock_findings_filter.assert_called_once_with(scan_id=scan_id, muted=False)
|
||||
|
||||
def test_create_compliance_requirements_check_status_priority(
|
||||
self,
|
||||
tenants_fixture,
|
||||
scans_fixture,
|
||||
providers_fixture,
|
||||
):
|
||||
with (
|
||||
patch("api.db_utils.rls_transaction"),
|
||||
patch(
|
||||
"tasks.jobs.scan.initialize_prowler_provider"
|
||||
) as mock_initialize_prowler_provider,
|
||||
patch(
|
||||
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
|
||||
) as mock_compliance_template,
|
||||
patch(
|
||||
"tasks.jobs.scan.generate_scan_compliance"
|
||||
) as mock_generate_compliance,
|
||||
patch("tasks.jobs.scan.create_objects_in_batches"),
|
||||
patch("api.models.Finding.objects.filter") as mock_findings_filter,
|
||||
):
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
provider = providers_fixture[0]
|
||||
|
||||
provider.provider = Provider.ProviderChoices.AWS
|
||||
provider.save()
|
||||
scan.provider = provider
|
||||
scan.save()
|
||||
|
||||
tenant_id = str(tenant.id)
|
||||
scan_id = str(scan.id)
|
||||
|
||||
mock_finding1 = MagicMock()
|
||||
mock_finding1.check_id = "check1"
|
||||
mock_finding1.status = "PASS"
|
||||
mock_resource1 = MagicMock()
|
||||
mock_resource1.region = "us-east-1"
|
||||
mock_finding1.resources.all.return_value = [mock_resource1]
|
||||
|
||||
mock_finding2 = MagicMock()
|
||||
mock_finding2.check_id = "check1"
|
||||
mock_finding2.status = "FAIL"
|
||||
mock_resource2 = MagicMock()
|
||||
mock_resource2.region = "us-east-1"
|
||||
mock_finding2.resources.all.return_value = [mock_resource2]
|
||||
|
||||
mock_findings_filter.return_value = [mock_finding1, mock_finding2]
|
||||
|
||||
mock_prowler_provider_instance = MagicMock()
|
||||
mock_prowler_provider_instance.get_regions.return_value = ["us-east-1"]
|
||||
mock_initialize_prowler_provider.return_value = (
|
||||
mock_prowler_provider_instance
|
||||
)
|
||||
|
||||
mock_compliance_template.__getitem__.return_value = {
|
||||
"cis_1.4_aws": {
|
||||
"framework": "CIS AWS Foundations Benchmark",
|
||||
"version": "1.4.0",
|
||||
"requirements": {
|
||||
"1.1": {
|
||||
"description": "Test requirement",
|
||||
"checks_status": {
|
||||
"pass": 0,
|
||||
"fail": 0,
|
||||
"manual": 0,
|
||||
"total": 1,
|
||||
},
|
||||
"status": "PASS",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
create_compliance_requirements(tenant_id, scan_id)
|
||||
|
||||
assert mock_generate_compliance.call_count == 1
|
||||
|
||||
def test_compliance_overview_aggregation_requirement_fail_priority(
|
||||
self,
|
||||
tenants_fixture,
|
||||
scans_fixture,
|
||||
providers_fixture,
|
||||
):
|
||||
with (
|
||||
patch("api.db_utils.rls_transaction"),
|
||||
patch(
|
||||
"tasks.jobs.scan.initialize_prowler_provider"
|
||||
) as mock_initialize_prowler_provider,
|
||||
patch(
|
||||
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
|
||||
) as mock_compliance_template,
|
||||
patch(
|
||||
"tasks.jobs.scan.generate_scan_compliance"
|
||||
) as mock_generate_compliance,
|
||||
patch("tasks.jobs.scan.create_objects_in_batches") as mock_create_objects,
|
||||
patch("api.models.Finding.objects.filter") as mock_findings_filter,
|
||||
):
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
providers_fixture[0]
|
||||
|
||||
mock_findings_filter.return_value = []
|
||||
|
||||
mock_prowler_provider = MagicMock()
|
||||
mock_prowler_provider.get_regions.return_value = [
|
||||
"us-east-1",
|
||||
"us-west-2",
|
||||
"eu-west-1",
|
||||
]
|
||||
mock_initialize_prowler_provider.return_value = mock_prowler_provider
|
||||
|
||||
mock_compliance_template.__getitem__.return_value = {
|
||||
"test_compliance": {
|
||||
"framework": "Test Framework",
|
||||
"version": "1.0",
|
||||
"requirements": {
|
||||
"req_1": {
|
||||
"description": "Test Requirement 1",
|
||||
"checks": {"check_1": None},
|
||||
"checks_status": {
|
||||
"pass": 2,
|
||||
"fail": 1,
|
||||
"manual": 0,
|
||||
"total": 3,
|
||||
},
|
||||
"status": "FAIL",
|
||||
}
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
mock_generate_compliance.return_value = {
|
||||
"test_compliance": {
|
||||
"framework": "Test Framework",
|
||||
"version": "1.0",
|
||||
"requirements": {
|
||||
"req_1": {
|
||||
"description": "Test Requirement 1",
|
||||
"checks": {
|
||||
"check_1": {
|
||||
"us-east-1": {"status": "PASS"},
|
||||
"us-west-2": {"status": "FAIL"},
|
||||
"eu-west-1": {"status": "PASS"},
|
||||
}
|
||||
},
|
||||
"checks_status": {
|
||||
"pass": 2,
|
||||
"fail": 1,
|
||||
"manual": 0,
|
||||
"total": 3,
|
||||
},
|
||||
"status": "FAIL",
|
||||
}
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
created_objects = []
|
||||
mock_create_objects.side_effect = (
|
||||
lambda tenant_id, model, objs, batch_size=500: created_objects.extend(
|
||||
objs
|
||||
)
|
||||
)
|
||||
|
||||
create_compliance_requirements(str(tenant.id), str(scan.id))
|
||||
|
||||
assert len(created_objects) == 3
|
||||
assert all(obj.requirement_status == "FAIL" for obj in created_objects)
|
||||
|
||||
def test_compliance_overview_aggregation_requirement_pass_all_regions(
|
||||
self,
|
||||
tenants_fixture,
|
||||
scans_fixture,
|
||||
providers_fixture,
|
||||
):
|
||||
with (
|
||||
patch("api.db_utils.rls_transaction"),
|
||||
patch(
|
||||
"tasks.jobs.scan.initialize_prowler_provider"
|
||||
) as mock_initialize_prowler_provider,
|
||||
patch(
|
||||
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
|
||||
) as mock_compliance_template,
|
||||
patch(
|
||||
"tasks.jobs.scan.generate_scan_compliance"
|
||||
) as mock_generate_compliance,
|
||||
patch("tasks.jobs.scan.create_objects_in_batches") as mock_create_objects,
|
||||
patch("api.models.Finding.objects.filter") as mock_findings_filter,
|
||||
):
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
providers_fixture[0]
|
||||
|
||||
mock_findings_filter.return_value = []
|
||||
|
||||
mock_prowler_provider = MagicMock()
|
||||
mock_prowler_provider.get_regions.return_value = ["us-east-1", "us-west-2"]
|
||||
mock_initialize_prowler_provider.return_value = mock_prowler_provider
|
||||
|
||||
mock_compliance_template.__getitem__.return_value = {
|
||||
"test_compliance": {
|
||||
"framework": "Test Framework",
|
||||
"version": "1.0",
|
||||
"requirements": {
|
||||
"req_1": {
|
||||
"description": "Test Requirement 1",
|
||||
"checks": {"check_1": None},
|
||||
"checks_status": {
|
||||
"pass": 2,
|
||||
"fail": 0,
|
||||
"manual": 0,
|
||||
"total": 2,
|
||||
},
|
||||
"status": "PASS",
|
||||
}
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
mock_generate_compliance.return_value = {
|
||||
"test_compliance": {
|
||||
"framework": "Test Framework",
|
||||
"version": "1.0",
|
||||
"requirements": {
|
||||
"req_1": {
|
||||
"description": "Test Requirement 1",
|
||||
"checks": {
|
||||
"check_1": {
|
||||
"us-east-1": {"status": "PASS"},
|
||||
"us-west-2": {"status": "PASS"},
|
||||
}
|
||||
},
|
||||
"checks_status": {
|
||||
"pass": 2,
|
||||
"fail": 0,
|
||||
"manual": 0,
|
||||
"total": 2,
|
||||
},
|
||||
"status": "PASS",
|
||||
}
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
created_objects = []
|
||||
mock_create_objects.side_effect = (
|
||||
lambda tenant_id, model, objs, batch_size=500: created_objects.extend(
|
||||
objs
|
||||
)
|
||||
)
|
||||
|
||||
create_compliance_requirements(str(tenant.id), str(scan.id))
|
||||
|
||||
assert len(created_objects) == 2
|
||||
assert all(obj.requirement_status == "PASS" for obj in created_objects)
|
||||
|
||||
def test_compliance_overview_aggregation_multiple_requirements_mixed_status(
|
||||
self,
|
||||
tenants_fixture,
|
||||
scans_fixture,
|
||||
providers_fixture,
|
||||
):
|
||||
with (
|
||||
patch("api.db_utils.rls_transaction"),
|
||||
patch(
|
||||
"tasks.jobs.scan.initialize_prowler_provider"
|
||||
) as mock_initialize_prowler_provider,
|
||||
patch(
|
||||
"tasks.jobs.scan.PROWLER_COMPLIANCE_OVERVIEW_TEMPLATE"
|
||||
) as mock_compliance_template,
|
||||
patch(
|
||||
"tasks.jobs.scan.generate_scan_compliance"
|
||||
) as mock_generate_compliance,
|
||||
patch("tasks.jobs.scan.create_objects_in_batches") as mock_create_objects,
|
||||
patch("api.models.Finding.objects.filter") as mock_findings_filter,
|
||||
):
|
||||
tenant = tenants_fixture[0]
|
||||
scan = scans_fixture[0]
|
||||
providers_fixture[0]
|
||||
|
||||
mock_findings_filter.return_value = []
|
||||
|
||||
mock_prowler_provider = MagicMock()
|
||||
mock_prowler_provider.get_regions.return_value = ["us-east-1", "us-west-2"]
|
||||
mock_initialize_prowler_provider.return_value = mock_prowler_provider
|
||||
|
||||
mock_compliance_template.__getitem__.return_value = {
|
||||
"test_compliance": {
|
||||
"framework": "Test Framework",
|
||||
"version": "1.0",
|
||||
"requirements": {
|
||||
"req_1": {
|
||||
"description": "Test Requirement 1",
|
||||
"checks": {"check_1": None},
|
||||
"checks_status": {
|
||||
"pass": 2,
|
||||
"fail": 0,
|
||||
"manual": 0,
|
||||
"total": 2,
|
||||
},
|
||||
"status": "PASS",
|
||||
},
|
||||
"req_2": {
|
||||
"description": "Test Requirement 2",
|
||||
"checks": {"check_2": None},
|
||||
"checks_status": {
|
||||
"pass": 1,
|
||||
"fail": 1,
|
||||
"manual": 0,
|
||||
"total": 2,
|
||||
},
|
||||
"status": "FAIL",
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
mock_generate_compliance.return_value = {
|
||||
"test_compliance": {
|
||||
"framework": "Test Framework",
|
||||
"version": "1.0",
|
||||
"requirements": {
|
||||
"req_1": {
|
||||
"description": "Test Requirement 1",
|
||||
"checks": {
|
||||
"check_1": {
|
||||
"us-east-1": {"status": "PASS"},
|
||||
"us-west-2": {"status": "PASS"},
|
||||
}
|
||||
},
|
||||
"checks_status": {
|
||||
"pass": 2,
|
||||
"fail": 0,
|
||||
"manual": 0,
|
||||
"total": 2,
|
||||
},
|
||||
"status": "PASS",
|
||||
},
|
||||
"req_2": {
|
||||
"description": "Test Requirement 2",
|
||||
"checks": {
|
||||
"check_2": {
|
||||
"us-east-1": {"status": "PASS"},
|
||||
"us-west-2": {"status": "FAIL"},
|
||||
}
|
||||
},
|
||||
"checks_status": {
|
||||
"pass": 1,
|
||||
"fail": 1,
|
||||
"manual": 0,
|
||||
"total": 2,
|
||||
},
|
||||
"status": "FAIL",
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
created_objects = []
|
||||
mock_create_objects.side_effect = (
|
||||
lambda tenant_id, model, objs, batch_size=500: created_objects.extend(
|
||||
objs
|
||||
)
|
||||
)
|
||||
|
||||
create_compliance_requirements(str(tenant.id), str(scan.id))
|
||||
|
||||
assert len(created_objects) == 4
|
||||
req_1_objects = [
|
||||
obj for obj in created_objects if obj.requirement_id == "req_1"
|
||||
]
|
||||
req_2_objects = [
|
||||
obj for obj in created_objects if obj.requirement_id == "req_2"
|
||||
]
|
||||
assert len(req_1_objects) == 2
|
||||
assert len(req_2_objects) == 2
|
||||
assert all(obj.requirement_status == "PASS" for obj in req_1_objects)
|
||||
assert all(obj.requirement_status == "FAIL" for obj in req_2_objects)
|
||||
|
||||
@@ -0,0 +1,128 @@
import random
from collections import defaultdict

import requests
from locust import events, task
from utils.helpers import APIUserBase, get_api_token, get_auth_headers

GLOBAL = {
    "token": None,
    "available_scans_info": {},
}
SUPPORTED_COMPLIANCE_IDS = {
    "aws": ["ens_rd2022", "cis_2.0", "prowler_threatscore", "soc2"],
    "gcp": ["ens_rd2022", "cis_2.0", "prowler_threatscore", "soc2"],
    "azure": ["ens_rd2022", "cis_2.0", "prowler_threatscore", "soc2"],
    "m365": ["cis_4.0", "iso27001_2022", "prowler_threatscore"],
}


def _get_random_scan() -> tuple:
    provider_type = random.choice(list(GLOBAL["available_scans_info"].keys()))
    scan_info = random.choice(GLOBAL["available_scans_info"][provider_type])
    return provider_type, scan_info


def _get_random_compliance_id(provider: str) -> str:
    return f"{random.choice(SUPPORTED_COMPLIANCE_IDS[provider])}_{provider}"


def _get_compliance_available_scans_by_provider_type(host: str, token: str) -> dict:
    excluded_providers = ["kubernetes"]

    response_dict = defaultdict(list)
    provider_response = requests.get(
        f"{host}/providers?fields[providers]=id,provider&filter[connected]=true",
        headers=get_auth_headers(token),
    )
    for provider in provider_response.json()["data"]:
        provider_id = provider["id"]
        provider_type = provider["attributes"]["provider"]
        if provider_type in excluded_providers:
            continue

        scan_response = requests.get(
            f"{host}/scans?fields[scans]=id&filter[provider]={provider_id}&filter[state]=completed",
            headers=get_auth_headers(token),
        )
        scan_data = scan_response.json()["data"]
        if not scan_data:
            continue
        scan_id = scan_data[0]["id"]
        response_dict[provider_type].append(scan_id)
    return response_dict


def _get_compliance_regions_from_scan(host: str, token: str, scan_id: str) -> list:
    response = requests.get(
        f"{host}/compliance-overviews/metadata?filter[scan_id]={scan_id}",
        headers=get_auth_headers(token),
    )
    assert response.status_code == 200, f"Failed to get scan: {response.text}"
    return response.json()["data"]["attributes"]["regions"]


@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    GLOBAL["token"] = get_api_token(environment.host)
    scans_by_provider = _get_compliance_available_scans_by_provider_type(
        environment.host, GLOBAL["token"]
    )
    scan_info = defaultdict(list)
    for provider, scans in scans_by_provider.items():
        for scan in scans:
            scan_info[provider].append(
                {
                    "scan_id": scan,
                    "regions": _get_compliance_regions_from_scan(
                        environment.host, GLOBAL["token"], scan
                    ),
                }
            )
    GLOBAL["available_scans_info"] = scan_info


class APIUser(APIUserBase):
    def on_start(self):
        self.token = GLOBAL["token"]

    @task(3)
    def compliance_overviews_default(self):
        provider_type, scan_info = _get_random_scan()
        name = f"/compliance-overviews ({provider_type})"
        endpoint = f"/compliance-overviews?filter[scan_id]={scan_info['scan_id']}"
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task(2)
    def compliance_overviews_region(self):
        provider_type, scan_info = _get_random_scan()
        name = f"/compliance-overviews?filter[region] ({provider_type})"
        endpoint = (
            f"/compliance-overviews"
            f"?filter[scan_id]={scan_info['scan_id']}"
            f"&filter[region]={random.choice(scan_info['regions'])}"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task(2)
    def compliance_overviews_requirements(self):
        provider_type, scan_info = _get_random_scan()
        compliance_id = _get_random_compliance_id(provider_type)
        name = f"/compliance-overviews/requirements ({compliance_id})"
        endpoint = (
            f"/compliance-overviews/requirements"
            f"?filter[scan_id]={scan_info['scan_id']}"
            f"&filter[compliance_id]={compliance_id}"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)

    @task
    def compliance_overviews_attributes(self):
        provider_type, _ = _get_random_scan()
        compliance_id = _get_random_compliance_id(provider_type)
        name = f"/compliance-overviews/attributes ({compliance_id})"
        endpoint = (
            f"/compliance-overviews/attributes"
            f"?filter[compliance_id]={compliance_id}"
        )
        self.client.get(endpoint, headers=get_auth_headers(self.token), name=name)
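This is a standard Locust scenario; a typical invocation (the scenario filename and API host are assumptions) would be:

```bash
# Hypothetical invocation against a Prowler API deployment
locust -f compliance_overviews.py --host https://api.prowler.example.com/api/v1
```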
Binary file not shown.
@@ -0,0 +1,117 @@
# Prowler Multicloud CIS Benchmarks PowerBI Template

## Getting Started

1. Install Microsoft PowerBI Desktop

   This report requires the Microsoft PowerBI Desktop software, which can be downloaded for free from Microsoft.
2. Run compliance scans in Prowler

   The report uses compliance csv outputs from Prowler. Compliance scans can be run using either the [Prowler CLI](https://docs.prowler.com/projects/prowler-open-source/en/latest/#prowler-cli) or [Prowler Cloud/App](https://cloud.prowler.com/sign-in).
   1. Prowler CLI -> Run a Prowler scan using the --compliance option (see the example below)
   2. Prowler Cloud/App -> Navigate to the compliance section to download csv outputs
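   For example, a CLI run against AWS could look like the following; the exact compliance framework id is an assumption, so list the ids available in your Prowler version with `prowler aws --list-compliance`:

   ```bash
   # Hypothetical example: scan an AWS account and generate the CIS compliance csv output
   prowler aws --compliance cis_4.0.1_aws
   ```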


   The template supports the following CIS Benchmarks only:

   | Compliance Framework                           | Version |
   | ---------------------------------------------- | ------- |
   | CIS Amazon Web Services Foundations Benchmark  | v4.0.1  |
   | CIS Google Cloud Platform Foundation Benchmark | v3.0.0  |
   | CIS Microsoft Azure Foundations Benchmark      | v3.0.0  |
   | CIS Kubernetes Benchmark                       | v1.10.0 |

   Ensure you run or download the correct benchmark versions.
3. Create a local directory to store Prowler csv outputs

   Once downloaded, place your csv outputs in a directory on your local machine. If you rename the files, they must maintain the provider in the filename.

   To use time-series capabilities such as "compliance percent over time" you'll need scans from multiple dates.
4. Download and run the PowerBI template file (.pbit)

   Running the .pbit file will open PowerBI Desktop and prompt you for the full filepath to the local directory.
5. Enter the full filepath to the directory created in step 3

   Provide the full filepath from the root directory.

   Ensure that the filepath is not wrapped in quotation marks (""). If you use Windows' "copy as path" feature, it will automatically include quotation marks.
6. Save the report as a PowerBI file (.pbix)

   Once the filepath is entered, the template will automatically ingest and populate the report. You can then save this file as a new PowerBI report. If you'd like to generate another report, simply re-run the template file (.pbit) from step 4.

## Validation

After setting up your dashboard, you may want to validate that the Prowler csv files were ingested correctly. To do this, navigate to the "Configuration" tab.

The "Loaded CIS Benchmarks" table shows the supported benchmarks and versions. This is defined by the template file and not editable by the user. All benchmarks will be loaded regardless of which providers you provided csv outputs for.

The "Prowler CSV Folder" shows the path to the local directory you provided.

The "Loaded Prowler Exports" table shows the ingested csv files from the local directory. It will mark files that are treated as the latest assessment with a green checkmark.

## Report Sections

The PowerBI report is broken into three main report pages:

| Report Page | Description                                                                       |
| ----------- | --------------------------------------------------------------------------------- |
| Overview    | Provides a general CIS Benchmark overview across AWS, Azure, GCP, and Kubernetes   |
| Benchmark   | Provides an overview of a single CIS Benchmark                                     |
| Requirement | Drill-through page to view details of a single requirement                         |

### Overview Page

The overview page is a general CIS Benchmark overview across AWS, Azure, GCP, and Kubernetes.

The page has the following components:

| Component                                | Description                                                            |
| ---------------------------------------- | ---------------------------------------------------------------------- |
| CIS Benchmark Overview                   | Table with benchmark name, version, and overall compliance percentage  |
| Provider by Requirement Status           | Bar chart showing benchmark requirements by status by provider         |
| Compliance Percent Heatmap               | Heatmap showing compliance percent by benchmark and profile level      |
| Profile Level by Requirement Status      | Bar chart showing requirements by status and profile level             |
| Compliance Percent Over Time by Provider | Line chart showing overall compliance percentage over time by provider |

### Benchmark Page

The benchmark page provides an overview of a single CIS Benchmark. You can select the benchmark from the dropdown as well as scope down to specific profile levels or regions.

The page has the following components:

| Component                                | Description                                                                                                                                |
| ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| Compliance Percent Heatmap               | Heatmap showing compliance percent by region and profile level                                                                             |
| Benchmark Section by Requirement Status  | Bar chart showing benchmark requirements by benchmark section and status                                                                   |
| Compliance Percent Over Time by Region   | Line chart showing overall compliance percentage over time by region                                                                       |
| Benchmark Requirements                   | Table showing requirement section, requirement number, requirement title, number of resources tested, status, and number of failing checks |

### Requirement Page

The requirement page is a drill-through page to view details of a single requirement. To populate the requirement page, right click on a requirement from the "Benchmark Requirements" table on the benchmark page and select "Drill through" -> "Requirement".

The requirement page has the following components:

| Component                                   | Description                                                                      |
| ------------------------------------------- | -------------------------------------------------------------------------------- |
| Title                                       | Title of the requirement                                                          |
| Rationale                                   | Rationale of the requirement                                                      |
| Remediation                                 | Remediation guidance for the requirement                                          |
| Region by Check Status                      | Bar chart showing Prowler checks by region and status                             |
| Resource Checks for Benchmark Requirements  | Table showing Resource ID, Resource Name, Status, Description, and Prowler Check  |

## Walkthrough Video

[Watch the walkthrough video on YouTube](https://www.youtube.com/watch?v=lfKFkTqBxjU)

@@ -2569,6 +2569,356 @@ def get_section_containers_3_levels(data, section_1, section_2, section_3):
|
||||
return html.Div(section_containers, className="compliance-data-layout")
|
||||
|
||||
|
||||
def get_section_containers_threatscore(data, section_1, section_2, section_3):
|
||||
data["STATUS"] = data["STATUS"].apply(map_status_to_icon)
|
||||
findings_counts_marco = (
|
||||
data.groupby([section_1, "STATUS"]).size().unstack(fill_value=0)
|
||||
)
|
||||
section_containers = []
|
||||
data[section_1] = data[section_1].astype(str)
|
||||
data[section_2] = data[section_2].astype(str)
|
||||
data[section_3] = data[section_3].astype(str)
|
||||
|
||||
data.sort_values(
|
||||
by=section_3,
|
||||
key=lambda x: x.map(extract_numeric_values),
|
||||
ascending=True,
|
||||
inplace=True,
|
||||
)
|
||||
|
||||
for marco in data[section_1].unique():
|
||||
success_marco = findings_counts_marco.loc[marco].get(pass_emoji, 0)
|
||||
failed_marco = findings_counts_marco.loc[marco].get(fail_emoji, 0)
|
||||
|
||||
fig_name = go.Figure(
|
||||
[
|
||||
go.Bar(
|
||||
name="Failed",
|
||||
x=[failed_marco],
|
||||
y=[""],
|
||||
orientation="h",
|
||||
marker=dict(color="#e77676"),
|
||||
width=[0.8],
|
||||
),
|
||||
go.Bar(
|
||||
name="Success",
|
||||
x=[success_marco],
|
||||
y=[""],
|
||||
orientation="h",
|
||||
marker=dict(color="#45cc6e"),
|
||||
width=[0.8],
|
||||
),
|
||||
]
|
||||
)
|
||||
fig_name.update_layout(
|
||||
barmode="stack",
|
||||
margin=dict(l=10, r=10, t=10, b=10),
|
||||
paper_bgcolor="rgba(0,0,0,0)",
|
||||
plot_bgcolor="rgba(0,0,0,0)",
|
||||
showlegend=False,
|
||||
width=350,
|
||||
height=30,
|
||||
xaxis=dict(showticklabels=False, showgrid=False, zeroline=False),
|
||||
yaxis=dict(showticklabels=False, showgrid=False, zeroline=False),
|
||||
annotations=[
|
||||
dict(
|
||||
x=success_marco + failed_marco,
|
||||
y=0,
|
||||
xref="x",
|
||||
yref="y",
|
||||
text=str(success_marco),
|
||||
showarrow=False,
|
||||
font=dict(color="#45cc6e", size=14),
|
||||
xanchor="left",
|
||||
yanchor="middle",
|
||||
),
|
||||
dict(
|
||||
x=0,
|
||||
y=0,
|
||||
xref="x",
|
||||
yref="y",
|
||||
text=str(failed_marco),
|
||||
showarrow=False,
|
||||
font=dict(color="#e77676", size=14),
|
||||
xanchor="right",
|
||||
yanchor="middle",
|
||||
),
|
||||
],
|
||||
)
|
||||
fig_name.add_annotation(
|
||||
x=failed_marco,
|
||||
y=0.3,
|
||||
text="|",
|
||||
showarrow=False,
|
||||
font=dict(size=20),
|
||||
xanchor="center",
|
||||
yanchor="middle",
|
||||
)
|
||||
|
||||
graph_div = html.Div(
|
||||
dcc.Graph(
|
||||
figure=fig_name, config={"staticPlot": True}, className="info-bar"
|
||||
),
|
||||
className="graph-section",
|
||||
)
|
||||
direct_internal_items = []
|
||||
|
||||
for categoria in data[data[section_1] == marco][section_2].unique():
|
            specific_data = data[
                (data[section_1] == marco) & (data[section_2] == categoria)
            ]
            findings_counts_categoria = (
                specific_data.groupby([section_2, "STATUS"])
                .size()
                .unstack(fill_value=0)
            )
            success_categoria = findings_counts_categoria.loc[categoria].get(
                pass_emoji, 0
            )
            failed_categoria = findings_counts_categoria.loc[categoria].get(
                fail_emoji, 0
            )

            fig_section = go.Figure(
                [
                    go.Bar(
                        name="Failed",
                        x=[failed_categoria],
                        y=[""],
                        orientation="h",
                        marker=dict(color="#e77676"),
                        width=[0.8],
                    ),
                    go.Bar(
                        name="Success",
                        x=[success_categoria],
                        y=[""],
                        orientation="h",
                        marker=dict(color="#45cc6e"),
                        width=[0.8],
                    ),
                ]
            )
            fig_section.update_layout(
                barmode="stack",
                margin=dict(l=10, r=10, t=10, b=10),
                paper_bgcolor="rgba(0,0,0,0)",
                plot_bgcolor="rgba(0,0,0,0)",
                showlegend=False,
                width=350,
                height=30,
                xaxis=dict(showticklabels=False, showgrid=False, zeroline=False),
                yaxis=dict(showticklabels=False, showgrid=False, zeroline=False),
                annotations=[
                    dict(
                        x=success_categoria + failed_categoria,
                        y=0,
                        xref="x",
                        yref="y",
                        text=str(success_categoria),
                        showarrow=False,
                        font=dict(color="#45cc6e", size=14),
                        xanchor="left",
                        yanchor="middle",
                    ),
                    dict(
                        x=0,
                        y=0,
                        xref="x",
                        yref="y",
                        text=str(failed_categoria),
                        showarrow=False,
                        font=dict(color="#e77676", size=14),
                        xanchor="right",
                        yanchor="middle",
                    ),
                ],
            )
            fig_section.add_annotation(
                x=failed_categoria,
                y=0.3,
                text="|",
                showarrow=False,
                font=dict(size=20),
                xanchor="center",
                yanchor="middle",
            )

            graph_div_section = html.Div(
                dcc.Graph(
                    figure=fig_section,
                    config={"staticPlot": True},
                    className="info-bar-child",
                ),
                className="graph-section-req",
            )
            direct_internal_items_idgrupocontrol = []

            for idgrupocontrol in specific_data[section_3].unique():
                specific_data2 = specific_data[
                    specific_data[section_3] == idgrupocontrol
                ]
                findings_counts_idgrupocontrol = (
                    specific_data2.groupby([section_3, "STATUS"])
                    .size()
                    .unstack(fill_value=0)
                )
                success_idgrupocontrol = findings_counts_idgrupocontrol.loc[
                    idgrupocontrol
                ].get(pass_emoji, 0)
                failed_idgrupocontrol = findings_counts_idgrupocontrol.loc[
                    idgrupocontrol
                ].get(fail_emoji, 0)

                fig_idgrupocontrol = go.Figure(
                    [
                        go.Bar(
                            name="Failed",
                            x=[failed_idgrupocontrol],
                            y=[""],
                            orientation="h",
                            marker=dict(color="#e77676"),
                            width=[0.8],
                        ),
                        go.Bar(
                            name="Success",
                            x=[success_idgrupocontrol],
                            y=[""],
                            orientation="h",
                            marker=dict(color="#45cc6e"),
                            width=[0.8],
                        ),
                    ]
                )
                fig_idgrupocontrol.update_layout(
                    barmode="stack",
                    margin=dict(l=10, r=10, t=10, b=10),
                    paper_bgcolor="rgba(0,0,0,0)",
                    plot_bgcolor="rgba(0,0,0,0)",
                    showlegend=False,
                    width=350,
                    height=30,
                    xaxis=dict(showticklabels=False, showgrid=False, zeroline=False),
                    yaxis=dict(showticklabels=False, showgrid=False, zeroline=False),
                    annotations=[
                        dict(
                            x=success_idgrupocontrol + failed_idgrupocontrol,
                            y=0,
                            xref="x",
                            yref="y",
                            text=str(success_idgrupocontrol),
                            showarrow=False,
                            font=dict(color="#45cc6e", size=14),
                            xanchor="left",
                            yanchor="middle",
                        ),
                        dict(
                            x=0,
                            y=0,
                            xref="x",
                            yref="y",
                            text=str(failed_idgrupocontrol),
                            showarrow=False,
                            font=dict(color="#e77676", size=14),
                            xanchor="right",
                            yanchor="middle",
                        ),
                    ],
                )
                fig_idgrupocontrol.add_annotation(
                    x=failed_idgrupocontrol,
                    y=0.3,
                    text="|",
                    showarrow=False,
                    font=dict(size=20),
                    xanchor="center",
                    yanchor="middle",
                )

                graph_div_idgrupocontrol = html.Div(
                    dcc.Graph(
                        figure=fig_idgrupocontrol,
                        config={"staticPlot": True},
                        className="info-bar-child",
                    ),
                    className="graph-section-req",
                )

                data_table = dash_table.DataTable(
                    data=specific_data2.to_dict("records"),
                    columns=[
                        {"name": i, "id": i}
                        for i in [
                            "CHECKID",
                            "STATUS",
                            "REGION",
                            "ACCOUNTID",
                            "RESOURCEID",
                        ]
                    ],
                    style_table={"overflowX": "auto"},
                    style_as_list_view=True,
                    style_cell={"textAlign": "left", "padding": "5px"},
                )

                title_internal = f"{idgrupocontrol} - {specific_data2['REQUIREMENTS_DESCRIPTION'].iloc[0]}"

                # Cut the title if it's too long
                title_internal = (
                    title_internal[:130] + " ..."
                    if len(title_internal) > 130
                    else title_internal
                )

                internal_accordion_item_2 = dbc.AccordionItem(
                    title=title_internal,
                    children=[
                        graph_div_idgrupocontrol,
                        html.Div([data_table], className="inner-accordion-content"),
                    ],
                )
                direct_internal_items_idgrupocontrol.append(
                    html.Div(
                        [
                            graph_div_idgrupocontrol,
                            dbc.Accordion(
                                [internal_accordion_item_2],
                                start_collapsed=True,
                                flush=True,
                            ),
                        ],
                        className="accordion-inner--child",
                    )
                )

            internal_accordion_item = dbc.AccordionItem(
                title=categoria,
                children=direct_internal_items_idgrupocontrol,
            )
            internal_section_container = html.Div(
                [
                    graph_div_section,
                    dbc.Accordion(
                        [internal_accordion_item], start_collapsed=True, flush=True
                    ),
                ],
                className="accordion-inner--child",
            )
            direct_internal_items.append(internal_section_container)

        accordion_item = dbc.AccordionItem(title=marco, children=direct_internal_items)
        section_container = html.Div(
            [
                graph_div,
                dbc.Accordion([accordion_item], start_collapsed=True, flush=True),
            ],
            className="accordion-inner",
        )
        section_containers.append(section_container)

    return html.Div(section_containers, className="compliance-data-layout")


# This function extracts and compares up to two numeric values, ensuring correct sorting for version-like strings.
def extract_numeric_values(value):
    numbers = re.findall(r"\d+", str(value))
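The body of `extract_numeric_values` is truncated at the diff boundary here. A minimal completion consistent with its comment might look like the following sketch; padding missing components with zero is an assumption:

```python
# Assumed completion: return a 2-tuple of the first two numeric components,
# so that a version-like string such as "1.10" sorts after "1.9",
# because (1, 10) > (1, 9).
    if len(numbers) >= 2:
        return int(numbers[0]), int(numbers[1])
    if len(numbers) == 1:
        return int(numbers[0]), 0
    return 0, 0
```

Used as a sort key, e.g. `sorted(ids, key=extract_numeric_values)`, this yields numeric rather than lexicographic ordering.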
@@ -0,0 +1,24 @@
import warnings

from dashboard.common_methods import get_section_containers_cis

warnings.filterwarnings("ignore")


def get_table(data):
    aux = data[
        [
            "REQUIREMENTS_ID",
            "REQUIREMENTS_DESCRIPTION",
            "REQUIREMENTS_ATTRIBUTES_SECTION",
            "CHECKID",
            "STATUS",
            "REGION",
            "ACCOUNTID",
            "RESOURCEID",
        ]
    ].copy()

    return get_section_containers_cis(
        aux, "REQUIREMENTS_ID", "REQUIREMENTS_ATTRIBUTES_SECTION"
    )
@@ -0,0 +1,24 @@
import warnings

from dashboard.common_methods import get_section_containers_cis

warnings.filterwarnings("ignore")


def get_table(data):
    aux = data[
        [
            "REQUIREMENTS_ID",
            "REQUIREMENTS_DESCRIPTION",
            "REQUIREMENTS_ATTRIBUTES_SECTION",
            "CHECKID",
            "STATUS",
            "REGION",
            "ACCOUNTID",
            "RESOURCEID",
        ]
    ].copy()

    return get_section_containers_cis(
        aux, "REQUIREMENTS_ID", "REQUIREMENTS_ATTRIBUTES_SECTION"
    )
@@ -0,0 +1,23 @@
import warnings

from dashboard.common_methods import get_section_container_iso

warnings.filterwarnings("ignore")


def get_table(data):
    aux = data[
        [
            "REQUIREMENTS_ATTRIBUTES_CATEGORY",
            "REQUIREMENTS_ATTRIBUTES_OBJETIVE_ID",
            "REQUIREMENTS_ATTRIBUTES_OBJETIVE_NAME",
            "CHECKID",
            "STATUS",
            "REGION",
            "ACCOUNTID",
            "RESOURCEID",
        ]
    ]
    return get_section_container_iso(
        aux, "REQUIREMENTS_ATTRIBUTES_CATEGORY", "REQUIREMENTS_ATTRIBUTES_OBJETIVE_ID"
    )
@@ -0,0 +1,43 @@
import warnings

from dashboard.common_methods import get_section_containers_3_levels

warnings.filterwarnings("ignore")


def get_table(data):
    data["REQUIREMENTS_DESCRIPTION"] = (
        data["REQUIREMENTS_ID"] + " - " + data["REQUIREMENTS_DESCRIPTION"]
    )

    data["REQUIREMENTS_DESCRIPTION"] = data["REQUIREMENTS_DESCRIPTION"].apply(
        lambda x: x[:150] + "..." if len(str(x)) > 150 else x
    )

    data["REQUIREMENTS_ATTRIBUTES_SECTION"] = data[
        "REQUIREMENTS_ATTRIBUTES_SECTION"
    ].apply(lambda x: x[:80] + "..." if len(str(x)) > 80 else x)

    data["REQUIREMENTS_ATTRIBUTES_SUBSECTION"] = data[
        "REQUIREMENTS_ATTRIBUTES_SUBSECTION"
    ].apply(lambda x: x[:150] + "..." if len(str(x)) > 150 else x)

    aux = data[
        [
            "REQUIREMENTS_DESCRIPTION",
            "REQUIREMENTS_ATTRIBUTES_SECTION",
            "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
            "CHECKID",
            "STATUS",
            "REGION",
            "ACCOUNTID",
            "RESOURCEID",
        ]
    ]

    return get_section_containers_3_levels(
        aux,
        "REQUIREMENTS_ATTRIBUTES_SECTION",
        "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
        "REQUIREMENTS_DESCRIPTION",
    )
@@ -0,0 +1,43 @@
import warnings

from dashboard.common_methods import get_section_containers_3_levels

warnings.filterwarnings("ignore")


def get_table(data):
    data["REQUIREMENTS_DESCRIPTION"] = (
        data["REQUIREMENTS_ID"] + " - " + data["REQUIREMENTS_DESCRIPTION"]
    )

    data["REQUIREMENTS_DESCRIPTION"] = data["REQUIREMENTS_DESCRIPTION"].apply(
        lambda x: x[:150] + "..." if len(str(x)) > 150 else x
    )

    data["REQUIREMENTS_ATTRIBUTES_SECTION"] = data[
        "REQUIREMENTS_ATTRIBUTES_SECTION"
    ].apply(lambda x: x[:80] + "..." if len(str(x)) > 80 else x)

    data["REQUIREMENTS_ATTRIBUTES_SUBSECTION"] = data[
        "REQUIREMENTS_ATTRIBUTES_SUBSECTION"
    ].apply(lambda x: x[:150] + "..." if len(str(x)) > 150 else x)

    aux = data[
        [
            "REQUIREMENTS_DESCRIPTION",
            "REQUIREMENTS_ATTRIBUTES_SECTION",
            "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
            "CHECKID",
            "STATUS",
            "REGION",
            "ACCOUNTID",
            "RESOURCEID",
        ]
    ]

    return get_section_containers_3_levels(
        aux,
        "REQUIREMENTS_ATTRIBUTES_SECTION",
        "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
        "REQUIREMENTS_DESCRIPTION",
    )
@@ -0,0 +1,43 @@
import warnings

from dashboard.common_methods import get_section_containers_3_levels

warnings.filterwarnings("ignore")


def get_table(data):
    data["REQUIREMENTS_DESCRIPTION"] = (
        data["REQUIREMENTS_ID"] + " - " + data["REQUIREMENTS_DESCRIPTION"]
    )

    data["REQUIREMENTS_DESCRIPTION"] = data["REQUIREMENTS_DESCRIPTION"].apply(
        lambda x: x[:150] + "..." if len(str(x)) > 150 else x
    )

    data["REQUIREMENTS_ATTRIBUTES_SECTION"] = data[
        "REQUIREMENTS_ATTRIBUTES_SECTION"
    ].apply(lambda x: x[:80] + "..." if len(str(x)) > 80 else x)

    data["REQUIREMENTS_ATTRIBUTES_SUBSECTION"] = data[
        "REQUIREMENTS_ATTRIBUTES_SUBSECTION"
    ].apply(lambda x: x[:150] + "..." if len(str(x)) > 150 else x)

    aux = data[
        [
            "REQUIREMENTS_DESCRIPTION",
            "REQUIREMENTS_ATTRIBUTES_SECTION",
            "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
            "CHECKID",
            "STATUS",
            "REGION",
            "ACCOUNTID",
            "RESOURCEID",
        ]
    ]

    return get_section_containers_3_levels(
        aux,
        "REQUIREMENTS_ATTRIBUTES_SECTION",
        "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
        "REQUIREMENTS_DESCRIPTION",
    )
@@ -1,6 +1,6 @@
 import warnings
 
-from dashboard.common_methods import get_section_containers_cis
+from dashboard.common_methods import get_section_containers_threatscore
 
 warnings.filterwarnings("ignore")
 
@@ -11,6 +11,7 @@ def get_table(data):
             "REQUIREMENTS_ID",
             "REQUIREMENTS_DESCRIPTION",
             "REQUIREMENTS_ATTRIBUTES_SECTION",
+            "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
             "CHECKID",
             "STATUS",
             "REGION",
@@ -19,6 +20,9 @@ def get_table(data):
         ]
     ].copy()
 
-    return get_section_containers_cis(
-        aux, "REQUIREMENTS_ID", "REQUIREMENTS_ATTRIBUTES_SECTION"
+    return get_section_containers_threatscore(
+        aux,
+        "REQUIREMENTS_ATTRIBUTES_SECTION",
+        "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
+        "REQUIREMENTS_ID",
     )
@@ -1,6 +1,6 @@
 import warnings
 
-from dashboard.common_methods import get_section_containers_cis
+from dashboard.common_methods import get_section_containers_threatscore
 
 warnings.filterwarnings("ignore")
 
@@ -11,6 +11,7 @@ def get_table(data):
             "REQUIREMENTS_ID",
             "REQUIREMENTS_DESCRIPTION",
             "REQUIREMENTS_ATTRIBUTES_SECTION",
+            "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
             "CHECKID",
             "STATUS",
             "REGION",
@@ -19,6 +20,9 @@ def get_table(data):
         ]
     ].copy()
 
-    return get_section_containers_cis(
-        aux, "REQUIREMENTS_ID", "REQUIREMENTS_ATTRIBUTES_SECTION"
+    return get_section_containers_threatscore(
+        aux,
+        "REQUIREMENTS_ATTRIBUTES_SECTION",
+        "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
+        "REQUIREMENTS_ID",
    )
@@ -1,6 +1,6 @@
 import warnings
 
-from dashboard.common_methods import get_section_containers_cis
+from dashboard.common_methods import get_section_containers_threatscore
 
 warnings.filterwarnings("ignore")
 
@@ -11,6 +11,7 @@ def get_table(data):
             "REQUIREMENTS_ID",
             "REQUIREMENTS_DESCRIPTION",
             "REQUIREMENTS_ATTRIBUTES_SECTION",
+            "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
             "CHECKID",
             "STATUS",
             "REGION",
@@ -19,6 +20,9 @@ def get_table(data):
         ]
     ].copy()
 
-    return get_section_containers_cis(
-        aux, "REQUIREMENTS_ID", "REQUIREMENTS_ATTRIBUTES_SECTION"
+    return get_section_containers_threatscore(
+        aux,
+        "REQUIREMENTS_ATTRIBUTES_SECTION",
+        "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
+        "REQUIREMENTS_ID",
     )
@@ -1,6 +1,6 @@
 import warnings
 
-from dashboard.common_methods import get_section_containers_cis
+from dashboard.common_methods import get_section_containers_threatscore
 
 warnings.filterwarnings("ignore")
 
@@ -11,6 +11,7 @@ def get_table(data):
             "REQUIREMENTS_ID",
             "REQUIREMENTS_DESCRIPTION",
             "REQUIREMENTS_ATTRIBUTES_SECTION",
+            "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
             "CHECKID",
             "STATUS",
             "REGION",
@@ -19,6 +20,9 @@ def get_table(data):
         ]
     ].copy()
 
-    return get_section_containers_cis(
-        aux, "REQUIREMENTS_ID", "REQUIREMENTS_ATTRIBUTES_SECTION"
+    return get_section_containers_threatscore(
+        aux,
+        "REQUIREMENTS_ATTRIBUTES_SECTION",
+        "REQUIREMENTS_ATTRIBUTES_SUBSECTION",
+        "REQUIREMENTS_ID",
     )
@@ -0,0 +1,24 @@
import warnings

from dashboard.common_methods import get_section_containers_format3

warnings.filterwarnings("ignore")


def get_table(data):
    aux = data[
        [
            "REQUIREMENTS_ID",
            "REQUIREMENTS_DESCRIPTION",
            "REQUIREMENTS_ATTRIBUTES_SECTION",
            "CHECKID",
            "STATUS",
            "REGION",
            "ACCOUNTID",
            "RESOURCEID",
        ]
    ].copy()

    return get_section_containers_format3(
        aux, "REQUIREMENTS_ATTRIBUTES_SECTION", "REQUIREMENTS_ID"
    )
@@ -90,12 +90,28 @@ def create_layout_overview(
         ),
         html.Div(
             [
-                (
-                    html.Label(
-                        "Table Rows:",
-                        className="text-prowler-stone-900 font-bold text-sm",
-                        style={"margin-right": "10px"},
-                    )
-                ),
+                html.Label(
+                    "Search:",
+                    className="text-prowler-stone-900 font-bold text-sm",
+                    style={"margin-right": "10px"},
+                ),
+                dcc.Input(
+                    id="search-input",
+                    type="text",
+                    placeholder="Search by check title, service, region...",
+                    debounce=True,
+                    style={
+                        "padding": "4px 8px",
+                        "border": "1px solid #ccc",
+                        "borderRadius": "4px",
+                        "marginRight": "20px",
+                        "width": "250px",
+                    },
+                ),
+                html.Label(
+                    "Table Rows:",
+                    className="text-prowler-stone-900 font-bold text-sm",
+                    style={"margin-right": "10px"},
+                ),
                 table_row_dropdown,
                 download_button_csv,
@@ -651,58 +651,114 @@ def get_table(current_compliance, table):
 
 
 def get_threatscore_mean_by_pillar(df):
-    modified_df = df[df["STATUS"] == "FAIL"]
+    score_per_pillar = {}
+    max_score_per_pillar = {}
 
-    modified_df["REQUIREMENTS_ATTRIBUTES_LEVELOFRISK"] = pd.to_numeric(
-        modified_df["REQUIREMENTS_ATTRIBUTES_LEVELOFRISK"], errors="coerce"
-    )
+    for _, row in df.iterrows():
+        pillar = (
+            row["REQUIREMENTS_ATTRIBUTES_SECTION"].split(" - ")[0]
+            if isinstance(row["REQUIREMENTS_ATTRIBUTES_SECTION"], str)
+            else "Unknown"
+        )
 
-    pillar_means = (
-        modified_df.groupby("REQUIREMENTS_ATTRIBUTES_SECTION")[
-            "REQUIREMENTS_ATTRIBUTES_LEVELOFRISK"
-        ]
-        .mean()
-        .round(2)
-    )
+        if pillar not in score_per_pillar:
+            score_per_pillar[pillar] = 0
+            max_score_per_pillar[pillar] = 0
+
+        level_of_risk = pd.to_numeric(
+            row["REQUIREMENTS_ATTRIBUTES_LEVELOFRISK"], errors="coerce"
+        )
+        level_of_risk = 1 if pd.isna(level_of_risk) else level_of_risk
+
+        weight = 1
+        if "REQUIREMENTS_ATTRIBUTES_WEIGHT" in row and not pd.isna(
+            row["REQUIREMENTS_ATTRIBUTES_WEIGHT"]
+        ):
+            weight = pd.to_numeric(
+                row["REQUIREMENTS_ATTRIBUTES_WEIGHT"], errors="coerce"
+            )
+            weight = 1 if pd.isna(weight) else weight
+
+        max_score_per_pillar[pillar] += level_of_risk * weight
+
+        if row["STATUS"] == "PASS":
+            score_per_pillar[pillar] += level_of_risk * weight
 
     output = []
-    for pillar, mean in pillar_means.items():
-        output.append(f"{pillar} - [{mean}]")
+    for pillar in max_score_per_pillar:
+        risk_score = 0
+        if max_score_per_pillar[pillar] > 0:
+            risk_score = (score_per_pillar[pillar] / max_score_per_pillar[pillar]) * 100
+
+        output.append(f"{pillar} - [{risk_score:.1f}%]")
 
     for value in output:
-        if value.split(" - ")[0] in df["REQUIREMENTS_ATTRIBUTES_SECTION"].values:
+        base_pillar = value.split(" - ")[0]
+        if base_pillar in df["REQUIREMENTS_ATTRIBUTES_SECTION"].values:
             df.loc[
-                df["REQUIREMENTS_ATTRIBUTES_SECTION"] == value.split(" - ")[0],
+                df["REQUIREMENTS_ATTRIBUTES_SECTION"] == base_pillar,
                 "REQUIREMENTS_ATTRIBUTES_SECTION",
             ] = value
 
     return df
 
 
 def get_table_prowler_threatscore(df):
-    df = df[df["STATUS"] == "FAIL"]
+    score_per_pillar = {}
+    max_score_per_pillar = {}
+    pillars = {}
 
-    # Delete " - " from the column REQUIREMENTS_ATTRIBUTES_SECTION
-    df["REQUIREMENTS_ATTRIBUTES_SECTION"] = (
-        df["REQUIREMENTS_ATTRIBUTES_SECTION"].str.split(" - ").str[0]
-    )
+    df_copy = df.copy()
 
     df["REQUIREMENTS_ATTRIBUTES_LEVELOFRISK"] = pd.to_numeric(
         df["REQUIREMENTS_ATTRIBUTES_LEVELOFRISK"], errors="coerce"
     )
 
-    score_df = (
-        df.groupby("REQUIREMENTS_ATTRIBUTES_SECTION")[
-            "REQUIREMENTS_ATTRIBUTES_LEVELOFRISK"
-        ]
-        .mean()
-        .reset_index()
-        .rename(
-            columns={
-                "REQUIREMENTS_ATTRIBUTES_SECTION": "Pillar",
-                "REQUIREMENTS_ATTRIBUTES_LEVELOFRISK": "Score",
-            }
-        )
-    )
+    for _, row in df_copy.iterrows():
+        pillar = (
+            row["REQUIREMENTS_ATTRIBUTES_SECTION"].split(" - ")[0]
+            if isinstance(row["REQUIREMENTS_ATTRIBUTES_SECTION"], str)
+            else "Unknown"
+        )
+
+        if pillar not in pillars:
+            pillars[pillar] = {"FAIL": 0, "PASS": 0, "MUTED": 0}
+            score_per_pillar[pillar] = 0
+            max_score_per_pillar[pillar] = 0
+
+        level_of_risk = pd.to_numeric(
+            row["REQUIREMENTS_ATTRIBUTES_LEVELOFRISK"], errors="coerce"
+        )
+        level_of_risk = 1 if pd.isna(level_of_risk) else level_of_risk
+
+        weight = 1
+        if "REQUIREMENTS_ATTRIBUTES_WEIGHT" in row and not pd.isna(
+            row["REQUIREMENTS_ATTRIBUTES_WEIGHT"]
+        ):
+            weight = pd.to_numeric(
+                row["REQUIREMENTS_ATTRIBUTES_WEIGHT"], errors="coerce"
+            )
+            weight = 1 if pd.isna(weight) else weight
+
+        max_score_per_pillar[pillar] += level_of_risk * weight
+
+        if row["STATUS"] == "PASS":
+            pillars[pillar]["PASS"] += 1
+            score_per_pillar[pillar] += level_of_risk * weight
+        elif row["STATUS"] == "FAIL":
+            pillars[pillar]["FAIL"] += 1
+
+        if "MUTED" in row and row["MUTED"] == "True":
+            pillars[pillar]["MUTED"] += 1
+
+    result_df = []
+
+    for pillar in pillars.keys():
+        risk_score = 0
+        if max_score_per_pillar[pillar] > 0:
+            risk_score = (score_per_pillar[pillar] / max_score_per_pillar[pillar]) * 100
+
+        result_df.append({"Pillar": pillar, "Score": risk_score})
+
+    score_df = pd.DataFrame(result_df)
 
     score_df = score_df.sort_values("Score", ascending=True)
 
     fig = px.bar(
         score_df,
@@ -710,22 +766,25 @@ def get_table_prowler_threatscore(df):
         y="Score",
         color="Score",
         color_continuous_scale=[
-            "#45cc6e",
-            "#f4d44d",
             "#e77676",
-        ],  # green → yellow → red
-        hover_data={"Score": True, "Pillar": True},
-        labels={"Score": "Average Risk Score", "Pillar": "Section"},
+            "#f4d44d",
+            "#45cc6e",
+        ],
+        labels={"Score": "Risk Score (%)", "Pillar": "Section"},
         height=400,
+        text="Score",
     )
 
+    fig.update_traces(texttemplate="%{text:.1f}%", textposition="outside")
+
     fig.update_layout(
         xaxis_title="Pillar",
-        yaxis_title="Level of Risk",
+        yaxis_title="Risk Score (%)",
         margin=dict(l=20, r=20, t=30, b=20),
         plot_bgcolor="rgba(0,0,0,0)",
         paper_bgcolor="rgba(0,0,0,0)",
-        coloraxis_colorbar=dict(title="Risk"),
+        coloraxis_colorbar=dict(title="Risk %"),
+        yaxis=dict(range=[0, 110]),
     )
 
     return dcc.Graph(
@@ -83,7 +83,18 @@ def load_csv_files(csv_files):
     """Load CSV files into a single pandas DataFrame."""
     dfs = []
     for file in csv_files:
-        df = pd.read_csv(file, sep=";", on_bad_lines="skip")
+        account_columns = ["ACCOUNT_ID", "ACCOUNT_UID", "SUBSCRIPTION"]
+
+        df_sample = pd.read_csv(file, sep=";", on_bad_lines="skip", nrows=1)
+
+        dtype_dict = {}
+        for col in account_columns:
+            if col in df_sample.columns:
+                dtype_dict[col] = str
+
+        # Read the full file with proper dtypes
+        df = pd.read_csv(file, sep=";", on_bad_lines="skip", dtype=dtype_dict)
+
         if "CHECK_ID" in df.columns:
             if "TIMESTAMP" in df.columns or df["PROVIDER"].unique() == "aws":
                 dfs.append(df.astype(str))
@@ -120,7 +131,6 @@ if data is None:
         ]
     )
 else:
-
     # This handles the case where we are using v3 outputs
     if "ASSESSMENT_START_TIME" in data.columns:
         data["ASSESSMENT_START_TIME"] = data["ASSESSMENT_START_TIME"].str.replace(
@@ -518,6 +528,7 @@ else:
     Input("service-filter", "value"),
     Input("table-rows", "value"),
     Input("status-filter", "value"),
+    Input("search-input", "value"),
    Input("aws_card", "n_clicks"),
    Input("azure_card", "n_clicks"),
    Input("gcp_card", "n_clicks"),
@@ -540,6 +551,7 @@ def filter_data(
     service_values,
     table_row_values,
     status_values,
+    search_value,
     aws_clicks,
     azure_clicks,
     gcp_clicks,
@@ -1144,6 +1156,15 @@ def filter_data(
     }
 
     index_count = 0
+    if search_value:
+        search_value = search_value.lower()
+        filtered_data = filtered_data[
+            filtered_data["CHECK_TITLE"].str.lower().str.contains(search_value)
+            | filtered_data["SERVICE_NAME"].str.lower().str.contains(search_value)
+            | filtered_data["REGION"].str.lower().str.contains(search_value)
+            | filtered_data["STATUS"].str.lower().str.contains(search_value)
+        ]
+
     full_filtered_data = filtered_data.copy()
     filtered_data = filtered_data.head(table_row_values)
     # Sort the filtered_data
@@ -0,0 +1,122 @@
# AWS Provider

On this page you can find all the details about the [Amazon Web Services (AWS)](https://aws.amazon.com/) provider implementation in Prowler.

By default, Prowler will audit just one account and its organization settings per scan. To configure it, follow the [getting started](../index.md#aws) page.

## AWS Provider Classes Architecture

The AWS provider implementation follows the general [Provider structure](./provider.md). This section focuses on the AWS-specific implementation, highlighting how the generic provider concepts are realized for AWS in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see the [Provider documentation](./provider.md). The next subsections list the main classes of the AWS provider.

### `AwsProvider` (Main Class)

- **Location:** [`prowler/providers/aws/aws_provider.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/aws_provider.py)
- **Base Class:** Inherits from `Provider` (see [base class details](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/common/provider.py)).
- **Purpose:** Central orchestrator for AWS-specific logic, session management, credential validation, role assumption, region and organization discovery, and configuration.
- **Key AWS Responsibilities:**
    - Initializes and manages AWS sessions (with or without role assumption, MFA, etc.).
    - Validates credentials and sets up the AWS identity context.
    - Loads and manages configuration, mutelist, and fixer settings.
    - Discovers enabled AWS regions and organization metadata.
    - Provides properties and methods for downstream AWS service classes to access session, identity, and configuration data.

### Data Models

- **Location:** [`prowler/providers/aws/models.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/models.py)
- **Purpose:** Define structured data for AWS identity, session, credentials, organization info, and more.
- **Key AWS Models:**
    - `AWSOrganizationsInfo`: Holds AWS Organizations metadata, to be used by the checks.
    - `AWSCredentials`, `AWSAssumeRoleInfo`, `AWSAssumeRoleConfiguration`: Used for role assumption and session management.
    - `AWSIdentityInfo`: Stores account, user, partition, and region context for the scan.
    - `AWSSession`: Wraps the current and original [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) sessions and config.

### `AWSService` (Service Base Class)

- **Location:** [`prowler/providers/aws/lib/service/service.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/lib/service/service.py)
- **Purpose:** Abstract base class that all AWS service-specific classes inherit from. This implements the generic service pattern (described in the [service page](./services.md#service-base-class)) specifically for AWS.
- **Key AWS Responsibilities:**
    - Receives an `AwsProvider` instance to access session, identity, and configuration.
    - Manages clients for all services by region.
    - Provides the `__threading_call__` method to make boto3 calls in parallel. By default, these calls are made per region, but the first parameter of the method can override this so that the calls iterate over a list of resources instead (see the sketch after this list).
    - Exposes common audit context (`audited_account`, `audited_account_arn`, `audited_partition`, `audited_resources`) to subclasses.
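A minimal sketch of the two `__threading_call__` invocation modes described above; the service, method, and attribute names are illustrative placeholders, not an existing Prowler service:

```python
from prowler.providers.aws.lib.service.service import AWSService


class ExampleService(AWSService):
    """Illustrative only: names here are hypothetical placeholders."""

    def __init__(self, provider):
        super().__init__(__class__.__name__, provider)
        self.resources = {}
        # Default mode: one threaded call per enabled region; each call
        # receives the corresponding regional client as its argument.
        self.__threading_call__(self._describe_resources)
        # Iterator mode: one threaded call per item of the iterable passed
        # as the extra parameter, here each previously discovered resource.
        self.__threading_call__(self._describe_resource_details, self.resources.values())

    def _describe_resources(self, regional_client):
        ...

    def _describe_resource_details(self, resource):
        ...
```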
### Exception Handling

- **Location:** [`prowler/providers/aws/exceptions/exceptions.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/exceptions/exceptions.py)
- **Purpose:** Custom exception classes for AWS-specific error handling, such as credential and role errors.

### Session and Utility Helpers

- **Location:** [`prowler/providers/aws/lib/`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/aws/lib/)
- **Purpose:** Helpers for session setup, ARN parsing, mutelist management, and other cross-cutting concerns.

## Specific Patterns in AWS Services

The generic service pattern is described in the [service page](./services.md#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:

- Directly in the code, at [`prowler/providers/aws/services/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/aws/services)
- In the [Prowler Hub](https://hub.prowler.com/), for a more human-readable view.

The best reference to understand how to implement a new service is to follow the [service implementation documentation](./services.md#adding-a-new-service) and take already implemented services as reference. The next subsection lists common patterns that are used across all AWS services; a condensed sketch follows the list.

### AWS Service Common Patterns

- Services communicate with AWS using boto3; you can find the documentation for all the services [here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html).
- Every AWS service class inherits from `AWSService`, ensuring access to session, identity, configuration, and threading utilities.
- The constructor (`__init__`) always calls `super().__init__` with the service name and provider (e.g. `super().__init__(__class__.__name__, provider)`). Ensure that the name passed to the constructor matches the boto3 service name; usually `__class__.__name__` is used because the class is named after the service.
- Resource containers **must** be initialized in the constructor. They should be dictionaries, with the resource ARN (or an equivalent unique identifier) as key and the resource object as value.
- Resource discovery and attribute collection are parallelized using `self.__threading_call__`, typically by region or by resource, for performance. The first parameter of the method is the iterator: if not provided, calls are made per region; if present, it must be a list of the resources to process.
- Resource filtering is consistently enforced using the `self.audit_resources` attribute and the `is_resource_filtered` function, which detect when the user has limited the audit scope so that out-of-scope resources can be skipped in the service logic. The filter is normally applied before storing the resource in the service container, as follows: `if not self.audit_resources or (is_resource_filtered(resource["arn"], self.audit_resources)):`.
- All AWS resources are represented as Pydantic `BaseModel` classes, providing type safety and structured access to resource attributes.
- AWS API calls are wrapped in try/except blocks, with specific handling for `ClientError` and generic exceptions, always logging errors.
- If an ARN is not available for a resource, it can be constructed using string interpolation, always including partition, service, region, account, and resource ID.
- Tags and additional attributes that cannot be retrieved from the default call should be collected and stored for each resource using dedicated methods and threading, with the resource object list as iterator.
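A condensed, hypothetical sketch tying these patterns together. `Widget`, `describe_widgets`, and `_list_tags` are invented for illustration; only the base class, `is_resource_filtered`, and the logger are real Prowler components:

```python
from botocore.exceptions import ClientError
from pydantic import BaseModel

from prowler.lib.logger import logger
from prowler.lib.scan_filters.scan_filters import is_resource_filtered
from prowler.providers.aws.lib.service.service import AWSService


class Widget(BaseModel):
    arn: str
    name: str
    region: str
    tags: list = []


class WidgetService(AWSService):
    def __init__(self, provider):
        # The constructor name should match the boto3 service name;
        # "widget" is a placeholder here.
        super().__init__(__class__.__name__, provider)
        self.widgets = {}  # resource container keyed by ARN
        self.__threading_call__(self._describe_widgets)
        # Tags are not returned by the default call, so collect them per
        # resource in a second threaded pass over the resource objects.
        self.__threading_call__(self._list_tags, self.widgets.values())

    def _describe_widgets(self, regional_client):
        try:
            for item in regional_client.describe_widgets()["Widgets"]:  # hypothetical call
                arn = item["Arn"]
                # Skip resources outside the user-provided audit scope.
                if not self.audit_resources or is_resource_filtered(
                    arn, self.audit_resources
                ):
                    self.widgets[arn] = Widget(
                        arn=arn,
                        name=item["Name"],
                        region=regional_client.region,
                    )
        except ClientError as error:
            logger.error(f"{error.__class__.__name__}: {error}")
        except Exception as error:
            logger.error(f"{error.__class__.__name__}: {error}")

    def _list_tags(self, widget):
        ...
```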
## Specific Patterns in AWS Checks

The AWS checks pattern is described in the [checks page](./checks.md). You can find all the currently implemented checks:

- Directly in the code, within each service folder, where each check has its own folder named after the check. (e.g. [`prowler/providers/aws/services/s3/s3_bucket_acl_prohibited/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/aws/services/s3/s3_bucket_acl_prohibited))
- In the [Prowler Hub](https://hub.prowler.com/), for a more human-readable view.

The best reference to understand how to implement a new check is to follow the [check creation documentation](./checks.md#creating-a-check) and take other similar checks as reference.

### Check Report Class

The `Check_Report_AWS` class models a single finding for an AWS resource in a check report. It is defined in [`prowler/lib/check/models.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py) and inherits from the generic `Check_Report` base class.

#### Purpose

`Check_Report_AWS` extends the base report structure with AWS-specific fields, enabling detailed tracking of the resource, ARN, and region associated with each finding.

#### Constructor and Attribute Population

When you instantiate `Check_Report_AWS`, you must provide the check metadata and a resource object. The class will attempt to automatically populate its AWS-specific attributes from the resource, using the following logic (in order of precedence):

- **`resource_id`**:
    - Uses `resource.id` if present.
    - Otherwise, uses `resource.name` if present.
    - Defaults to an empty string if neither is available.

- **`resource_arn`**:
    - Uses `resource.arn` if present.
    - Defaults to an empty string if no ARN is present in the resource object.

- **`region`**:
    - Uses `resource.region` if present.
    - Defaults to an empty string if no region is present in the resource object.

If the resource object does not contain the required attributes, you must set them manually in the check logic.

Other attributes are inherited from the `Check_Report` class, of which you **always** have to set the `status` and `status_extended` attributes in the check logic.

#### Example Usage

```python
report = Check_Report_AWS(
    metadata=check_metadata,
    resource=resource_object
)
report.status = "PASS"
report.status_extended = "Resource is compliant."
```
@@ -0,0 +1,121 @@
# Azure Provider

On this page you can find all the details about the [Microsoft Azure](https://azure.microsoft.com/) provider implementation in Prowler.

By default, Prowler will audit all the subscriptions it is able to list in the Microsoft Entra tenant, plus the tenant's Entra ID service. To configure it, follow the [getting started](../index.md#azure) page.

## Azure Provider Classes Architecture

The Azure provider implementation follows the general [Provider structure](./provider.md). This section focuses on the Azure-specific implementation, highlighting how the generic provider concepts are realized for Azure in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see the [Provider documentation](./provider.md). The next subsections list the main classes of the Azure provider.

### `AzureProvider` (Main Class)

- **Location:** [`prowler/providers/azure/azure_provider.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/azure/azure_provider.py)
- **Base Class:** Inherits from `Provider` (see [base class details](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/common/provider.py)).
- **Purpose:** Central orchestrator for Azure-specific logic, session management, credential validation, and configuration.
- **Key Azure Responsibilities:**
    - Initializes and manages Azure sessions (supports Service Principal, CLI, Browser, and Managed Identity authentication).
    - Validates credentials and sets up the Azure identity context.
    - Loads and manages configuration, mutelist, and fixer settings.
    - Retrieves subscription metadata.
    - Provides properties and methods for downstream Azure service classes to access session, identity, and configuration data.

### Data Models

- **Location:** [`prowler/providers/azure/models.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/azure/models.py)
- **Purpose:** Define structured data for Azure identity, session, region configuration, and subscription info.
- **Key Azure Models:**
    - `AzureIdentityInfo`: Holds Azure identity metadata, including tenant ID, domain, subscription names and IDs, and locations.
    - `AzureRegionConfig`: Stores the specific region that will be audited, which can be Global, US Government, or China.
    - `AzureSubscription`: Represents a subscription with ID, display name, and state.

### `AzureService` (Service Base Class)

- **Location:** [`prowler/providers/azure/lib/service/service.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/azure/lib/service/service.py)
- **Purpose:** Abstract base class that all Azure service-specific classes inherit from. This implements the generic service pattern (described in the [service page](./services.md#service-base-class)) specifically for Azure.
- **Key Azure Responsibilities:**
    - Receives an `AzureProvider` instance to access session, identity, and configuration.
    - Manages clients for all services by subscription.
    - Exposes common audit context (`subscriptions`, `locations`, `audit_config`, `fixer_config`) to subclasses.

### Exception Handling

- **Location:** [`prowler/providers/azure/exceptions/exceptions.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/azure/exceptions/exceptions.py)
- **Purpose:** Custom exception classes for Azure-specific error handling, such as credential, region, and session errors.

### Session and Utility Helpers

- **Location:** [`prowler/providers/azure/lib/`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/azure/lib/)
- **Purpose:** Helpers for argument parsing, region setup, mutelist management, and other cross-cutting concerns.

## Specific Patterns in Azure Services

The generic service pattern is described in the [service page](./services.md#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:

- Directly in the code, at [`prowler/providers/azure/services/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/azure/services)
- In the [Prowler Hub](https://hub.prowler.com/), for a more human-readable view.

The best reference to understand how to implement a new service is to follow the [service implementation documentation](./services.md#adding-a-new-service) and take already implemented services as reference. The next subsection lists common patterns that are used across all Azure services; a condensed sketch follows the list.

### Azure Service Common Patterns

- Services communicate with Azure using the Azure Python SDK, mainly through the Azure Management Client (except for the Microsoft Entra ID service, which uses the Microsoft Graph API); you can find the documentation for all the management services [here](https://learn.microsoft.com/en-us/python/api/overview/azure/?view=azure-python).
- Every Azure service class inherits from `AzureService`, ensuring access to session, identity, configuration, and client utilities.
- The constructor (`__init__`) always calls `super().__init__` with the service's Azure Management Client and the Prowler provider object (e.g. `super().__init__(WebSiteManagementClient, provider)`).
- Resource containers **must** be initialized in the constructor, and they should be dictionaries with the subscription ID as key and, as value, a dictionary mapping resource IDs to resource objects.
- All Azure resources are represented as Pydantic `BaseModel` classes, providing type safety and structured access to resource attributes. Some are represented as dataclasses for legacy reasons, but new resources should be represented as Pydantic `BaseModel` classes.
- Azure SDK functions are wrapped in try/except blocks, with specific handling for errors, always logging them. It is a best practice to create a custom function for every Azure SDK call, so that errors can be handled in a more specific way.
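A condensed, hypothetical sketch of these Azure patterns, assuming `AzureService` exposes a per-subscription `clients` mapping as described above. `Widget`, `WidgetManagementClient`, and the `widgets.list()` call are invented placeholders, not a real Azure SDK API:

```python
from pydantic import BaseModel

from prowler.lib.logger import logger
from prowler.providers.azure.lib.service.service import AzureService


class Widget(BaseModel):
    resource_id: str
    name: str
    location: str


class WidgetService(AzureService):
    def __init__(self, provider):
        # Pass the Azure Management Client class for this service.
        super().__init__(WidgetManagementClient, provider)  # hypothetical client class
        # Container keyed by subscription; each value maps resource IDs to objects.
        self.widgets = self._get_widgets()

    def _get_widgets(self):
        widgets = {}
        for subscription, client in self.clients.items():
            widgets[subscription] = {}
            try:
                # Dedicated function per SDK call keeps error handling specific.
                for widget in client.widgets.list():  # hypothetical SDK call
                    widgets[subscription][widget.id] = Widget(
                        resource_id=widget.id,
                        name=widget.name,
                        location=widget.location,
                    )
            except Exception as error:
                logger.error(
                    f"Subscription: {subscription} -- {error.__class__.__name__}: {error}"
                )
        return widgets
```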
## Specific Patterns in Azure Checks

The Azure checks pattern is described in the [checks page](./checks.md). You can find all the currently implemented checks:

- Directly in the code, within each service folder, where each check has its own folder named after the check. (e.g. [`prowler/providers/azure/services/storage/storage_blob_public_access_level_is_disabled/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/azure/services/storage/storage_blob_public_access_level_is_disabled))
- In the [Prowler Hub](https://hub.prowler.com/), for a more human-readable view.

The best reference to understand how to implement a new check is the [Azure check implementation documentation](./checks.md#creating-a-check), taking other similar checks as reference.

### Check Report Class

The `Check_Report_Azure` class models a single finding for an Azure resource in a check report. It is defined in [`prowler/lib/check/models.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py) and inherits from the generic `Check_Report` base class.

#### Purpose

`Check_Report_Azure` extends the base report structure with Azure-specific fields, enabling detailed tracking of the resource, resource ID, name, subscription, and location associated with each finding.

#### Constructor and Attribute Population

When you instantiate `Check_Report_Azure`, you must provide the check metadata and a resource object. The class will attempt to automatically populate its Azure-specific attributes from the resource, using the following logic (in order of precedence):

- **`resource_id`**:
    - Uses `resource.id` if present.
    - Otherwise, uses `resource.resource_id` if present.
    - Defaults to an empty string if not available.

- **`resource_name`**:
    - Uses `resource.name` if present.
    - Otherwise, uses `resource.resource_name` if present.
    - Defaults to an empty string if not available.

- **`subscription`**:
    - Defaults to an empty string; it **must** be set in the check logic.

- **`location`**:
    - Uses `resource.location` if present.
    - Defaults to an empty string if not available.

If the resource object does not contain the required attributes, you must set them manually in the check logic.

Other attributes are inherited from the `Check_Report` class, of which you **always** have to set the `status` and `status_extended` attributes in the check logic.

#### Example Usage

```python
report = Check_Report_Azure(
    metadata=check_metadata,
    resource=resource_object
)
report.subscription = subscription_id
report.status = "PASS"
report.status_extended = "Resource is compliant."
```
+231
-262
@@ -1,370 +1,339 @@
|
||||
# Create a new Check for a Provider
|
||||
# Prowler Checks
|
||||
|
||||
Here you can find how to create new checks for Prowler.
|
||||
|
||||
**To create a check is required to have a Prowler provider service already created, so if the service is not present or the attribute you want to audit is not retrieved by the service, please refer to the [Service](./services.md) documentation.**
|
||||
This guide explains how to create new checks in Prowler.
|
||||
|
||||
## Introduction
|
||||
|
||||
The checks are the fundamental piece of Prowler. A check is a simply piece of code that ensures if something is configured against cybersecurity best practices. Then the check generates a finding with the result and includes the check's metadata to give the user more contextual information about the result, the risk and how to remediate it.
|
||||
Checks are the core component of Prowler. A check is a piece of code designed to validate whether a configuration aligns with cybersecurity best practices. Execution of a check yields a finding, which includes the result and contextual metadata (e.g., outcome, risks, remediation).
|
||||
|
||||
To create a new check for a supported Prowler provider, you will need to create a folder with the check name inside the specific service for the selected provider.
|
||||
### Creating a Check
|
||||
|
||||
We are going to use the `ec2_ami_public` check from the `AWS` provider as an example. So the folder name will be `prowler/providers/aws/services/ec2/ec2_ami_public` (following the format `prowler/providers/<provider>/services/<service>/<check_name>`), with the name of check following the pattern: `service_subservice_resource_action`.
|
||||
To create a new check:
|
||||
|
||||
- Prerequisites: A Prowler provider and service must exist. Verify support and check for pre-existing checks via [Prowler Hub](https://hub.prowler.com). If the provider or service is not present, please refer to the [Provider](./provider.md) and [Service](./services.md) documentation for creation instructions.
|
||||
|
||||
- Navigate to the service directory. The path should be as follows: `prowler/providers/<provider>/services/<service>`.
|
||||
|
||||
- Create a check-specific folder. The path should follow this pattern: `prowler/providers/<provider>/services/<service>/<check_name>`. Adhere to the [Naming Format for Checks](#naming-format-for-checks).
|
||||
|
||||
- Populate the folder with files as specified in [File Creation](#file-creation).
|
||||
|
||||
### Naming Format for Checks
|
||||
|
||||
Checks must be named following the format: `service_subservice_resource_action`.
|
||||
|
||||
The name components are:
|
||||
|
||||
- `service` – The main service being audited (e.g., ec2, entra, iam, etc.)
|
||||
- `subservice` – An individual component or subset of functionality within the service that is being audited. This may correspond to a shortened version of the class attribute accessed within the check. If there is no subservice, just omit.
|
||||
- `resource` – The specific resource type being evaluated (e.g., instance, policy, role, etc.)
|
||||
- `action` – The security aspect or configuration being checked (e.g., public, encrypted, enabled, etc.)
|
||||
|
||||
### File Creation
|
||||
|
||||
Each check in Prowler follows a straightforward structure. Within the newly created folder, three files must be added to implement the check logic:
|
||||
|
||||
- `__init__.py` (empty file) – Ensures Python treats the check folder as a package.
|
||||
- `<check_name>.py` (code file) – Contains the check logic, following the prescribed format. Please refer to the [prowler's check code structure](./checks.md#prowlers-check-code-structure) for more information.
|
||||
- `<check_name>.metadata.json` (metadata file) – Defines the check's metadata for contextual information. Please refer to the [check metadata](./checks.md#) for more information.
|
||||
|
||||
## Prowler's Check Code Structure
|
||||
|
||||
Prowler's check structure is designed for clarity and maintainability. It follows a dynamic loading approach based on predefined paths, ensuring seamless integration of new checks into a provider's service without additional manual steps.
|
||||
|
||||
Below the code for a generic check is presented. It is strongly recommended to consult other checks from the same provider and service to understand provider-specific details and patterns. This will help ensure consistency and proper implementation of provider-specific requirements.
|
||||
|
||||
Report fields are the most dependent on the provider, consult the `CheckReport<Provider>` class for more information on what can be included in the report [here](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py).
|
||||
|
||||
???+ note
|
||||
A subservice is an specific component of a service that is gonna be audited. Sometimes it could be the shortened name of the class attribute that is gonna be accessed in the check.
|
||||
Legacy providers (AWS, Azure, GCP, Kubernetes) follow the `Check_Report_<Provider>` naming convention. This is not recommended for current instances. Newer providers adopt the `CheckReport<Provider>` naming convention. Learn more at [Prowler Code](https://github.com/prowler-cloud/prowler/tree/master/prowler/lib/check/models.py).
|
||||
|
||||
Inside that folder, we need to create three files:
|
||||
```python title="Generic Check Class"
|
||||
# Required Imports
|
||||
# Import the base Check class and the provider-specific CheckReport class
|
||||
from prowler.lib.check.models import Check, CheckReport<Provider>
|
||||
# Import the provider service client
|
||||
from prowler.providers.<provider>.services.<service>.<service>_client import <service>_client
|
||||
|
||||
- An empty `__init__.py`: to make Python treat this check folder as a package.
|
||||
- A `check_name.py` with the above format containing the check's logic. Refer to the [check](./checks.md#check)
|
||||
- A `check_name.metadata.json` containing the check's metadata. Refer to the [check metadata](./checks.md#check-metadata)
|
||||
# Defining the Check Class
|
||||
# Each check must be implemented as a Python class with the same name as its corresponding file.
|
||||
# The class must inherit from the Check base class.
|
||||
class <check_name>(Check):
|
||||
"""Short description of what is being checked"""
|
||||
|
||||
## Check
|
||||
|
||||
The Prowler's check structure is very simple and following it there is nothing more to do to include a check in a provider's service because the load is done dynamically based on the paths.
|
||||
|
||||
The following is the code for the `ec2_ami_public` check:
|
||||
```python title="Check Class"
|
||||
# At the top of the file we need to import the following:
|
||||
# - Check class which is in charge of the following:
|
||||
# - Retrieve the check metadata and expose the `metadata()`
|
||||
# to return a JSON representation of the metadata,
|
||||
# read more at Check Metadata Model down below.
|
||||
# - Enforce that each check requires to have the `execute()` function
|
||||
from prowler.lib.check.models import Check, Check_Report_AWS
|
||||
|
||||
# Then you have to import the provider service client
|
||||
# read more at the Service documentation.
|
||||
from prowler.providers.aws.services.ec2.ec2_client import ec2_client
|
||||
|
||||
# For each check we need to create a python class called the same as the
|
||||
# file which inherits from the Check class.
|
||||
class ec2_ami_public(Check):
|
||||
"""ec2_ami_public verifies if an EC2 AMI is publicly shared"""
|
||||
|
||||
# Then, within the check's class we need to create the "execute(self)"
|
||||
# function, which is enforce by the "Check" class to implement
|
||||
# the Check's interface and let Prowler to run this check.
|
||||
def execute(self):
|
||||
"""Execute <check short description>
|
||||
|
||||
# Inside the execute(self) function we need to create
|
||||
# the list of findings initialised to an empty list []
|
||||
Returns:
|
||||
List[CheckReport<Provider>]: A list of reports containing the result of the check.
|
||||
"""
|
||||
findings = []
|
||||
|
||||
# Then, using the service client we need to iterate by the resource we
|
||||
# want to check, in this case EC2 AMIs stored in the
|
||||
# "ec2_client.images" object.
|
||||
for image in ec2_client.images:
|
||||
|
||||
# Once iterating for the images, we have to intialise
|
||||
# the Check_Report_AWS class passing the check's metadata
|
||||
# using the "metadata" function explained above.
|
||||
report = Check_Report_AWS(self.metadata())
|
||||
|
||||
# For each Prowler check we MUST fill the following
|
||||
# Check_Report_AWS fields:
|
||||
# - region
|
||||
# - resource_id
|
||||
# - resource_arn
|
||||
# - resource_tags
|
||||
# - status
|
||||
# - status_extended
|
||||
report.region = image.region
|
||||
report.resource_id = image.id
|
||||
report.resource_arn = image.arn
|
||||
# The resource_tags should be filled if the resource has the ability
|
||||
# of having tags, please check the service first.
|
||||
report.resource_tags = image.tags
|
||||
|
||||
# Then we need to create the business logic for the check
|
||||
# which always should be simple because the Prowler service
|
||||
# must do the heavy lifting and the check should be in charge
|
||||
# of parsing the data provided
|
||||
# Iterate over the target resources using the provider service client
|
||||
for resource in <service>_client.<resources>:
|
||||
# Initialize the provider-specific report class, passing metadata and resource
|
||||
report = Check_Report_<Provider>(metadata=self.metadata(), resource=resource)
|
||||
# Set required fields and implement check logic
|
||||
report.status = "PASS"
|
||||
report.status_extended = f"EC2 AMI {image.id} is not public."
|
||||
|
||||
# In this example each "image" object has a boolean attribute
|
||||
# called "public" to set if the AMI is publicly shared
|
||||
if image.public:
|
||||
report.status_extended = f"<Description about why the resource is compliant>"
|
||||
# If some of the information needed for the report is not inside the resource, it can be set it manually here.
|
||||
# This depends on the provider and the resource that is being audited.
|
||||
# report.region = resource.region
|
||||
# report.resource_tags = getattr(resource, "tags", [])
|
||||
# ...
|
||||
# Example check logic (replace with actual logic):
|
||||
if <non_compliant_condition>:
|
||||
report.status = "FAIL"
|
||||
report.status_extended = (
|
||||
f"EC2 AMI {image.id} is currently public."
|
||||
)
|
||||
|
||||
# Then at the same level as the "report"
|
||||
# object we need to append it to the findings list.
|
||||
report.status_extended = f"<Description about why the resource is not compliant>"
|
||||
findings.append(report)
|
||||
|
||||
# Last thing to do is to return the findings list to Prowler
|
||||
return findings
|
||||
```
|
||||
|
||||
### Check Status
|
||||
### Data Requirements for Checks in Prowler
|
||||
|
||||
All the checks MUST fill the `report.status` and `report.status_extended` with the following criteria:
|
||||
One of the most important aspects when creating a new check is ensuring that all required data is available from the service client. Often, default API calls are insufficient. Extending the service class with new methods or resource attributes may be required to fetch and store requisite data.
|
||||
|
||||
- Status -- `report.status`
|
||||
- `PASS` --> If the check is passing against the configured value.
|
||||
- `FAIL` --> If the check is failing against the configured value.
|
||||
- `MANUAL` --> This value cannot be used unless a manual operation is required in order to determine if the `report.status` is whether `PASS` or `FAIL`.
|
||||
- Status Extended -- `report.status_extended`
|
||||
- MUST end in a dot `.`
|
||||
- MUST include the service audited with the resource and a brief explanation of the result generated, e.g.: `EC2 AMI ami-0123456789 is not public.`
|
||||
### Statuses for Checks in Prowler
|
||||
|
||||
### Check Region
|
||||
Required Fields: status and status\_extended
|
||||
|
||||
All the checks MUST fill the `report.region` with the following criteria:
|
||||
Each check **must** populate the `report.status` and `report.status_extended` fields according to the following criteria:
|
||||
|
||||
- If the audited resource is regional use the `region` (the name changes depending on the provider: `location` in Azure and GCP and `namespace` in K8s) attribute within the resource object.
|
||||
- If the audited resource is global use the `service_client.region` within the service client object.
|
||||
- Status field: `report.status`
|
||||
- `PASS` – Assigned when the check confirms compliance with the configured value.
|
||||
- `FAIL` – Assigned when the check detects non-compliance with the configured value.
|
||||
- `MANUAL` – This status must not be used unless manual verification is necessary to determine whether the status (`report.status`) passes (`PASS`) or fails (`FAIL`).
|
||||
|
||||
### Check Severity
|
||||
- Status extended field: `report.status_extended`
|
||||
- It **must** end with a period (`.`).
|
||||
- It **must** include the audited service, the resource, and a concise explanation of the check result, for instance: `EC2 AMI ami-0123456789 is not public.`.
|
||||
|
||||
The severity of the checks are defined in the metadata file with the `Severity` field. The severity is always in lowercase and can be one of the following values:
|
||||
### Prowler's Check Severity Levels
|
||||
|
||||
- `critical`
|
||||
- `high`
|
||||
- `medium`
|
||||
- `low`
|
||||
- `informational`
|
||||
The severity of each check is defined in the metadata file using the `Severity` field. Severity values are always lowercase and must be one of the predefined categories below.
|
||||
|
||||
### Prowler's Check Severity Levels

The severity of each check is defined in the metadata file using the `Severity` field. Severity values are always lowercase and must be one of the following predefined categories:

- `critical` – Issue that must be addressed immediately.

- `high` – Issue that should be addressed as soon as possible.

- `medium` – Issue that should be addressed within a reasonable timeframe.

- `low` – Issue that can be addressed in the future.

- `informational` – Not an issue but provides valuable information.

If the check involves multiple scenarios that may alter its severity, adjustments can be made dynamically within the check's logic using the `report.check_metadata.Severity` attribute:

```python
if <generic_condition_1>:
    report.status = "PASS"
    report.check_metadata.Severity = "informational"
    report.status_extended = f"<Resource> is compliant with <requirement>."
elif <generic_condition_2>:
    report.status = "FAIL"
    report.check_metadata.Severity = "low"
    report.status_extended = f"<Resource> is not compliant with <requirement>: <reason>."
elif <generic_condition_3>:
    report.status = "FAIL"
    report.check_metadata.Severity = "medium"
    report.status_extended = f"<Resource> is not compliant with <requirement>: <reason>."
elif <generic_condition_4>:
    report.status = "FAIL"
    report.check_metadata.Severity = "high"
    report.status_extended = f"<Resource> is not compliant with <requirement>: <reason>."
else:
    report.status = "FAIL"
    report.check_metadata.Severity = "critical"
    report.status_extended = f"<Resource> is not compliant with <requirement>: <critical reason>."
```
### Resource Identification in Prowler

Each check **must** populate the report with a unique identifier for the audited resource. These identifiers depend on the provider and the resource being audited. The criteria for each provider are:

- AWS

    - Amazon Resource ID — `report.resource_id`.

        - The resource identifier. This is the name of the resource, the ID of the resource, or a resource path. Some resource identifiers include a parent resource (sub-resource-type/parent-resource/sub-resource) or a qualifier such as a version (resource-type:resource-name:qualifier).

        - If the resource ID cannot be retrieved directly from the audited resource, it can be extracted from the ARN. It is the last part of the ARN after the last slash (`/`) or colon (`:`).

        - If the audited resource is the AWS account itself, use the AWS account number as the `resource_id` and the account root ARN as the `resource_arn`.

        - If no actual resource to audit exists, this format can be used: `<resource_type>/unknown`.

    - Amazon Resource Name — `report.resource_arn` (see the sketch after this provider list).

        - The [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html) of the audited entity.

        - If the ARN cannot be retrieved directly from the audited resource, construct a valid ARN using the `resource_id` component as the audited entity. Examples:

            - Bedrock — `arn:<partition>:bedrock:<region>:<account-id>:model-invocation-logging`.

            - DirectConnect — `arn:<partition>:directconnect:<region>:<account-id>:dxcon`.

        - If no actual resource to audit exists, this format can be used: `arn:<partition>:<service>:<region>:<account-id>:<resource_type>/unknown`. Examples:

            - AWS Security Hub — `arn:<partition>:security-hub:<region>:<account-id>:hub/unknown`.

            - Access Analyzer — `arn:<partition>:access-analyzer:<region>:<account-id>:analyzer/unknown`.

            - GuardDuty — `arn:<partition>:guardduty:<region>:<account-id>:detector/unknown`.
- GCP

    - Resource ID — `report.resource_id`.

        - Resource ID represents the full, [unambiguous path to a resource](https://google.aip.dev/122#full-resource-names), known as the full resource name. Typically, it follows the format: `//{api_service/resource_path}`.

        - If the resource ID cannot be retrieved directly from the audited resource, by default the resource name is used.

    - Resource Name — `report.resource_name`.

        - Resource Name usually refers to the name of a resource within its service.

- Azure

    - Resource ID — `report.resource_id`.

        - Resource ID represents the full Azure Resource Manager path to a resource, which follows the format: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}`.

    - Resource Name — `report.resource_name`.

        - Resource Name usually refers to the name of a resource within its service.

        - If the [resource name](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/resource-name-rules) cannot be retrieved directly from the audited resource, the last part of the resource ID can be used.

- Kubernetes

    - Resource ID — `report.resource_id`.

        - The UID of the Kubernetes object. This is a system-generated string that uniquely identifies the object within the cluster for its entire lifetime. See [Kubernetes Object Names and IDs - UIDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids).

    - Resource Name — `report.resource_name`.

        - The name of the Kubernetes object. This is a client-provided string that must be unique for the resource type within a namespace (for namespaced resources) or cluster (for cluster-scoped resources). Names typically follow DNS subdomain or label conventions. See [Kubernetes Object Names and IDs - Names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names).

- M365

    - Resource ID — `report.resource_id`.

        - If the audited resource has a globally unique identifier such as a `guid`, use it as the `resource_id`.

        - If no `guid` exists, use another unique and relevant identifier for the resource, such as the tenant domain, the internal policy ID, or a representative string following the format `<resource_type>/<name_or_id>`.

    - Resource Name — `report.resource_name`.

        - Use the visible or descriptive name of the audited resource. If no explicit name is available, use a clear description of the resource or configuration being evaluated.

    - Examples:

        - For an organization:

            - `resource_id`: Organization GUID

            - `resource_name`: Organization name

        - For a policy:

            - `resource_id`: Unique policy ID

            - `resource_name`: Policy display name

        - For global configurations:

            - `resource_id`: Tenant domain or representative string (e.g., "userSettings")

            - `resource_name`: Description of the configuration (e.g., "SharePoint Settings")

- GitHub

    - Resource ID — `report.resource_id`.

        - The ID of the GitHub resource. This is a system-generated integer that uniquely identifies the resource within the GitHub platform.

    - Resource Name — `report.resource_name`.

        - The name of the GitHub resource. In the case of a repository, this is just the repository name. For full repository names use the resource `full_name`.
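As a brief AWS illustration of the rules above, here is a hedged sketch of deriving `resource_id` and `resource_arn`, falling back to the `unknown` convention when no real resource exists. The `analyzers` container is hypothetical; the `audited_partition`, `audited_account`, and `region` attributes mirror the conventions used by AWS service clients in this guide:

```python
if access_analyzer_client.analyzers:
    for analyzer in access_analyzer_client.analyzers.values():
        report.resource_arn = analyzer.arn
        # The resource ID is the last part of the ARN after the last "/" or ":"
        report.resource_id = analyzer.arn.split("/")[-1].split(":")[-1]
else:
    # No real resource exists to audit, so follow the "unknown" convention
    report.resource_id = "analyzer/unknown"
    report.resource_arn = (
        f"arn:{access_analyzer_client.audited_partition}:access-analyzer:"
        f"{access_analyzer_client.region}:"
        f"{access_analyzer_client.audited_account}:analyzer/unknown"
    )
```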
### Using the Audit Configuration

Prowler has a [configuration file](../tutorials/configuration_file.md) which is used to pass certain configuration values to the checks. For example:

```python title="ec2_securitygroup_with_many_ingress_egress_rules.py"
class ec2_securitygroup_with_many_ingress_egress_rules(Check):
    def execute(self):
        findings = []

        # max_security_group_rules, default: 50
        max_security_group_rules = ec2_client.audit_config.get(
            "max_security_group_rules", 50
        )
        for security_group_arn, security_group in ec2_client.security_groups.items():
            ...
```

We use the `audit_config` object to retrieve the value of `max_security_group_rules`, falling back to the default value of 50 if the configuration value is not present.

The configuration file is located at [`prowler/config/config.yaml`](https://github.com/prowler-cloud/prowler/blob/master/prowler/config/config.yaml) and is used to pass certain configuration values to the checks. For example:

```yaml title="config.yaml"
# AWS Configuration
aws:
  # aws.ec2_securitygroup_with_many_ingress_egress_rules
  # The default value is 50 rules
  max_security_group_rules: 50
```

This `audit_config` object is a Python dictionary that stores values read from the configuration file. It can be accessed by the check using the `audit_config` attribute of the service client.

To use a new configuration value, first check whether it is present in the configuration file. If it is not, add it to the `config.yaml` file and then read it from the check.

???+ note
    Always use the `dictionary.get(value, default)` syntax to ensure a default value is set when the configuration value is not present.
## Metadata Structure for Prowler Checks

Each Prowler check must include a metadata file named `<check_name>.metadata.json`, located in the check's directory. This file supplies crucial information for execution, reporting, and context.

### Example Metadata File

Below is a generic example of a check metadata file. **Do not include comments in actual JSON files.**

```json
{
  "Provider": "aws",
  "CheckID": "example_check_id",
  "CheckTitle": "Example Check Title",
  "CheckType": ["Infrastructure Security"],
  "ServiceName": "ec2",
  "SubServiceName": "ami",
  "ResourceIdTemplate": "arn:partition:service:region:account-id:resource-id",
  "Severity": "critical",
  "ResourceType": "Other",
  "Description": "Example description of the check.",
  "Risk": "Example risk if the check fails.",
  "RelatedUrl": "https://example.com",
  "Remediation": {
    "Code": {
      "CLI": "example CLI command",
      "NativeIaC": "",
      "Other": "",
      "Terraform": ""
    },
    "Recommendation": {
      "Text": "Example recommendation text.",
      "Url": "https://example.com/remediation"
    }
  },
  "Categories": ["example-category"],
  "DependsOn": [],
  "RelatedTo": [],
  "Notes": ""
}
```

### Metadata Fields and Their Purpose

- **Provider** — The Prowler provider related to the check. The name **must** be lowercase and match the provider folder name. For supported providers refer to [Prowler Hub](https://hub.prowler.com/check) or directly to [Prowler Code](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers).

- **CheckID** — The unique identifier for the check inside the provider. This field **must** match the name of the check's folder, Python file, and JSON metadata file. For more information about the naming refer to the [Naming Format for Checks](#naming-format-for-checks) section.

- **CheckTitle** — A concise, descriptive title for the check.

- **CheckType** — *For now this field is only standardized for the AWS provider*.

    - For AWS this field must follow the [AWS Security Hub Types](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-required-attributes.html#Types) format. The common pattern to follow is `namespace/category/classifier`; refer to the linked documentation for the valid values of these fields.

- **ServiceName** — The name of the provider service being audited. This field **must** be in lowercase and match the service folder name. For supported services refer to [Prowler Hub](https://hub.prowler.com/check) or directly to [Prowler Code](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers).

- **SubServiceName** — The subservice or resource within the service, if applicable. For more information refer to the [Naming Format for Checks](#naming-format-for-checks) section.

- **ResourceIdTemplate** — A template for the unique resource identifier. For more information refer to the [Prowler's Resource Identification](#prowlers-resource-identification) section.

- **Severity** — The severity of the finding if the check fails. This field **must** be lowercase and one of: `critical`, `high`, `medium`, `low`, or `informational`. For more information about the severity levels refer to the [Prowler's Check Severity Levels](#prowlers-check-severity-levels) section.

- **ResourceType** — The type of resource being audited. *For now this field is only standardized for the AWS provider*.

    - For AWS use the [Security Hub resource types](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-resources.html) or, if not available, the PascalCase version of the [CloudFormation type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) (e.g., `AwsEc2Instance`). Use "Other" if no match exists.

- **Description** — A short description of what the check does.

- **Risk** — The risk or impact if the check fails, explaining why the finding matters.

- **RelatedUrl** — A URL to official documentation or further reading about the check's purpose. If no official documentation is available, use the risk and recommendation text from trusted third-party sources.

- **Remediation** — Guidance for fixing a failed check, including:

    - **Code** — Remediation commands or code snippets for CLI, Terraform, native IaC, or other tools like the Web Console.

    - **Recommendation** — A human-readable recommendation. Actual steps are not required here; provide a general recommendation on what to do to fix the check.

- **Categories** — One or more categories for grouping checks in execution (e.g., `internet-exposed`). For the current list of categories, refer to the [Prowler Hub](https://hub.prowler.com/check).

- **DependsOn** — Currently not used.

- **RelatedTo** — Currently not used.

- **Notes** — Any additional information not covered by other fields.

### Remediation Code Guidelines

When providing remediation steps, reference the following sources:

- Official provider documentation.
- [Prowler Checks Remediation Index](https://docs.prowler.com/checks/checks-index)
- [TrendMicro Cloud One Conformity](https://www.trendmicro.com/cloudoneconformity)
- [CloudMatos Remediation Repository](https://github.com/cloudmatos/matos/tree/master/remediations)

### Python Model Reference

The metadata structure is enforced in code using a Pydantic model. For reference, see [`CheckMetadata`](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py).
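As an illustration of how such a Pydantic model validates a metadata file, here is a minimal, hypothetical sketch. The `CheckMetadataSketch` class is a simplified stand-in; the real model in `prowler/lib/check/models.py` defines the full set of fields and validators:

```python
from pydantic import BaseModel, ValidationError


class CheckMetadataSketch(BaseModel):
    """Simplified stand-in for Prowler's check metadata model."""

    Provider: str
    CheckID: str
    CheckTitle: str
    Severity: str


try:
    # Parse and validate a metadata file (Pydantic v1 style)
    metadata = CheckMetadataSketch.parse_file("example_check_id.metadata.json")
    print(metadata.Severity)
except ValidationError as error:
    print(f"Invalid metadata: {error}")
```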
# Debugging in Prowler

Debugging in Prowler simplifies the development process, allowing developers to efficiently inspect and resolve unexpected issues during execution.

## Debugging with Visual Studio Code

Visual Studio Code (also referred to as VSCode) provides an integrated debugger for executing and analyzing Prowler code. Refer to the official VSCode debugger [documentation](https://code.visualstudio.com/docs/editor/debugging) for detailed instructions.

### Debugging Configuration Example

The following file is an example of a [debugging configuration](https://code.visualstudio.com/docs/editor/debugging#_launch-configurations) file for [Visual Studio Code](https://code.visualstudio.com/). The original example is abridged here; the snippet below is a minimal sketch, and the module name and arguments shown are illustrative.

This file must be placed inside the *.vscode* directory and named *launch.json*:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Prowler",
            "type": "python",
            "request": "launch",
            "module": "prowler",
            "args": ["aws"],
            "console": "integratedTerminal",
            "justMyCode": false
        }
    ]
}
```
## Contributing to Documentation

Prowler documentation is built using `mkdocs`, allowing contributors to easily add or enhance documentation.

### Installation and Setup

Install all necessary dependencies using `poetry install --with docs`.

1. Install `mkdocs` using your preferred package manager.

2. Run the documentation locally:

    - Navigate to the `prowler` repository folder.
    - Start the local documentation server by running `mkdocs serve`.
    - Open `http://localhost:8000` in your browser to view live updates.

3. Make documentation changes:

    - Edit existing Markdown (`.md`) files inside `prowler/docs` or add new documents.
    - To add new sections or files, update the `mkdocs.yaml` file located in the root directory of Prowler's repository.

4. Submit changes:

    Once documentation updates are complete, submit a pull request for review. The Prowler team will assess and merge contributions.

Your efforts help improve Prowler documentation—thank you for contributing!
# Google Cloud Provider

This page details the [Google Cloud Platform (GCP)](https://cloud.google.com/) provider implementation in Prowler.

By default, Prowler will audit all the GCP projects that the authenticated identity can access. To configure it, follow the [getting started](../index.md#google-cloud) page.

## GCP Provider Classes Architecture

The GCP provider implementation follows the general [Provider structure](./provider.md). This section focuses on the GCP-specific implementation, highlighting how the generic provider concepts are realized for GCP in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](./provider.md).

### Main Class

- **Location:** [`prowler/providers/gcp/gcp_provider.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/gcp/gcp_provider.py)
- **Base Class:** Inherits from `Provider` (see [base class details](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/common/provider.py)).
- **Purpose:** Central orchestrator for GCP-specific logic, session management, credential validation, project and organization discovery, and configuration.
- **Key GCP Responsibilities:**
    - Initializes and manages GCP sessions (supports Application Default Credentials, Service Account, OAuth, and impersonation).
    - Validates credentials and sets up the GCP identity context.
    - Loads and manages configuration, mutelist, and fixer settings.
    - Discovers accessible GCP projects and organization metadata.
    - Provides properties and methods for downstream GCP service classes to access session, identity, and configuration data.

### Data Models

- **Location:** [`prowler/providers/gcp/models.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/gcp/models.py)
- **Purpose:** Define structured data for GCP identity, project, and organization info.
- **Key GCP Models:**
    - `GCPIdentityInfo`: Holds GCP identity metadata, such as the profile name.
    - `GCPOrganization`: Represents a GCP organization with ID, name, and display name.
    - `GCPProject`: Represents a GCP project with number, ID, name, organization, labels, and lifecycle state.

### `GCPService` (Service Base Class)

- **Location:** [`prowler/providers/gcp/lib/service/service.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/gcp/lib/service/service.py)
- **Purpose:** Abstract base class that all GCP service-specific classes inherit from. This implements the generic service pattern (described in [service page](./services.md#service-base-class)) specifically for GCP.
- **Key GCP Responsibilities:**
    - Receives a `GcpProvider` instance to access session, identity, and configuration.
    - Manages clients for all services by project.
    - Filters projects to only those with the relevant API enabled.
    - Provides the `__threading_call__` method to make API calls in parallel by project or resource.
    - Exposes common audit context (`project_ids`, `projects`, `default_project_id`, `audit_config`, `fixer_config`) to subclasses.

### Exception Handling

- **Location:** [`prowler/providers/gcp/exceptions/exceptions.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/gcp/exceptions/exceptions.py)
- **Purpose:** Custom exception classes for GCP-specific error handling, such as credential, session, and project access errors.

### Session and Utility Helpers

- **Location:** [`prowler/providers/gcp/lib/`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/gcp/lib/)
- **Purpose:** Helpers for argument parsing, mutelist management, and other cross-cutting concerns.

## Specific Patterns in GCP Services

The generic service pattern is described in the [service page](./services.md#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:

- Directly in the code, in [`prowler/providers/gcp/services/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/gcp/services)
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.

The best reference for implementing a new service is the [service implementation documentation](./services.md#adding-a-new-service), taking other already implemented services as reference. The next subsection lists common patterns that are used across all GCP services, followed by a short illustrative sketch.

### GCP Service Common Patterns

- Services communicate with GCP using the Google Cloud Python SDK; the documentation for all the services can be found [here](https://cloud.google.com/python/docs/reference).
- Every GCP service class inherits from `GCPService`, ensuring access to session, identity, configuration, and client utilities.
- The constructor (`__init__`) always calls `super().__init__` with the service name, provider, region (default "global"), and API version (default "v1"). Usually, the service name is the class name in lowercase, so it is called like `super().__init__(__class__.__name__, provider)`.
- Resource containers **must** be initialized in the constructor, typically as dictionaries keyed by resource ID, where the value is the resource object.
- Only projects with the API enabled are included in the audit scope.
- Resource discovery and attribute collection can be parallelized using `self.__threading_call__`, typically by region/zone or resource.
- All GCP resources are represented as Pydantic `BaseModel` classes, providing type safety and structured access to resource attributes.
- All GCP API calls are wrapped in try/except blocks, always logging errors.
- Tags and additional attributes that cannot be retrieved from the default call should be collected and stored for each resource using dedicated methods and threading.
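To make these patterns concrete, here is a minimal, hypothetical service sketch. The `Widgets` service, its `widgets()` API surface, and the `Widget` model fields are invented for illustration and do not correspond to a real GCP API; only the structure mirrors real services:

```python
from pydantic import BaseModel

from prowler.lib.logger import logger
from prowler.providers.gcp.lib.service.service import GCPService


class Widget(BaseModel):
    id: str
    name: str
    project_id: str


class Widgets(GCPService):
    def __init__(self, provider):
        # Service name, provider, default region "global" and API version "v1"
        super().__init__(__class__.__name__, provider)
        # Resource container keyed by resource ID
        self.widgets = {}
        # Collect resources in parallel, one call per project
        self.__threading_call__(self._list_widgets, self.project_ids)

    def _list_widgets(self, project_id):
        try:
            # Hypothetical API call; real services page through the SDK responses
            items = (
                self.client.widgets().list(project=project_id).execute().get("items", [])
            )
            for item in items:
                self.widgets[item["id"]] = Widget(
                    id=item["id"],
                    name=item["name"],
                    project_id=project_id,
                )
        except Exception as error:
            # Always log errors instead of letting one project abort the scan
            logger.error(f"{project_id} -- {error.__class__.__name__}: {error}")
```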
## Specific Patterns in GCP Checks

The GCP checks pattern is described in the [checks page](./checks.md). You can find all the currently implemented checks:

- Directly in the code, within each service folder, where each check has its own folder named after the check. (e.g. [`prowler/providers/gcp/services/iam/iam_sa_user_managed_key_unused/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/gcp/services/iam/iam_sa_user_managed_key_unused))
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.

The best reference for implementing a new check is the [GCP check implementation documentation](./checks.md#creating-a-check), taking other similar checks as reference.

### Check Report Class

The `Check_Report_GCP` class models a single finding for a GCP resource in a check report. It is defined in [`prowler/lib/check/models.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py) and inherits from the generic `Check_Report` base class.

#### Purpose

`Check_Report_GCP` extends the base report structure with GCP-specific fields, enabling detailed tracking of the resource, project, and location associated with each finding.

#### Constructor and Attribute Population

When you instantiate `Check_Report_GCP`, you must provide the check metadata and a resource object. The class will attempt to automatically populate its GCP-specific attributes from the resource, using the following logic (in order of precedence):

- **`resource_id`**:
    - Uses the explicit `resource_id` argument if provided.
    - Otherwise, uses `resource.id` if present.
    - Otherwise, uses `resource.name` if present.
    - Defaults to an empty string if none are available.

- **`resource_name`**:
    - Uses the explicit `resource_name` argument if provided.
    - Otherwise, uses `resource.name` if present.
    - Defaults to an empty string.

- **`project_id`**:
    - Uses the explicit `project_id` argument if provided.
    - Otherwise, uses `resource.project_id` if present.
    - Defaults to an empty string.

- **`location`**:
    - Uses the explicit `location` argument if provided.
    - Otherwise, uses `resource.location` if present.
    - Otherwise, uses `resource.region` if present.
    - Defaults to "global" if none are available.

All these attributes can be overridden by passing the corresponding argument to the constructor. If the resource object does not contain the required attributes, you must set them manually.

Other attributes are inherited from the `Check_Report` class; of these, you **always** have to set the `status` and `status_extended` attributes in the check logic.

#### Example Usage

```python
report = Check_Report_GCP(
    metadata=check_metadata,
    resource=resource_object,
    resource_id="custom-id",        # Optional override
    resource_name="custom-name",    # Optional override
    project_id="my-gcp-project",    # Optional override
    location="us-central1"          # Optional override
)
report.status = "PASS"
report.status_extended = "Resource is compliant."
```
# GitHub Provider

This page details the [GitHub](https://github.com/) provider implementation in Prowler.

By default, Prowler will audit the GitHub account - scanning all repositories, organizations, and applications that your configured credentials can access. To configure it, follow the [getting started](../index.md#github) page.

## GitHub Provider Classes Architecture

The GitHub provider implementation follows the general [Provider structure](./provider.md). This section focuses on the GitHub-specific implementation, highlighting how the generic provider concepts are realized for GitHub in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see [Provider documentation](./provider.md).

### `GithubProvider` (Main Class)

- **Location:** [`prowler/providers/github/github_provider.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/github/github_provider.py)
- **Base Class:** Inherits from `Provider` (see [base class details](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/common/provider.py)).
- **Purpose:** Central orchestrator for GitHub-specific logic, session management, credential validation, and configuration.
- **Key GitHub Responsibilities:**
    - Initializes and manages GitHub sessions (supports Personal Access Token, OAuth App, and GitHub App authentication).
    - Validates credentials and sets up the GitHub identity context.
    - Loads and manages configuration, mutelist, and fixer settings.
    - Provides properties and methods for downstream GitHub service classes to access session, identity, and configuration data.

### Data Models

- **Location:** [`prowler/providers/github/models.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/github/models.py)
- **Purpose:** Define structured data for GitHub identity, session, and output options.
- **Key GitHub Models:**
    - `GithubSession`: Holds authentication tokens and keys for the session.
    - `GithubIdentityInfo`, `GithubAppIdentityInfo`: Store account or app identity metadata.

### `GithubService` (Service Base Class)

- **Location:** [`prowler/providers/github/lib/service/service.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/github/lib/service/service.py)
- **Purpose:** Abstract base class for all GitHub service-specific classes.
- **Key GitHub Responsibilities:**
    - Receives a `GithubProvider` instance to access session, identity, and configuration.
    - Manages GitHub API clients for the authenticated user or app.
    - Exposes common audit context (`audit_config`, `fixer_config`) to subclasses.

### Exception Handling

- **Location:** [`prowler/providers/github/exceptions/exceptions.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/github/exceptions/exceptions.py)
- **Purpose:** Custom exception classes for GitHub-specific error handling, such as credential and session errors.

### Session and Utility Helpers

- **Location:** [`prowler/providers/github/lib/`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/github/lib/)
- **Purpose:** Helpers for argument parsing, mutelist management, and other cross-cutting concerns.

## Specific Patterns in GitHub Services

The generic service pattern is described in the [service page](./services.md#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:

- Directly in the code, in [`prowler/providers/github/services/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/github/services)
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.

The best reference for implementing a new service is the [service implementation documentation](./services.md#adding-a-new-service), taking other already implemented services as reference. A short illustrative sketch follows the list of common patterns below.

### GitHub Service Common Patterns

- Services communicate with GitHub using the PyGithub Python SDK. See the [official documentation](https://pygithub.readthedocs.io/).
- Every GitHub service class inherits from `GithubService`, ensuring access to session, identity, configuration, and client utilities.
- The constructor (`__init__`) always calls `super().__init__` with the service name and provider (e.g. `super().__init__(__class__.__name__, provider)`). Ensure that the service name used in PyGithub matches the one passed to the constructor. Usually `__class__.__name__` is used because it is the same as the class name.
- Resource containers **must** be initialized in the constructor, typically as dictionaries keyed by resource ID or name.
- All GitHub resources are represented as Pydantic `BaseModel` classes, providing type safety and structured access to resource attributes.
- GitHub API calls are wrapped in try/except blocks, always logging errors.
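To make these patterns concrete, here is a minimal, hypothetical service sketch. The PyGithub calls (`get_user().get_repos()`) are real SDK methods, but the `Repo` model and the assumption that `self.clients` exposes a list of authenticated PyGithub clients are illustrative:

```python
from pydantic import BaseModel

from prowler.lib.logger import logger
from prowler.providers.github.lib.service.service import GithubService


class Repo(BaseModel):
    id: int
    name: str
    private: bool


class Repository(GithubService):
    def __init__(self, provider):
        super().__init__(__class__.__name__, provider)
        # Resource container keyed by resource ID
        self.repositories = self._list_repositories()

    def _list_repositories(self):
        repositories = {}
        try:
            # Assumes the first client is an authenticated PyGithub client
            for repo in self.clients[0].get_user().get_repos():
                repositories[repo.id] = Repo(
                    id=repo.id,
                    name=repo.name,
                    private=repo.private,
                )
        except Exception as error:
            # Always log errors instead of aborting the scan
            logger.error(f"{error.__class__.__name__}: {error}")
        return repositories
```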
## Specific Patterns in GitHub Checks

The GitHub checks pattern is described in the [checks page](./checks.md). You can find all the currently implemented checks in:

- Directly in the code, within each service folder, where each check has its own folder named after the check. (e.g. [`prowler/providers/github/services/repository/repository_secret_scanning_enabled/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/github/services/repository/repository_secret_scanning_enabled))
- In the [Prowler Hub](https://hub.prowler.com/) for a more human-readable view.

The best reference for implementing a new check is the [GitHub check implementation documentation](./checks.md#creating-a-check), taking other checks as reference.

### Check Report Class

The `CheckReportGithub` class models a single finding for a GitHub resource in a check report. It is defined in [`prowler/lib/check/models.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py) and inherits from the generic `Check_Report` base class.

#### Purpose

`CheckReportGithub` extends the base report structure with GitHub-specific fields, enabling detailed tracking of the resource, name, and owner associated with each finding.

#### Constructor and Attribute Population

When you instantiate `CheckReportGithub`, you must provide the check metadata and a resource object. The class will attempt to automatically populate its GitHub-specific attributes from the resource, using the following logic (in order of precedence):

- **`resource_id`**:
    - Uses the explicit `resource_id` argument if provided.
    - Otherwise, uses `resource.id` if present.
    - Defaults to an empty string if not available.

- **`resource_name`**:
    - Uses the explicit `resource_name` argument if provided.
    - Otherwise, uses `resource.name` if present.
    - Defaults to an empty string if not available.

- **`owner`**:
    - Uses the explicit `owner` argument if provided.
    - Otherwise, uses `resource.owner` for repositories and `resource.name` for organizations.
    - Defaults to an empty string if not available.

If the resource object does not contain the required attributes, you must set them manually in the check logic.

Other attributes are inherited from the `Check_Report` class; of these, you **always** have to set the `status` and `status_extended` attributes in the check logic.

#### Example Usage

```python
report = CheckReportGithub(
    metadata=check_metadata,
    resource=resource_object
)
report.status = "PASS"
report.status_extended = "Resource is compliant."
```
# Integration Tests

Coming soon ...
## Introduction

Integrating Prowler with external tools enhances its functionality and enables seamless workflow automation. Prowler supports a variety of integrations to optimize security assessments and reporting.

### Supported Integration Targets

- Messaging Platforms – Example: Slack

- Project Management Tools – Example: Jira

- Cloud Services – Example: AWS Security Hub

### Integration Guidelines

To integrate Prowler with a specific product:

* Refer to the [Prowler Developer Guide](https://docs.prowler.com/projects/prowler-open-source/en/latest/) to understand its architecture and integration mechanisms.

* Identify the most suitable integration method for the intended platform.

## Steps to Create an Integration

### Defining the Integration Purpose

* Before implementing an integration, clearly define its objective. Common purposes include:

    * Sending Prowler findings to a platform for alerting, tracking, or further analysis.

* For inspiration and implementation examples, review the existing integrations in the [`prowler/lib/outputs`](https://github.com/prowler-cloud/prowler/tree/master/prowler/lib/outputs) folder.

### Developing the Integration

* Script Development:

    * Write a script to process Prowler’s output and interact with the target platform’s API.

    * If the goal is to send findings, parse Prowler’s results and use the platform’s API to create entries or notifications.

* Configuration:

    * Ensure the script supports environment-specific settings (a small sketch follows this list), such as:

        - API endpoints

        - Authentication tokens

        - Any necessary configurable parameters.
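For instance, a small, hypothetical sketch of reading such settings from environment variables (the variable names are illustrative):

```python
import os

# Hypothetical environment-specific settings for an integration script
API_ENDPOINT = os.environ.get("EXAMPLE_API_ENDPOINT", "https://api.example.com")
API_TOKEN = os.environ["EXAMPLE_API_TOKEN"]  # Required; fails fast if missing
```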
### Fundamental Structure

* Integration Class:

    * To implement an integration, create a class that encapsulates the required attributes and methods for interacting with the target platform. Example: Jira Integration

```python title="Jira Class"
class Jira:
    """
    Jira class to interact with the Jira API

    [Note]
    This integration is limited to a single Jira Cloud instance, meaning all issues will be created under the same Jira Cloud ID. Future improvements will include the ability to specify a Jira Cloud ID for users associated with multiple accounts.

    Attributes:
        - _redirect_uri: The redirect URI used
        - _client_id: The client identifier
        - _client_secret: The client secret
        - _access_token: The access token
        - _refresh_token: The refresh token
        - _expiration_date: The authentication expiration
        - _cloud_id: The cloud identifier
        - _scopes: The scopes needed to authenticate, read:jira-user read:jira-work write:jira-work
        - AUTH_URL: The URL to authenticate with Jira
        - PARAMS_TEMPLATE: The template for the parameters to authenticate with Jira
        - TOKEN_URL: The URL to get the access token from Jira
        - API_TOKEN_URL: The URL to get the accessible resources from Jira

    Methods:
        - __init__: Initializes the Jira object
        - input_authorization_code: Inputs the authorization code
        - auth_code_url: Generates the URL to authorize the application
        - get_auth: Gets the access token and refreshes it
        - get_cloud_id: Gets the cloud identifier from Jira
        - get_access_token: Gets the access token
        - refresh_access_token: Refreshes the access token from Jira
        - test_connection: Tests the connection to Jira and returns a Connection object
        - get_projects: Gets the projects from Jira
        - get_available_issue_types: Gets the available issue types for a project
        - send_findings: Sends the findings to Jira and creates an issue

    Raises:
        - JiraGetAuthResponseError: Failed to get the access token and refresh token
        ... (remaining exceptions elided) ...
    """

    # More properties and methods
```
* Validating Credentials or Tokens

    * To ensure a successful connection to the target platform, implement a method that validates authentication credentials or tokens.

#### Method Implementation

The following example demonstrates the `test_connection` method for the `Jira` class (the full implementation is abridged here; elided parts are marked):

```python title="Test connection"
@staticmethod
def test_connection(
    # ... arguments elided ...
):
    """Test the connection to Jira

    Args:
        - redirect_uri: The redirect URI used
        - client_id: The client identifier
        - client_secret: The client secret
        - raise_on_exception: Whether to raise an exception or not
    """
    # ... connection logic elided ...
    return Connection(is_connected=False, error=error)
```
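A hypothetical usage of this method might look like the following; the argument values are placeholders, and the `is_connected` and `error` fields follow the `Connection` object shown above:

```python
connection = Jira.test_connection(
    redirect_uri="https://example.com/callback",
    client_id="<client-id>",
    client_secret="<client-secret>",
    raise_on_exception=False,
)
if connection.is_connected:
    print("Jira connection OK")
else:
    print(f"Jira connection failed: {connection.error}")
```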
* Send Findings Method:

    * Add a method to send Prowler findings to the target platform, adhering to its API specifications.

#### Method Implementation

The following example demonstrates the `send_findings` method for the `Jira` class (the full implementation is abridged here; elided parts are marked):

```python title="Send findings method"
def send_findings(
    self,
    # ... arguments elided ...
):
    # ... method body elided ...
```
### Testing the Integration

* Conduct integration testing in a controlled environment to validate expected behavior. Ensure the following:

    * Transmission Accuracy – Verify that Prowler findings are correctly sent and processed by the target platform.

    * Error Handling – Simulate edge cases to assess robustness and failure recovery mechanisms.

### Documentation

* Provide clear, detailed documentation for your integration. Ensure the following elements are included:

    * Setup Instructions – List all necessary dependencies and installation steps.

    * Configuration Details – Specify required environment variables, authentication steps, etc.

    * Example Use Cases – Provide practical scenarios demonstrating functionality.

    * Troubleshooting Guide – Document common issues and resolution steps.

* Comprehensive and clear documentation improves maintainability and simplifies onboarding.
# Introduction to developing in Prowler

## Extending Prowler

Prowler can be extended in various ways, with common use cases including:

- New security checks
- New compliance frameworks
- New output formats
- New integrations
- New proposed features

All the relevant information for these cases is included in this guide.

## Getting the Code and Installing All Dependencies

### Prerequisites

Before proceeding, ensure the following:

- Git is installed.
- Python 3.9 or higher is installed.
- `poetry` is installed to manage dependencies.

### Forking the Prowler Repository

To contribute to Prowler, fork the Prowler GitHub repository. This allows you to propose changes, submit new features, and fix bugs. For guidance on forking, refer to the [official GitHub documentation](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo?tool=webui#forking-a-repository).

### Cloning Your Forked Repository

Once your fork is created, clone it using the following commands:

```shell
git clone https://github.com/<your-github-user>/prowler
cd prowler
```

### Dependency Management and Environment Isolation

To prevent conflicts between environments, we recommend using `poetry`, a Python dependency management solution. Install it by following the [installation instructions](https://python-poetry.org/docs/#installation).

### Installing Dependencies

To install all required dependencies, including those needed for development, run:

```shell
poetry install --with dev
eval $(poetry env activate)
```

???+ important
    Starting from Poetry v2.0.0, `poetry shell` has been deprecated in favor of `poetry env activate`.
    If your Poetry version is below 2.0.0, you must keep using `poetry shell` to activate your environment.
    If in doubt, consult the [Poetry environment activation guide](https://python-poetry.org/docs/managing-environments/#activating-the-environment).

## Contributing to Prowler

### Ways to Contribute

Here are some ideas for collaborating with Prowler:

1. **Review Current Issues**: Check out our [GitHub Issues](https://github.com/prowler-cloud/prowler/issues) page. We often tag issues as `good first issue`; these are perfect for new contributors because they are typically well-defined and manageable in scope.

2. **Expand Prowler's Capabilities**: Prowler is constantly evolving, and you can be a part of its growth. Whether you are adding checks, supporting new services, or introducing integrations, your contributions help improve the tool for everyone. Here is how you can get involved:

    - **Adding New Checks**
      Want to improve Prowler's detection capabilities for your favorite cloud provider? You can contribute by writing new checks. To get started, follow the [create a new check guide](./checks.md).

    - **Adding New Services**
      Is a key service for your favorite cloud provider missing? Add it to Prowler! To add a new service, check out the [create a new service guide](./services.md). Do not forget to include relevant checks to validate functionality.

    - **Adding New Providers**
      If you would like to extend Prowler to work with a new cloud provider, follow the [create a new provider guide](./provider.md). This typically involves setting up new services and checks to ensure compatibility.

    - **Adding New Output Formats**
      Want to tailor how results are displayed or exported? You can add custom output formats by following the [create a new output format guide](./outputs.md).

    - **Adding New Integrations**
      Prowler can work with other tools and platforms through integrations. If you would like to add one, see the [create a new integration guide](./integrations.md).

    - **Proposing or Implementing Features**
      Got an idea to make Prowler better? Whether it is a brand-new feature or an enhancement to an existing one, you are welcome to propose it or help implement community-requested improvements.

3. **Improve Documentation**: Help make Prowler more accessible by enhancing our documentation, fixing typos, or adding examples and tutorials. See [how we write our documentation](./documentation.md).

4. **Bug Fixes**: If you find any issues or bugs, report them on the [GitHub Issues](https://github.com/prowler-cloud/prowler/issues) page; if you want, you can also fix them yourself.

Remember, our community is here to help! If you need guidance, do not hesitate to ask questions in the issues or join our [Slack workspace](https://goto.prowler.com/slack).

### Pre-Commit Hooks

This repository uses Git pre-commit hooks managed by the [pre-commit](https://pre-commit.com/) tool, which is installed with `poetry install --with dev`. Next, run the following command in the root of this repository:

```shell
pre-commit install
```

Successful installation should produce the following output:

```shell
pre-commit installed at .git/hooks/pre-commit
```

### Code Quality and Security Checks

Before merging pull requests, several automated checks and utilities ensure code security and updated dependencies:

???+ note
    These tools should already be installed if you ran `poetry install --with dev`.

- [`bandit`](https://pypi.org/project/bandit/) for code security review.
- [`safety`](https://pypi.org/project/safety/) and [`dependabot`](https://github.com/features/security) for dependencies.
- [`hadolint`](https://github.com/hadolint/hadolint) and [`dockle`](https://github.com/goodwithtech/dockle) for container security.
- [`Snyk`](https://docs.snyk.io/integrations/snyk-container-integrations/container-security-with-docker-hub-integration) for container security in Docker Hub.
- [`clair`](https://github.com/quay/clair) for container security in Amazon ECR.
- [`vulture`](https://pypi.org/project/vulture/), [`flake8`](https://pypi.org/project/flake8/), [`black`](https://pypi.org/project/black/), and [`pylint`](https://pypi.org/project/pylint/) for formatting and best practices.

Additionally, ensure the latest version of [`TruffleHog`](https://github.com/trufflesecurity/trufflehog) is installed to scan for sensitive data in the code. Follow the official [installation guide](https://github.com/trufflesecurity/trufflehog?tab=readme-ov-file#floppy_disk-installation) for setup.

### Dependency Management

All dependencies are listed in the `pyproject.toml` file.

For proper code documentation, follow the practices described in the [Google Python Style Guide - Comments and Docstrings](https://github.com/google/styleguide/blob/gh-pages/pyguide.md#38-comments-and-docstrings). For example:
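
A minimal illustration of that docstring style (a generic example, not taken from the Prowler codebase):

```python
def filter_findings(findings: list, status: str) -> list:
    """Filter findings by their status.

    Args:
        findings: Findings produced by a scan.
        status: Status value to keep, e.g. "FAIL".

    Returns:
        The findings whose status matches the given value.
    """
    return [finding for finding in findings if finding.status == status]
```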

???+ note
    If you encounter issues when committing to the Prowler repository, use the `--no-verify` flag with the `git commit` command.

### Repository Folder Structure

Understanding the layout of the Prowler codebase will help you quickly find where to add new features, checks, or integrations. The following is a high-level overview from the root of the repository:

```
prowler/
├── prowler/            # Main source code for Prowler SDK (CLI, providers, services, checks, compliances, config, etc.)
├── api/                # API server and related code
├── dashboard/          # Local Dashboard extracted from the CLI output
├── ui/                 # Web UI components
├── util/               # Utility scripts and helpers
├── tests/              # Prowler SDK test suite
├── docs/               # Documentation, including this guide
├── examples/           # Example output formats for providers and scripts
├── permissions/        # Permission-related files and policies
├── contrib/            # Community-contributed scripts or modules
├── kubernetes/         # Kubernetes deployment files
├── .github/            # GitHub related files (workflows, issue templates, etc.)
├── pyproject.toml      # Python project configuration (Poetry)
├── poetry.lock         # Poetry lock file
├── README.md           # Project overview and getting started
├── Makefile            # Common development commands
├── Dockerfile          # SDK Docker container
├── docker-compose.yml  # Prowler App Docker compose
└── ...                 # Other supporting files
```

## Pull Request Checklist

When creating or reviewing a pull request in https://github.com/prowler-cloud/prowler, follow [this checklist](https://github.com/prowler-cloud/prowler/blob/master/.github/pull_request_template.md#checklist).

## Contribution Appreciation

If you enjoy swag, we'd love to thank you for your contribution with laptop stickers or other Prowler merchandise!

To request swag, share your pull request details in our [Slack workspace](https://goto.prowler.com/slack). You can also reach out to Toni de la Fuente on [Twitter](https://twitter.com/ToniBlyx); his DMs are open!

# Kubernetes Provider

This page details the [Kubernetes](https://kubernetes.io/) provider implementation in Prowler.

By default, Prowler will audit all namespaces in the Kubernetes cluster accessible by the configured context. To configure it, follow the [getting started](../index.md#kubernetes) page.

## Kubernetes Provider Classes Architecture

The Kubernetes provider implementation follows the general [Provider structure](./provider.md). This section focuses on the Kubernetes-specific implementation, highlighting how the generic provider concepts are realized for Kubernetes in Prowler. For a full overview of the provider pattern, base classes, and extension guidelines, see the [Provider documentation](./provider.md).

### `KubernetesProvider` (Main Class)

- **Location:** [`prowler/providers/kubernetes/kubernetes_provider.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/kubernetes/kubernetes_provider.py)
- **Base Class:** Inherits from `Provider` (see [base class details](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/common/provider.py)).
- **Purpose:** Central orchestrator for Kubernetes-specific logic, session management, context and namespace discovery, credential validation, and configuration.
- **Key Kubernetes Responsibilities:**
    - Initializes and manages Kubernetes sessions (supports kubeconfig file or content, context selection, and namespace scoping).
    - Validates credentials and sets up the Kubernetes identity context.
    - Loads and manages configuration, mutelist, and fixer settings.
    - Discovers accessible namespaces and cluster metadata.
    - Provides properties and methods for downstream Kubernetes service classes to access session, identity, and configuration data.

### Data Models

- **Location:** [`prowler/providers/kubernetes/models.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/kubernetes/models.py)
- **Purpose:** Define structured data for Kubernetes identity and session information.
- **Key Kubernetes Models:**
    - `KubernetesIdentityInfo`: Holds Kubernetes identity metadata, such as context, cluster, and user.
    - `KubernetesSession`: Stores the Kubernetes API client and context information. An illustrative sketch of both models follows below.
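
As an illustration, the two models can be approximated as follows. This is a simplified sketch based on the field descriptions above; the exact attributes in `models.py` may differ:

```python
from dataclasses import dataclass

from kubernetes import client


@dataclass
class KubernetesIdentityInfo:
    context: str  # active kubeconfig context name
    cluster: str  # cluster that the context points at
    user: str     # user the context authenticates as


@dataclass
class KubernetesSession:
    api_client: client.ApiClient  # configured Kubernetes API client
    context: dict                 # raw context information from the kubeconfig
```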

### `KubernetesService` (Service Base Class)

- **Location:** [`prowler/providers/kubernetes/lib/service/service.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/kubernetes/lib/service/service.py)
- **Purpose:** Abstract base class from which all Kubernetes service-specific classes inherit. It implements the generic service pattern (described in the [service page](./services.md#service-base-class)) specifically for Kubernetes.
- **Key Kubernetes Responsibilities:**
    - Receives a `KubernetesProvider` instance to access session, identity, and configuration.
    - Manages the Kubernetes API client and context.
    - Provides a `__threading_call__` method to make API calls in parallel by resource.
    - Exposes common audit context (`context`, `api_client`, `audit_config`, `fixer_config`) to subclasses.

### Exception Handling

- **Location:** [`prowler/providers/kubernetes/exceptions/exceptions.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/kubernetes/exceptions/exceptions.py)
- **Purpose:** Custom exception classes for Kubernetes-specific error handling, such as session, API, and configuration errors.

### Session and Utility Helpers

- **Location:** [`prowler/providers/kubernetes/lib/`](https://github.com/prowler-cloud/prowler/blob/master/prowler/providers/kubernetes/lib/)
- **Purpose:** Helpers for argument parsing, mutelist management, and other cross-cutting concerns.

## Specific Patterns in Kubernetes Services

The generic service pattern is described in the [service page](./services.md#service-structure-and-initialisation). You can find all the currently implemented services in the following locations:

- Directly in the code, at [`prowler/providers/kubernetes/services/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/kubernetes/services)
- In the [Prowler Hub](https://hub.prowler.com/), for a more human-readable view.

The best way to understand how to implement a new service is to follow the [service implementation documentation](./services.md#adding-a-new-service) and take other already implemented services as a reference.

### Kubernetes Service Common Patterns

- Services communicate with Kubernetes using the Kubernetes Python SDK. See the [official documentation](https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md/).
- Every Kubernetes service class inherits from `KubernetesService`, ensuring access to session, identity, configuration, and client utilities.
- The constructor (`__init__`) always calls `super().__init__` with the provider object and initializes resource containers (typically as dictionaries keyed by resource UID or name).
- Resource discovery and attribute collection can be parallelized using `self.__threading_call__`.
- All Kubernetes resources are represented as Pydantic `BaseModel` classes, providing type safety and structured access to resource attributes.
- Kubernetes API calls are wrapped in try/except blocks, always logging errors.
- Additional attributes that cannot be retrieved from the default call should be collected and stored for each resource using dedicated methods and threading. A sketch combining these patterns follows below.
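
The following minimal sketch combines these patterns. The service name, model fields, and API call are illustrative assumptions, not an existing Prowler service:

```python
from kubernetes import client
from pydantic import BaseModel

from prowler.lib.logger import logger
from prowler.providers.kubernetes.lib.service.service import KubernetesService


class Pod(BaseModel):
    # Hypothetical resource model: only the attributes the checks need.
    name: str
    namespace: str
    service_account: str = ""


class ExamplePods(KubernetesService):
    # Hypothetical service that collects pods from the audited cluster.
    def __init__(self, provider):
        super().__init__(provider)
        self.client = client.CoreV1Api(self.api_client)
        self.pods = {}  # resources keyed by UID
        self._list_pods()

    def _list_pods(self):
        try:
            for pod in self.client.list_pod_for_all_namespaces().items:
                self.pods[pod.metadata.uid] = Pod(
                    name=pod.metadata.name,
                    namespace=pod.metadata.namespace,
                    service_account=pod.spec.service_account_name or "",
                )
        except Exception as error:
            # API errors are always logged, never silently swallowed.
            logger.error(
                f"{error.__class__.__name__}[{error.__traceback__.tb_lineno}]: {error}"
            )
```

In a real service, `_list_pods` and any per-resource detail collection would typically be dispatched through `self.__threading_call__` so that resources are processed in parallel.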

## Specific Patterns in Kubernetes Checks

The Kubernetes checks pattern is described in the [checks page](./checks.md). You can find all the currently implemented checks in:

- Directly in the code, within each service folder; each check has its own folder named after the check (e.g., [`prowler/providers/kubernetes/services/rbac/rbac_minimize_wildcard_use_roles/`](https://github.com/prowler-cloud/prowler/tree/master/prowler/providers/kubernetes/services/rbac/rbac_minimize_wildcard_use_roles)).
- In the [Prowler Hub](https://hub.prowler.com/), for a more human-readable view.

The best way to understand how to implement a new check is to follow the [Kubernetes check implementation documentation](./checks.md#creating-a-check) and take other checks as a reference; a hypothetical sketch follows below.
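
This sketch builds on the hypothetical service above; the check name, client module, and compliance condition are illustrative assumptions:

```python
from prowler.lib.check.models import Check, Check_Report_Kubernetes

# Hypothetical client module exposing a shared ExamplePods instance.
from prowler.providers.kubernetes.services.example.example_client import example_client


class example_pods_use_dedicated_service_account(Check):
    def execute(self) -> list:
        findings = []
        for pod in example_client.pods.values():
            # The resource object auto-populates resource_id, resource_name
            # and namespace (see the Check Report Class section below).
            report = Check_Report_Kubernetes(metadata=self.metadata(), resource=pod)
            if pod.service_account and pod.service_account != "default":
                report.status = "PASS"
                report.status_extended = f"Pod {pod.name} uses a dedicated service account."
            else:
                report.status = "FAIL"
                report.status_extended = f"Pod {pod.name} uses the default service account."
            findings.append(report)
        return findings
```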

### Check Report Class

The `Check_Report_Kubernetes` class models a single finding for a Kubernetes resource in a check report. It is defined in [`prowler/lib/check/models.py`](https://github.com/prowler-cloud/prowler/blob/master/prowler/lib/check/models.py) and inherits from the generic `Check_Report` base class.

#### Purpose

`Check_Report_Kubernetes` extends the base report structure with Kubernetes-specific fields, enabling detailed tracking of the resource, name, and namespace associated with each finding.

#### Constructor and Attribute Population

When you instantiate `Check_Report_Kubernetes`, you must provide the check metadata and a resource object. The class will attempt to automatically populate its Kubernetes-specific attributes from the resource, using the following logic (in order of precedence):

- **`resource_id`**:
    - Uses `resource.uid` if present.
    - Otherwise, uses `resource.name` if present.
    - Defaults to an empty string if neither is available.

- **`resource_name`**:
    - Uses `resource.name` if present.
    - Defaults to an empty string if not available.

- **`namespace`**:
    - Uses `resource.namespace` if present.
    - Defaults to "cluster-wide" for cluster-scoped resources.

If the resource object does not contain the required attributes, you must set them manually in the check logic.

Other attributes are inherited from the `Check_Report` class; in particular, you must **always** set the `status` and `status_extended` attributes in the check logic.

#### Example Usage

```python
report = Check_Report_Kubernetes(
    metadata=check_metadata,
    resource=resource_object
)
report.status = "PASS"
report.status_extended = "Resource is compliant."
```