Take a list of domains and scan for endpoints, secrets, API keys, file extensions, tokens and more...

Overview



Coded with 💙 by edoardottt.

Preview | Installation | Get Started | Examples | Contributing

Preview 📊

[asciicast demo recording]

Installation 📡

You need Go.

  • Linux

    • git clone https://github.com/edoardottt/cariddi.git
    • cd cariddi
    • go get
    • make linux (to install)
    • make unlinux (to uninstall)

    Or in one line: git clone https://github.com/edoardottt/cariddi.git; cd cariddi; go get; make linux

  • Windows (the executable works only inside the cariddi folder)

    • git clone https://github.com/edoardottt/cariddi.git
    • cd cariddi
    • go get
    • .\make.bat windows (to install)
    • .\make.bat unwindows (to uninstall)
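
  Or via go install (this route appears in the v1.2.0 release notes below and assumes a recent Go toolchain): go install -v github.com/edoardottt/cariddi/cmd/cariddi@v1.2.0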

Get Started 🎉

cariddi -h prints the help in the command line.

Usage of cariddi:
  -c int
    	Concurrency level. (default 20)
  -d int
    	Delay between a page crawled and another.
  -e	Hunt for juicy endpoints.
  -ef string
    	Use an external file (txt, one per line) to use custom parameters for endpoints hunting.
  -examples
    	Print the examples.
  -ext int
    	Hunt for juicy file extensions. Integer from 1(juicy) to 7(not juicy).
  -h	Print the help.
  -oh string
    	Write the output into an HTML file.
  -ot string
    	Write the output into a TXT file.
  -plain
    	Print only the results.
  -s	Hunt for secrets.
  -sf string
    	Use an external file (txt, one per line) to use custom regexes for secrets hunting.
  -version
    	Print the version.

Examples 💡

  • cat urls | cariddi -version (Print the version)

  • cat urls | cariddi -h (Print the help)

  • cat urls | cariddi -s (Hunt for secrets)

  • cat urls | cariddi -d 2 (Wait 2 seconds between one crawled page and the next)

  • cat urls | cariddi -c 200 (Set the concurrency level to 200)

  • cat urls | cariddi -e (Hunt for juicy endpoints)

  • cat urls | cariddi -plain (Print only the results)

  • cat urls | cariddi -ot target_name (Write the results to a TXT file)

  • cat urls | cariddi -oh target_name (Write the results to an HTML file)

  • cat urls | cariddi -ext 2 (Hunt for juicy files, level 2 of 7)

  • cat urls | cariddi -e -ef endpoints_file (Hunt for custom endpoints)

  • cat urls | cariddi -s -sf secrets_file (Hunt for custom secrets)

  • On Windows, use powershell.exe -Command "cat urls | .\cariddi.exe"
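
As a concrete illustration of the -sf format (one regex per line, per the help text above), a custom secrets file might contain patterns like the following. The regexes here are only illustrative examples, not cariddi's built-in rules:

    AKIA[0-9A-Z]{16}
    ghp_[A-Za-z0-9]{36}
    -----BEGIN RSA PRIVATE KEY-----

With that file saved as secrets_file, the cat urls | cariddi -s -sf secrets_file example above hunts for those patterns.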

Contributing 🛠

Just open an issue or a pull request. See also CONTRIBUTING.md and CODE_OF_CONDUCT.md.

Help me build this!


To do:

  • Tests ( 😂 )

  • Tor support

  • Proxy support

  • Plain output (print only results)

  • HTML output

  • Build an Input Struct and use it as parameter

  • Output color

  • Endpoints (parameters) scan

  • Secrets scan

  • Extensions scan

  • TXT output

License 📝

This repository is under the GNU General Public License v3.0.
Visit edoardoottavianelli.it to contact me.

Comments
  • Can't parse URL list with ports

    Describe the bug: If you provide a URL with a port (not every web app sits on port 80 or 443 =) ), cariddi can't parse it,

    like this:

    echo http://ya.ru:80 | cariddi
    The URL provided is not built in a proper way: http://ya.ru:80
    
    
    opened by kiriknik 8
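
    A quick check with Go's standard library suggests the port itself is not the problem: net/url parses host:port URLs fine, so the validation could compare the parsed hostname instead of the raw host string. A minimal sketch, not cariddi's actual code:

        package main

        import (
            "fmt"
            "net/url"
        )

        func main() {
            u, err := url.Parse("http://ya.ru:80")
            if err != nil {
                fmt.Println("invalid URL:", err)
                return
            }
            fmt.Println(u.Hostname()) // "ya.ru": the port is stripped
            fmt.Println(u.Port())     // "80"
        }
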
  • slice bounds out of range

    Describe the bug: After a while we (for some reason) get a slice out of bounds error:

    goroutine 25981 [running]:
    github.com/edoardottt/cariddi/pkg/crawler.New.func15(0xc072bd1380)
    	/opt/hacking/repository/cariddi/pkg/crawler/colly.go:215 +0x950
    github.com/gocolly/colly.(*Collector).handleOnResponse(0xc072bd1380?, 0xc072bd1380)
    	/home/cyb3rjerry/go/pkg/mod/github.com/gocolly/[email protected]/colly.go:936 +0x1be
    github.com/gocolly/colly.(*Collector).fetch(0xc00018d520, {0x0?, 0x0?}, {0x8c1eaa, 0x3}, 0x1, {0x0?, 0x0}, 0x0?, 0xc0304a96e0, ...)
    	/home/cyb3rjerry/go/pkg/mod/github.com/gocolly/[email protected]/colly.go:621 +0x69b
    created by github.com/gocolly/colly.(*Collector).scrape
    	/home/cyb3rjerry/go/pkg/mod/github.com/gocolly/[email protected]/colly.go:532 +0x645
    

    To Reproduce:

    1. run echo "https://mapbox.com/" | cariddi -s -intensive

    Desktop: Linux cyb3rjerry 5.19.0-23-generic 24-Ubuntu SMP PREEMPT_DYNAMIC Fri Oct 14 15:39:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

    bug Go 
    opened by cyb3rjerry 6
  • Print only the currently visited URL rather than the URL it scraped

    Hey there, this fix should solve issue #79! To solve it I'm using the (*Collector).OnRequest method to print the URL that is actively being visited. Colly's docs also show this method being used for this specific use case, so I'd be tempted to say this is the best way of doing it.

    I've also slipped in some simplifications of for .. range loops that were proposed by go-staticcheck.

    Let me know what you think of it!

    opened by cyb3rjerry 6
  • help

    The build fails with:

        internal/unsafeheader
        compile: version "go1.17.6" does not match go tool version "go1.18.2"
        internal/cpu
        compile: version "go1.17.6" does not match go tool version "go1.18.2"
        internal/abi

    How do I fix it? Thanks.

    opened by sasholy 5
  • Panic while compiling some regex during a find the secrets run

    Describe the bug: Panic while compiling some regex during a find-the-secrets (-s) run. It also happens with the -e flag.

    panic: regexp: Compile(`*`): error parsing regexp: missing argument to repetition operator: `*`
    
    goroutine 1 [running]:
    regexp.MustCompile(0x14fb20a, 0x1, 0x0)
    	/usr/local/Cellar/go/1.16.4/libexec/src/regexp/regexp.go:311 +0x157
    github.com/edoardottt/cariddi/crawler.Crawler(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x2, 0x14, 0x1, 0x0, ...)
    	/Users/sean/repos/cariddi/crawler/colly.go:54 +0x1ce
    main.main()
    	/Users/sean/repos/cariddi/main.go:91 +0x3cf
    

    To Reproduce:

    1. Create urls file with a valid url in it
    2. Run the following command: cat urls | ./cariddi -d 2 -s
    3. See stack trace shortly after launching

    Expected behavior: Cariddi should process the provided site and find any/all secrets.

    Desktop (please complete the following information):

    • OS: Mac OS
    • Version: 11.4 (Big Sur)
    opened by CSBaum 5
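
    A plausible fix direction, sketched under the assumption that the custom patterns come from a user-supplied file (this is not cariddi's actual code): regexp.Compile reports an error instead of panicking like regexp.MustCompile, so an invalid pattern such as `*` can be skipped with a warning.

        package main

        import (
            "fmt"
            "regexp"
        )

        func main() {
            patterns := []string{`api[_-]?key`, `*`} // `*` is not a valid regex
            for _, p := range patterns {
                re, err := regexp.Compile(p)
                if err != nil {
                    fmt.Printf("skipping invalid regex %q: %v\n", p, err)
                    continue
                }
                fmt.Println("compiled:", re.String())
            }
        }
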
  • "domain formatted in a bad way" kills scan and debug doesn't give any info on the URL that caused this

    Describe the bug: Hey there, after running the scanner on an endpoint I noticed that out of the blue I get a domain formatted in a bad way error. However, even when enabling -debug, I get no additional info as to which URL killed the scan. Is this intended?

    To Reproduce:

    1. Run echo "https://xxxxxx.com" | cariddi -s -sf regex.txt -intensive -ot results.txt
    2. Wait
    3. Observe domain formatted in a bad way error

    Expected behavior: Would it not be better to simply skip that URL rather than killing the scan via os.Exit(1)?


    Desktop (please complete the following information):

    • OS: Linux XXXXXXXX 5.19.0-23-generic 24-Ubuntu SMP PREEMPT_DYNAMIC Fri Oct 14 15:39:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    • Version: Cariddi v1.1.9

    Additional context

    Let me know if you'd appreciate a hand in any way!

    bug Go 
    opened by cyb3rjerry 4
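
    A hypothetical skip-and-continue sketch in the spirit of the report (the validation call and URLs are placeholders, not cariddi's code): log the malformed URL and move on rather than calling os.Exit(1).

        package main

        import (
            "fmt"
            "net/url"
        )

        func main() {
            inputs := []string{"https://example.com", "http://bad url"}
            for _, raw := range inputs {
                if _, err := url.ParseRequestURI(raw); err != nil {
                    fmt.Printf("skipping %q: domain formatted in a bad way (%v)\n", raw, err)
                    continue // instead of a fatal exit
                }
                fmt.Println("crawling", raw)
            }
        }
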
  • The URL provided is not built in a proper way: ${URL}

    The URL provided is not built in a proper way: www.edoardoottavianelli.it

    I tried with the sample I saw in the video but it's still not working. I have all the packages and libraries, Go included, so I don't know why it's not working. Have a nice Sunday!

    EDIT: It works using PowerShell but not the terminal. Also some endpoints don't work, maybe because they are private, e.g. https://api.nike.com

    opened by westorm-alt 4
  • -i docs doesn't ignore subdomains containing "docs"

    Describe the bug: When crawling https://mapbox.com/ we notice that "docs.mapbox.com" gets crawled.

    To Reproduce:

    1. Run echo "https://mapbox.com/" | cariddi -s -intensive -i docs
    2. Look at output

    Expected behavior: docs.* should not be crawled.

    Desktop (please complete the following information):

    • OS: Linux WKS-001772 5.19.0-23-generic 24-Ubuntu SMP PREEMPT_DYNAMIC Fri Oct 14 15:39:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    • Version: v1.1.9
    bug help wanted good first issue Go 
    opened by cyb3rjerry 3
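
    A hedged sketch of what enforcing -i on subdomains could look like (this is not the project's actual implementation): check each ignored word against the parsed hostname before visiting.

        package main

        import (
            "fmt"
            "net/url"
            "strings"
        )

        // ignored reports whether the URL's hostname contains any ignored word.
        func ignored(raw string, words []string) bool {
            u, err := url.Parse(raw)
            if err != nil {
                return true // treat unparsable URLs as skippable
            }
            for _, w := range words {
                if strings.Contains(u.Hostname(), w) {
                    return true
                }
            }
            return false
        }

        func main() {
            fmt.Println(ignored("https://docs.mapbox.com/api", []string{"docs"})) // true: skip it
        }
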
  • init tests

    Hello! 👋 With this PR I would like to start adding tests to the project, if you don't mind :)

    utils package coverage: 0 -> 17% 🎉

        ok github.com/edoardottt/cariddi/utils 0.209s coverage: 17.6% of statements

    This is a draft PR, just as an example; I'll be updating it.

    And some questions:

    1. Do you want to use any libraries for tests, like https://github.com/stretchr/testify?
    2. Do you mind if I make several changes within this PR? As in the first commit, there are some changes for reducing allocations and simplifying code.
    opened by w1kend 3
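
    As a rough starting point, a typical Go table-driven test needs no external library (which partly answers question 1). The helper under test here is hypothetical, standing in for a real utils function:

        package utils

        import (
            "strings"
            "testing"
        )

        // RemoveProtocol is a hypothetical helper that strips a leading scheme.
        func RemoveProtocol(s string) string {
            for _, p := range []string{"http://", "https://"} {
                if strings.HasPrefix(s, p) {
                    return strings.TrimPrefix(s, p)
                }
            }
            return s
        }

        func TestRemoveProtocol(t *testing.T) {
            tests := []struct{ in, want string }{
                {"http://example.com", "example.com"},
                {"https://example.com", "example.com"},
                {"example.com", "example.com"},
            }
            for _, tt := range tests {
                if got := RemoveProtocol(tt.in); got != tt.want {
                    t.Errorf("RemoveProtocol(%q) = %q, want %q", tt.in, got, tt.want)
                }
            }
        }
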
  • Regex (or other method) for Intensive mode

    Bug description: When cariddi runs without the -intensive flag set it works fine; all the URLs crawled on the target(s) belong to the input domain(s).
    This is the method used for the normal behaviour (line 108). As you can notice, on line 108 there is a colly option that restricts the crawler to crawl only URLs belonging to the inputted domain(s).

    Instead, when the -intensive flag is set, this is the method used: from line 114 to 119.
    The problem with this method is that there could be some false positives, like facebook.com?q=c.target.com.
    I'm trying to figure out how to pick only target URLs even if cariddi is running in intensive mode.

    Just to be clear:

    • The normal behaviour (without -intensive) is that if you pass target.com as input, cariddi will crawl only (and strictly) URLs belonging to target.com.
    • If the -intensive mode is set, cariddi should crawl all the URLs belonging to *.target.com.

    Additional context Go-colly is the module used for crawling.

    bug enhancement help wanted Go Regex 
    opened by edoardottt 3
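
    One possible approach, sketched with the standard library rather than a regex (not the project's code): parse each crawled URL and accept it only when the hostname itself equals target.com or ends in ".target.com", so a query string like facebook.com?q=c.target.com no longer matches.

        package main

        import (
            "fmt"
            "net/url"
            "strings"
        )

        func belongsTo(raw, domain string) bool {
            u, err := url.Parse(raw)
            if err != nil {
                return false
            }
            h := u.Hostname()
            return h == domain || strings.HasSuffix(h, "."+domain)
        }

        func main() {
            fmt.Println(belongsTo("https://docs.target.com/x", "target.com"))           // true
            fmt.Println(belongsTo("https://facebook.com?q=c.target.com", "target.com")) // false
        }
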
  • Initial call to robots.txt and sitemap.xml doesn't enforce ignored words

    After reviewing the code I've noticed that these two files don't follow the enforced ignore list that's passed via -i.

    @edoardottt Since it's only two calls at the very beginning, do you feel like we should enforce the check, or should we leave it as is? As the creator I feel like you have the best insight on what behavior should or should not be enforced :)

    https://github.com/edoardottt/cariddi/blob/f52de6c99d7689876aff55662d70f216407be6c4/pkg/crawler/colly.go#L263-L286

    opened by cyb3rjerry 2
  • Problem with Crawling POST Parameters

    Hello developers,

    The tool is great, but after many scans I've discovered and confirmed that it does not crawl and catch all the parameters in pages, especially the parameters in the "Filter Categories"; most of these filters work through POST requests.

    Here is a live example for the Filter Categories: when you check a filter, the parameter is added so you can see it in the URL.

    I tried to scan with several modes (-ext, -e, -c); none of them caught the POST parameters:

    ?manufacturer=1-batella&c=1-baby-food&label=1-new&s[flavour]=mango&price_from=2&price_to=3

    Step 1 https://i.ibb.co/0KNTH20/1.png

    Step 2 https://i.ibb.co/y69q7x7/2.png

    Step 3 https://i.ibb.co/YpD7vWJ/3.png

    I hope my explanation is clear, and I hope the developers find a way to extend the crawling techniques so the tool can also crawl POST requests like these and extract more parameters.

    Best regards, and keep this tool up!

    opened by CostyCrypto 0
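
    For reference, a rough sketch of how a colly-based crawler could submit discovered POST forms to surface their parameters; this is not an existing cariddi feature, and the target URL and field values below are placeholders.

        package main

        import (
            "fmt"
            "strings"

            "github.com/gocolly/colly"
        )

        func main() {
            c := colly.NewCollector()
            c.OnHTML("form", func(e *colly.HTMLElement) {
                if !strings.EqualFold(e.Attr("method"), "post") {
                    return
                }
                action := e.Request.AbsoluteURL(e.Attr("action"))
                data := map[string]string{}
                for _, name := range e.ChildAttrs("input[name]", "name") {
                    data[name] = "test" // dummy value, only to record the parameter name
                }
                fmt.Println("POST", action, "params:", data)
                c.Post(action, data) // error ignored for brevity
            })
            c.Visit("https://shop.example.com") // placeholder target
        }
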
  • Second ctrl+c should force quit the program

    Describe the bug: Pressing CTRL+C once initiates the "smooth" exit correctly. However, pressing it a second time should force quit; currently the program simply hangs.

    To Reproduce:

    1. Run any long scan
    2. Press CTRL+C twice
    3. See error

    Expected behavior: Force quit on the second CTRL+C.


    Desktop: Linux cyb3rjerry 5.19.0-23-generic 24-Ubuntu SMP PREEMPT_DYNAMIC Fri Oct 14 15:39:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

    bug Go 
    opened by cyb3rjerry 4
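
    A minimal sketch of the requested behavior (not the project's code): the first SIGINT starts the graceful shutdown, the second one forces the exit.

        package main

        import (
            "fmt"
            "os"
            "os/signal"
        )

        func main() {
            c := make(chan os.Signal, 2)
            signal.Notify(c, os.Interrupt)
            go func() {
                <-c
                fmt.Println("\nshutting down gracefully, press Ctrl-C again to force quit")
                <-c
                fmt.Println("\nforce quit")
                os.Exit(1)
            }()
            select {} // stands in for the long-running crawl
        }
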
Releases (v1.2.1)
  • v1.2.1 (Nov 15, 2022)

  • v1.2.0 (Nov 15, 2022)

    cariddi v1.2.0 🥳

    • Add Ctrl-C handle
    • Closed Initial call to robots.txt and sitemap.xml don't enforce ignored words #81
    • Closed -i docs doesn't ignore subdomains containing "docs" #79
    • Closed "domain formatted in a bad way" kills scan and debug doesn't give any info on the URL that caused this #78
    • Minor code improvements
    • Minor changes and fixes
    go install -v github.com/edoardottt/cariddi/cmd/cariddi@v1.2.0
    

    Thanks @cyb3rjerry

    If you encounter a problem, just open an issue

  • v1.1.9 (Oct 14, 2022)

    cariddi v1.1.9 🥳

    • Complete Refactoring
    • Updated CodeQL to v2
    • Removed Dockerfile
    • Minor code improvements
    • Minor changes and fixes

    If you encounter a problem, just open an issue

  • v1.1.8 (Sep 4, 2022)

    cariddi v1.1.8 🥳

    • Add AWS cognito pool regex by @rodnt
    • Add -insecure flag to ignore invalid HTTPS certificates by @mrnfrancesco
    • Updated golangci-lint action configuration
    • Updated dependencies
    • Minor code improvements
    • Minor changes and fixes

    If you encounter a problem, just open an issue

  • v1.1.7 (May 3, 2022)

    cariddi v1.1.7 🥳

    • Added -debug option
    • Added golangci-lint action
    • Added make lint action
    • Ignore robots.txt rules by default
    • Add -info option and info regexes
    • Minor changes and fixes

    If you encounter a problem, just open an issue

  • v1.1.6 (Mar 14, 2022)

  • v1.1.5 (Dec 24, 2021)

    cariddi v1.1.5 🥳

    • Added general errors detection
    • Fix slice out of bounds issue
    • Fix Ignore match issue
    • Minor changes and fixes

    If you encounter a problem, just open an issue

  • v1.1.4 (Oct 31, 2021)

  • v1.1.3 (Oct 30, 2021)

  • v1.1.2 (Oct 6, 2021)

  • v1.1.1 (Oct 5, 2021)

    cariddi v1.1.1 🥳

    • Added event on action attribute of form tag (url crawling)
    • Added event on xml sitemaps (url crawling)
    • Code refactoring
    • Add make update command that updates everything
    • Minor changes
  • v1.1 (Sep 28, 2021)

    cariddi v1.1 🎉

    • Add event on action attribute of form HTML tag https://github.com/edoardottt/cariddi/commit/c1ae2036de81991df06b112958fb164a17d19216
    • Add user agent creation to avoid the colly default one https://github.com/edoardottt/cariddi/commit/55fdca038452844bb9463bd91b7bcb246dde5452
    • Add onXML event to catch sitemaps https://github.com/edoardottt/cariddi/commit/41c3c38b5a01dfd5faff7ae6fc43cf4de31b4f11
    • Minor changes and fixes
  • v1.0 (Sep 27, 2021)

    🎉 First release ! 🎉

    Take a list of domains, crawl URLs and scan for endpoints, secrets, API keys, file extensions, tokens and more...
    Try it out! I know there is still work to do, but I think it's ready for the first official release :)
    It crawled more than 130,000 URLs and searched for interesting endpoints, secrets and file extensions on 50+ targets in less than 10 minutes.
    Open an issue for everything. Installation, usage and more details in the README file.

    Enjoy and happy recon! ⚔️
