
Oopsie - Linux

  • justinblawitz
  • Oct 31, 2025
  • 5 min read
  • First, ping the target to confirm it's up, then scan it with nmap -sC -sV {target IP} (default scripts plus service version detection)

  • We find two open TCP ports: 22 running OpenSSH and 80 running an Apache web server. We'll mainly be looking into port 80, the web server.
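  • The recon step above can be sketched as follows. The scan itself needs the live lab target (shown as a placeholder), so the runnable part only demonstrates filtering a saved report; the sample lines are illustrative, mirroring the findings above.

```shell
# The scan itself needs the lab target (placeholder shown):
#   nmap -sC -sV -oN scan.txt <target-ip>   # default scripts + version detection
# Filtering a saved report down to just the open ports:
cat > scan.txt <<'EOF'
22/tcp open  ssh   OpenSSH
80/tcp open  http  Apache httpd
EOF
grep ' open ' scan.txt
```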

  • To begin looking into the web server, we'll browse to the target IP and find a web page for an automotive business.

  • After scrolling to the services section, we find a note explaining that we need to log in to gain access to their services. From this we can deduce that there is a login page on the website that we haven't found yet; to locate it, we can map the website through the proxy in Burpsuite, which can act as a web crawler.

  • A web crawler is a program or automated script which browses the World Wide Web in a methodical, automated manner. This process is called Web crawling or spidering. Many legitimate sites, in particular search engines, use spidering as a means of providing up-to-date data.

  • To configure our browser to send traffic through our proxy, first open the browser's settings, in this case Firefox's.

  • We’ll then search for the keyword proxy which leads us to the Network Settings which we’ll open.

  • Then select Manual proxy configuration, enter 127.0.0.1 in the HTTP Proxy field, and change the port on the right to 8080, where Burpsuite is listening.

  • Next, we’ll open our terminal again and enter burpsuite to open the application.

  • After opening Burpsuite, we'll need to disable interception, as it's enabled by default, in the Proxy tab under the Intercept subtab.

  • Now we'll open the Target tab and refresh the web page in our browser; under the Site map subtab we can view the GET request that was made. We could also enter the target IP in Burpsuite's integrated browser to capture the same request.

  • Looking closer at the requests, we see a URL of interest: /cdn-cgi/login/script.js

  • Entering the URL http://{target IP}/cdn-cgi/login in our browser gives us a login page for the site.

  • Entering some common default credentials like admin/admin or admin/password doesn't give us access, but we can log in as a guest using the option below the login form.

  • Looking at the different options on the web page, the Uploads tab looks like it could be exploited in some way, but clicking on it gives an error saying we need super admin rights.

  • To escalate our privileges, we’ll be checking if cookies and sessions can be manipulated.

  • Cookies are small pieces of data created by the web server, stored by the browser in the computer's file system, and used to identify a user while browsing a website.

  • In Firefox, we can view and change cookies using the developer tools, which also let us examine, edit, and debug HTML, CSS, and JavaScript. To open the dev tools panel, right-click on the webpage and select Inspect Element.

  • After navigating to the Storage tab in the dev tools panel and the Account tab on the web page, we can view our role ("guest") and our user ID ("2233").

  • We also see that the URL ends in id=2; if we change this to id=1, we can view the account ID and name of an admin account, though we don't actually have access to it yet.

  • To gain access to the admin account, we change our cookie values in the dev tools panel to match this information, swapping us over to the admin user.

  • After editing the cookie and changing back to the uploads tab, we see that we now can upload files.

  • The specific shell we'll be using is the php-reverse-shell.php file.

  • We'll need to edit this file to enter our own IP address and listening port, using nano php-reverse-shell.php.

  • We can then upload this file to the website in the upload tab.

  • To locate the folder where the uploaded files are stored, we'll use gobuster to enumerate directory names. The wordlist we'll use in this case is directory-list-2.3-small.txt, which we can easily find using the locate command, specifically locate directory-list-2.3-small.txt.

  • After locating our wordlist, we'll run the command gobuster dir --url http://{target IP} --wordlist /file/path/directory-list-2.3-small.txt -x php

  • Gobuster reveals the /uploads directory, which is most likely the one we're looking for. Browsing to {target IP}/uploads/ shows that we don't have permission to list this directory.

  • Next, we'll set up a netcat listener using nc -lvnp 1234, then request the shell through our browser at http://{target IP}/uploads/php-reverse-shell.php. The reverse shell connects back to netcat, where we can run commands like ls and whoami.

  • Now we'll use the command python3 -c 'import pty; pty.spawn("/bin/bash")' to spawn a more usable interactive shell.
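  • For reference, the one-liner needs its quoting intact (single quotes around the whole -c argument, double quotes inside). Spawning bash only makes sense inside the reverse shell, so the runnable check below just confirms the pty module is available.

```shell
# The upgrade one-liner, run inside the reverse shell (shown as a comment):
#   python3 -c 'import pty; pty.spawn("/bin/bash")'
# Sanity check that python3 and its pty module are present:
python3 -c 'import pty; print(pty.__name__)'
```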

  • After searching the server, we find some interesting files in the login directory, which we'll change to. There we'll run the command cat * | grep -i passw. grep searches for patterns in its input and prints matching lines; here cat * reads every file in the directory and pipes the output to grep, which searches for the string passw (the start of words like password and passwd). The -i switch makes the match case-insensitive, so variants like Password also match. This command gives us the password for the admin user.
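  • The search idea, cat-ing a directory's files into a case-insensitive grep, can be reproduced locally. The filenames and the credential below are made-up stand-ins, not the box's real values.

```shell
# Recreate the `cat * | grep -i passw` pattern on sample files.
mkdir -p /tmp/grepdemo
printf 'theme=dark\n' > /tmp/grepdemo/index.php       # noise line, no match
printf "password = 's3cret_example'\n" > /tmp/grepdemo/db.php   # stand-in credential
cd /tmp/grepdemo && cat * | grep -i passw
```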

  • Next, we'll check the available users on the system by running cat /etc/passwd, looking for other accounts where this password might be reused.
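  • Rather than reading the whole file, a quick filter on the UID field helps spot human accounts, since regular users on Linux conventionally start at UID 1000. A small sketch:

```shell
# Print account names whose UID is 1000 or higher (conventional human-user range).
# Field 3 of each colon-separated /etc/passwd line is the UID.
awk -F: '$3 >= 1000 { print $1 }' /etc/passwd
```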

  • Near the bottom of /etc/passwd, we find the user robert. We can attempt to switch to this account with su robert, using the admin password we found; in this case the login attempt fails.

  • After a bit more searching on the server, we find the file db.php; reading it with cat db.php reveals the credentials for the user robert.

  • Using this password, we can gain access to robert's account and capture the user flag by reading the user.txt file.

  • We can also check this user's sudo permissions by running sudo -l and entering the password; in this case we don't have permission to use sudo. Running id reveals that the user is part of a group called bugtracker.

  • To look for any files associated with the bugtracker group, we’ll use the command find / -group bugtracker 2>/dev/null, where find is used to search for files and directories, / specifies to search the root directory, -group bugtracker limits the search to files and directories that belong to the group named bugtracker, and 2>/dev/null is used to suppress errors like permission denied. This search reveals the file path /usr/bin/bugtracker.
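  • The same find pattern can be tried locally against our own primary group (the scratch path below is made up; a freshly created file normally inherits our primary group):

```shell
# Create a scratch file and locate it by group ownership, discarding errors.
mkdir -p /tmp/groupdemo
touch /tmp/groupdemo/owned.txt
find /tmp/groupdemo -group "$(id -gn)" 2>/dev/null
```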

  • Next we'll want to check the permissions on this file using ls -la /usr/bin/bugtracker && file /usr/bin/bugtracker, a combination of two commands: ls with -la lists files in long format, displaying permissions, owner, and so on, and file determines the type of a file by examining its contents.

  • In the output we notice that the SUID bit is set on the binary. SUID (set-user-ID) makes a binary always execute as the user who owns the file, regardless of who runs it. In this case the binary is owned by root, so when we execute it, it runs as root even though we're on a regular user account.
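  • What the SUID bit looks like in a long listing can be demonstrated on a scratch file (the bit only has a security effect on executables owned by another user, but the s marker in the mode string is the same):

```shell
# Set the SUID bit (mode 4755) on a scratch file and inspect the mode string.
touch /tmp/suid_demo
chmod 4755 /tmp/suid_demo
ls -l /tmp/suid_demo   # the owner's execute slot shows 's' instead of 'x': -rwsr-xr-x
```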

  • When running /usr/bin/bugtracker and entering a random number as the bug ID, we see that the program runs cat on a file in the root user's home directory.

  • We can abuse this functionality to read files in the root user's directories: supplying a bug ID of ../root.txt makes the path traverse up out of the intended directory, and cat prints the root flag.
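  • The traversal can be reproduced locally with stand-in directories (the paths below are made up; on the box the binary effectively appends our input to a path under root's home before cat-ing it):

```shell
# Stand-ins for the intended directory and the file we actually want:
mkdir -p /tmp/travdemo/reports
echo 'flag{example}' > /tmp/travdemo/root.txt   # stand-in for the real flag file
id='../root.txt'                    # the "bug ID" we supply
cat "/tmp/travdemo/reports/$id"     # resolves upward to /tmp/travdemo/root.txt
```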

 
 
 
