CCDC Nationals in San Antonio, TX

posted 2022-04-23 · summary of the collegiate cyber defense competition

CCDC blog post! We ended up getting 2nd place at Nationals.

Team picture holding trophy

Many good vibes received and useful lessons learned. If you're not familiar with CCDC, just read this:

The mission of the Collegiate Cyber Defense Competition (CCDC) system is to provide institutions with an information assurance or computer security curriculum a controlled, competitive environment to assess their students' depth of understanding and operational competency in managing the challenges inherent in protecting a corporate network infrastructure and business information systems. (via nccdc.org)

What? That's nonsense? Welcome to the real world. That's what CCDC tries to be-- the 'real world'-- except, you have twenty disgusting computers to manage by yourself, and you have anywhere from -5 to 10 minutes to secure them before being kerplunked by twelve instances of Cobalt Strike.

I am confused and angry. What are you talking about?

CCDC is an event for college students that simulates working as a computer system administrator: keeping a fleet of computers running and handling IT tasks. The trick is that your computers are intentionally easy to hack, and there's an active team of hackers trying to delete your hard drives.

The event is run by the University of Texas at San Antonio, which started it to address the perceived need for more real-world education ("where education meets operation"). Think of it like this: a computer science degree will teach you how to implement a red-black tree, but not how to cope when your ESXi reboots and starts beaming Nyan Cat.

If you're not technically inclined and don't care about CCDC, the value you'll get from this post is nominal.

CCDC is a smelly degen-fest where corposlaves dunk on freshies and teach them to delete iptables!!!!

In case it wasn't apparent, I really like CCDC. I think it's fun. People used to complain about it, mainly due to the lack of realism in the event, which teaches students bad habits. This is a valid point (I've renamed iptables and rotated passwords on my VPS multiple times a second in the past), but year over year, the competition gets more realistic.

There wasn't even any Solaris in the environment this time around! The weirdest operating system we had was OpenSUSE. And we can use scripts. It's kind of like The Real World.

It has its issues, which I'll discuss later, but I think CCDC is the "canonical" collegiate cyber event for good reason.

I've been trying to qualify for CCDC Nationals for three years, since my first "season" in 2019-2020. Finally, this year, UCF stayed in their region, and we didn't red team ourselves to death with SLAs. It's harder than it looks.

The National event was a ton of fun, and we did pretty alright. Of course, we also made a bunch of mistakes.

Strategy

Generally, the strategy is: change passwords ASAP, automate everything, and cheese as much as you can.

We had two people (including myself) on Linux, so we had two first priorities: database/webapp password changes, and root password changes. I wrote a tool to run bash scripts on all systems (kind of like an off-brand Ansible, you can see it here), which I used to run password change scripts and work through basic box securing (e.g., if there's a php.ini, disable dangerous functions; change shells to rbash; and so forth). The other Linux person worked on the database and changing all creds related to it.
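For the curious, the "off-brand Ansible" idea boils down to something like this. This is just a minimal sketch, not the actual tool; hosts.txt and harden.sh are placeholder names, and sshpass stands in for however you feed the shared root password:

```bash
#!/usr/bin/env bash
# Sketch of "run this script on every box": loop over a host list and pipe a
# local script into bash over SSH. Placeholder names, not the real tool.
set -uo pipefail

read -rsp "root password: " PASS; echo

while read -r HOST; do
  echo "[*] ${HOST}"
  # sshpass feeds the (still-shared) root password non-interactively;
  # pubkey auth would be better, as discussed in Lessons Learned below.
  sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "root@${HOST}" 'bash -s' < harden.sh \
    || echo "[!] ${HOST} failed"
done < hosts.txt
```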

I would've written a lot more scripts, and improved the tooling for automatically configuring new boxes (e.g., correctly escalating to root with sudo/doas). BUT, we were prohibited from making changes to our script repository from 30 days before the regional event onward. The amount of foresight they expect college students to have is baffling. I didn't even know where I would be or what I would be doing in two months, let alone making sure my scripts were "nats ready." If you qualify, they should let you make changes until 30 days before Nationals as well.

As an ancillary task, I manage one or two of the (typically) three ESXi boxes. For Nationals, this became my first priority. (This turned out to be a waste of time-- ESXi systems were out of scope. I'll talk about this more later, but this severely reduces the red team skill ceiling, and makes things less fun.)

Giga Genius Cheese

Our strategy above intentionally prioritizes root password changes and non-scored credential changes, because scored credentials are only needed for a small number of checks, and you can often make them entirely "secure" without even changing the creds! Case in point, I never changed SMTP credentials and we had 100% uptime on it (though I will admit that wasn't intentional).

Consider this. After you change all non-scored password services, what's left?

Scored services on fileshares are always read-only, so FTP and Samba are done. For SMTP/POP3, they could delete emails, but I backed them up, and it's scored on sending new emails, not retrieving old ones. Also consider that you could chattr +i the mail directories. That leaves SSH as the hardest one.

However, if you change all shells to rbash, and export an empty PATH in the .bashrc for non-root users... the experiment is complete. Dwayne's engine, Green. My SSH, essentially nonfunctional. Scored credentials, unchanged. CIAS_Alex, out of a job?
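To make the trick concrete, here's a minimal sketch of what that looks like, assuming a Debian-ish box where /bin/rbash exists (bfett stands in for whichever scored user):

```bash
#!/usr/bin/env bash
# Sketch of the "scored creds unchanged, SSH still useless" cheese.
# bfett is a stand-in scored user; adjust paths per distro.

# 1. Give the scored user a restricted shell.
usermod -s /bin/rbash bfett

# 2. Empty their PATH. rbash refuses to let the user change PATH or run
#    commands containing slashes, so an empty PATH means nothing runs.
echo 'export PATH=' >> /home/bfett/.bashrc
chattr +i /home/bfett/.bashrc   # and keep them from editing it back

# 3. Make mail directories immutable so old mail can't be deleted.
#    (Careful: this also blocks new local deliveries to those files.)
chattr -R +i /var/mail
```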

Absolute buffoonery. This won't stand!

Recently, with the realism bender that the organizers have been on, there have been additional checks that verify your systems are really working (these are only at Nationals, AFAIK), which cuts into some of the cheese above.

Team Composition

We chose to adopt a slightly unconventional team composition. We had three M$ caretakers, two Linux people (like I mentioned above), an inject handler, and a red teamer(!). The red team person was in charge of scanning our network, testing our fixes, and otherwise finding additional flaws and holes that may be easier to spot from the "outside."

If I were to compose a team again, I would probably just go for three Linux people where one is in charge of red-team-esque duties, rather than having a separate job, especially given how Linux-focused the environments are.

Regionals

Services

The regionals environment was themed as "Malachor", an energy company.

Environment details table.

| Network | Name | OS | Service | Ports | IP |
|---|---|---|---|---|---|
| Core | Palo VM | Palo | - | - | - |
| Core | ESXi | ESXi | - | - | 10.30.30.3 |
| Warehouse | ESXi | ESXi | - | - | 172.16.30.3 |
| Monitoring | ESXi | ESXi | - | - | 172.16.35.3 |
| Core | Telos | Ubuntu | POP3/SMTP | 110, 25 | 10.30.30.10 |
| Core | Peragus | Mint | SSH | 22 | 10.30.30.20 |
| Core | Draethos | Fedora | HTTP | 80 | 10.30.30.30 |
| Core | Ordyn | Ubuntu | HTTP | 80 | 10.30.30.35 |
| Core | Despayre | Debian 8 | SSH | 22 | 10.30.30.101 |
| Core | Eriadu | Ubuntu | HTTP | 80 | 10.30.30.103 |
| Core | Callos | Debian 9 | SSH | 22 | 10.30.30.105 |
| Warehouse | Mustafar | Linux | HTTPS | 8443 | 172.16.30.5 |
| Warehouse | Sembia | Linux | HTTP | 80 | 172.16.30.10 |
| Warehouse | Utapau | Linux | SSH | 22 | 172.16.30.15 |
| Monitoring | Zephyros | Linux | HTTP | 81 | 172.16.35.98 |
| Monitoring | Solara | Linux | HTTP | 81 | 172.16.35.99 |
| Warehouse | Cristophsis | Linux | - | - | 172.16.30.200 |
| Monitoring | Kali | Kali | - | - | - |
| Core | Datooine | Windows 2016 | DNS | 53 | 10.30.30.5 |
| Core | Korriban | Windows 2016 Standard | HTTP | 80 | 10.30.30.31 |
| Core | Cadomai | Windows 2016 | HTTP | 80 | 10.30.30.100 |
| Core | Taris | Windows 2016 Standard | HTTP | 80 | 10.30.30.15 |
| Core | Flashpoint | Windows 2012 | AD, DHCP | - | 10.30.30.25 |
| Core | Kessel | Windows 10.0 | - | - | 10.30.30.200 |
| Core | Nadir | Windows 10.0 | - | - | 10.30.30.201 |
| Core | Adinorr | Windows 7 | - | - | 10.30.30.202 |
| Core | Raxus | Windows XP | ? | ? | 10.30.30.102 |
| Core | Felucia | Windows Vista | FTP | FTP stuff | 10.30.30.104 |
| Monitoring | Endor | Windows 10.0 | HTTP, SSH | 80, 22 | 172.16.35.97 |
| Warehouse | Geonosis | Windows 6.1 | ? | ? | 172.16.30.20 |
| Warehouse | Muunilist | Windows 2012 | AD, DHCP | - | 172.16.30.25 |
| Warehouse | Nelvaan | Windows 10.0 | - | - | 172.16.30.201 |

As with previous years, the vast majority of scored services are hosted on Linux (thankfully). We were given two injects to migrate services, and we chose to use them on DOLIBARR and WWW. The database was mostly centralized on peragus, which simplified database/webapp password changes. The only two sites not connected to that database were DOLIBARR and BLOG (WordPress), both of which used local databases.
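As an aside, the webapp credential rotation itself is pretty mechanical. A rough sketch of the WordPress-flavored version, where the DB user name, paths, and a recent-enough MySQL/MariaDB (for ALTER USER) are all assumptions:

```bash
#!/usr/bin/env bash
# Rough sketch of rotating a webapp's database credentials (WordPress-style,
# since BLOG was a local-database WordPress). Names and paths are assumptions.
NEWPASS="$(openssl rand -base64 18)"

# Rotate the DB user's password (ALTER USER needs MySQL 5.7.6+ / MariaDB 10.2+).
mysql -u root -p -e "ALTER USER 'wordpress'@'localhost' IDENTIFIED BY '${NEWPASS}'; FLUSH PRIVILEGES;"

# Keep the app config in sync so the scored service stays green.
sed -i "s|define( *'DB_PASSWORD'.*|define('DB_PASSWORD', '${NEWPASS}');|" /var/www/html/wp-config.php

# Bounce the web server, whichever one this box happens to run.
systemctl restart apache2 2>/dev/null || systemctl restart httpd
```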

Red Team

Historically, we have not felt too much heat from the red team. During the competition, I noticed two malicious binaries running as bfett (because we didn't change the scored user passwords immediately). After killing those sessions and moving the binaries, I didn't hear anything from them again.

Alas, when we received our red team report, we saw that they were able to dump two databases, and we lost another 200 points for an exposed web server with PII. I'm almost certain this is DOLIBARR and WWW (OpenCart)-- both of these Windows machines had databases, and DOLIBARR had a lot of employee PII. We also lost another ~1000 points from red team login access, of which I'd guesstimate around 200 came from Linux, because we deprioritized scored-service password changes for user-level accounts.

Full regionals red team feedback.

Red Team Activity Summary – Team 3

Passwords were heavily targeted by the Red Team; however, it should be noted it took nearly 45 minutes for them to obtain the initial administrator/root password. Admin and root accounts that were duplicated on web applications were also a popular target using those same default credentials. It is strongly recommended that teams change all admin/root level passwords for web applications as the Red Team will certainly attack those entry points.

In your case, the Red Team was able to crack those admin level web logins and used that access to penetrate your organization’s customer, financial, and personnel databases. This PII loss accounted for 400 points of the Red Team penalties listed below. Your team was compromised with MS17-010 on 10.10.10.25 (the Active Directory box). Between MS17-010 and Zerologon the Red Team was able to very quickly obtain credentials which allowed them to access other systems as well.

Recommendations from the Red Team: Change web app passwords, change database passwords, change admin/root passwords. Disable unnecessary services. Use host-based firewalls immediately to secure systems while working on network level protections. Double check all web configuration file permissions. Monitor outbound connections.

Total loss: -1550

Point loss summary: -850 for user or root logins

-100 for exploited systems

-200 for sensitive PII exposure in a browsable directory

-400 for database dump with PII exposure


(We thought we did better than that based on the scoreboard, so we asked Dwayne to confirm the scores):

Looks like part of this is a cut and paste error and part of it is my fault for not explaining the score better. The Red Team discussion is correct. The IP address didn’t get changed due to a copy and paste error – that exploit (along with quite a few of the others) was used against all teams. So they missed changing the IP when they were putting together the writeups for each team.

The Red Team score category on the scorecard also includes two things which I didn’t break out:

So key feedback there is to make sure you are looking for the Red Team and kicking them completely out and make sure your CCS clients can talk to the scoring servers.

1550+300+265 should get you back to 2115.

Injects

Injects were handled very well. Having one or two inject people definitely pays off. I only had to do a couple things so I can't complain. Last year, where I tried and failed to set up Elastic for the entire competition, was a different story.

Full regionals inject feedback.

(I don't have a list of injects from regionals, so here's some feedback on the ones we missed points for).

Inject Discussion - we will only discuss injects where you could have earned additional points.

03 - RDP Box - Jump box authentication error occurred "function requested not supported, could be due to CredSSP encryption oracle remediation". The judges were never able to get past this error despite multiple attempts.

07 - TCP/IP Driver - said none of the systems were affected, but Endor was running Windows Server, version 20H2 (Server Core Installation) which is an OS that was affected by this issue.

08 - Honeypot - no discussion about what was found, really needed to see some discussion of early findings

09 - SSH Lockdown - this was tested 3 times across 4 services, in your case the blocking worked on 4 of those 12 tests. To receive credit the judges needed to be able to connect to the SSH service (ie it had to be working), then they would run the brute force attack, and then they'd attempt to connect to it again. If your block was working correctly they shouldn't be able to connect to the SSH service after running the brute force attack.

10 - Domain Login Report - really want some type of list, not just examples, no real clear indication if there was anything to worry about or not

11 - Odd Email - really nice writeup with some good suggestions, but the data really was from our database and it was a customer of ours.

12A - This was tested 3 times across 3 services. The keys worked on 6 of those attempts.

12 - SSHKeyTake_2 - Instructions contained multiple errors. First attempt your FW was blocking. had 0.0.0.0 for address in pscp instructions, not bad after that, were able to get this working after consulting with the team

13 - Panel Presentation - 4:58, intro'd team, mentioned issues discovered using vulnerability scans, greatest challenge is malicious actors using vulnerable services and pivoting to other systems, recommend containerize systems and services to isolate them more, missing timeline or associated costs discussion

14 - Migration 1 - migrated Dolibar, good writeup /(that doesn't seem like an inject we missed points on, Dwayne)/

15 - VPN - put ovpn up for download, could get to it but connecting produced an error indicating issues with certificates. Worked with team to get file hosted on HTML page, but still get errors after they put html page up. Were never able to connect to OpenVPN server.

18 - Migrate another - migrated WWW on Taris, good writeup

21 - Honeypot Follow Up - response indicated there were no new connections

23 - SSN Search - did some searches, but said didn't find SSNs. There were some in HR system, your response didn't recommend encrypting databases which we expected to see as a security best practice.

Lessons Learned

Overall, I think we did really well, and my Linux partner and I especially got pretty close to an optimal score.

However, we could have improved many things. Focusing just on Linux, the most notable would be our tooling and scripts. I was typing the root password (which was the same for every machine...) every time I wanted to run a script. If I had been able to use pubkey authentication, there would have been no password to replay or keylog, unlike with password authentication. (I added this to our script, but as I was complaining about before, we had to use the older version that was submitted before regionals.)
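The setup itself is trivial, which makes it sting more. A sketch of the one-time key push, reusing the hypothetical hosts.txt from earlier:

```bash
#!/usr/bin/env bash
# One-time setup so later script runs never touch the shared root password.
ssh-keygen -t ed25519 -f ~/.ssh/ccdc -N ''

while read -r HOST; do
  # This is the only step that still needs the password, once per host.
  ssh-copy-id -i ~/.ssh/ccdc.pub "root@${HOST}"
done < hosts.txt

# Everything afterwards authenticates with the key instead:
ssh -i ~/.ssh/ccdc root@10.30.30.30 'id'
```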

Finally, we didn't prioritize ESXi security at all, since we knew the passwords were unique, though I figured that wouldn't be the case at the national event.

Nationals

Finally, after three years of trying, here's the juice... Nationals. An all-expenses-paid trip to an unnecessarily large hotel/resort in San Antonio, Texas. It used to be in Orlando; this one might be easier for UTSA's travel.

On-site lazy river.

Travel wasn't too bad, only one of our flights got delayed, so I'll call it a win. Going with this team of people was a lot of fun. Like with all (most) of the events I do, I feel super lucky to have them.

We got in early in the day (arriving around 1PM) on Thursday, so we spent the day screwing around. The actual competition is ~10AM to ~6PM on Friday and Saturday, awards on Sunday.

Multi-story hotel sunroof(?).

Services

At the start of Day 1, we came into the big banquet hall after getting our team packets, and sat through the mandatory "don't be stupid, and here's how CCDC works" presentation. We then waited outside our rooms (we were fortunate to have one close to the ops room), and the competition started at 10AM sharp.

The first thing I did, before putting my bag down, or breathing, was to change the ESXi password. The ESXi was on a particularly chunky laptop. Of course, it turns out this was for naught, because the ESXis were out of scope.

Here's the team packet picture. Hard to read? Same IRL.

And the grind began. The first two hours are always super frantic, since it feels like a race, regardless of if people are actually in your machines or not. I thought I locked myself out of my PC in the first five minutes, but I was just typing the new root password wrong (typical panic and tunnel-vision).

I spent all of my time in tmux-- blessed be-- hopping between different boxes, renaming tabs, having sessions die as our firewall guy moved the boxes behind the firewall. I ran a bunch of scripts through the tool I wrote (mentioned above), and it Just Worked. It was awesome having a simple capability to run scripts on all systems, and very satisfying that something I wrote actually worked.

Operations room, go here to complain. (via)

Passwords changed, firewall scripts ready to be customized. An occasional beep from the point-of-sale system. Green team wants to see the price of Golden Eye. Can you give me share access to drop the screenshot? Yeah, for the failed logins inject. You should have access now. FTP is down on the scoreboard! Did you turn the firewall back on? Yeah I did, I'll turn it off again, don't know why that isn't working. OrangeHRM DB password is changed, but I can't log in to the webapp. Just change the DB hashes by one letter, they don't actually need login access. Blog is down! Restarted, fixed, was some weird PHP thing. Can you look at this, is this process legit? Yeah that's normal. Can you take care of firewall on Osirus? He's using GUI on Airen, don't kill GDM. Anything not working? Nope, Palo looks good. New inject, 'reverse this binary.' I can take a look at that later, remind me in an hour? Why can I not change passwords on Valentine? Shop is down? You have it? Yeah I'll fix it. Look at this, super sus, a systemd user service running as brettc. I killed and changed creds, you want to set up rbash for that box? Screenshot share is still not working, did you add my IP? I'm working on fixing SSH on the suse box, I'll run them all at once. Did you find any new boxes in the environment? Nope, looks like the map has everything. All webapps are up. For now, don't jinx it.

The eight hour days go by quick.

Red Team

Each team at Nationals has three red teamers, but they'll "spread the love" (as they say) and help one another, point out new vulnerabilities found, share credentials, share exploits, and such.

The stereotypical CCDC experience, the one they want and publicize, is that Day 1 is extremely quiet-- the thought being that the red team acts like a real APT and is extremely stealthy-- and Day 2 is when they take destructive action.

In reality, it was pretty noisy Day 1 on Linux, but Windows did see basically no activity after the first hour. On Linux, we saw constant login attempts, and about an hour in, default cred abuse (for the ones we didn't change, due to our strategy).

This would've worked fine, but I got lazy, and it came back to bite us. The environment had a bunch of game servers in Docker containers, and since they were only checked by orange team (human volunteers), I didn't care enough to prioritize them. If orange team needed to check them, they'd probably be up (since I didn't think the red team had a way to escalate), and we could always revert the image if they did find a way. But it wasn't worth the time to dig out all the persistence for a trash game server.

But in the last hour, all of the traffic from the ESXi (where the game servers were hosted) started to drop, like we'd only get maybe one out of every four packets. The clever thing the red teamers did was change the IP addresses of the game servers to be the same as the gateway, meaning the ESXi's traffic was being split between four different boxes. They didn't have any access to our critical services, but they still took them down. Which is a lesson in humility and thoroughness, I suppose. I still don't know how they escalated.
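In hindsight, a quick duplicate-address check would have surfaced this pretty fast. Something along these lines, with the gateway IP and interface as placeholders:

```bash
# Probe the gateway IP and watch which MAC answers; replies from more than
# one MAC for the same IP means somebody is squatting on it.
# 172.16.30.1 and eth0 are placeholders for the real gateway and interface.
arping -c 5 -I eth0 172.16.30.1

# arp-scan (if installed) flags duplicate responders on the segment directly:
arp-scan -I eth0 --localnet | grep -i dup
```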

We were talking to our red teamers afterwards, and they said that they were only able to do one thing (i.e., shut off a service) before we kicked them out entirely. I was proud of that outcome, but also felt at some level that they shouldn't have been able to do that one thing at all, since it turned out to be very destructive.

Red team rant.

The "drama" at CCDC is that the red teamers don't want to write custom malware/tooling, or red team at all for Windows, since it will get picked up by MS Defender, VirusTotal, or burnt when the white team releases the competition pcaps.

Windows was quiet after the first hour because there was literally nobody trying to get into them (our caretakers disabled SMBv1 quickly). All three of our red teamers were focused on Linux.

I get where they're coming from. I also see how Dwayne/CIAS wants to prioritize transparency. I don't know. If it was my competition, I would throw a couple bones to the red team, so they keep coming back, since competent attackers are the real allure of the event. Although personally I've always been a Windows hater, so I wouldn't mind a *nix migration.

I will point out though, that since ESXis are out of scope, it's essentially impossible for the red team to "win". We can revert forever, one at a time, all at once, as many times as we need to. The boxes are not /that/ vulnerable. If I revert, maybe even take it offline, fix the couple flaws, change webapp credentials, and throw up a firewall, there's literally no attack surface for the red team. They can't poison backups. They can't mess with VM consoles. Then it becomes, drop an 0-day on this webapp or go home. And if you do, I'll just make the filesystem read-only, or put it in a container!

The red team debrief this year, while fun, smelled a lot like defeat. I think the tide is turning in this medium-scale collegiate computer security event, in favor of the defenders.

Injects

Like regionals, I didn't have to think about injects too much. Thanks inject handler!

Full list of injects.

  1. Check DNS
    • Set all DNS servers on all machines to [insert domain controller IP] with a backup of [insert white team infrastructure DNS IP]. (Or maybe public DNS). Pretty easy.
  2. Ecommerce Server Down
    • Actually a very interesting inject, and something they've never done before. They started the event with a service down, and an inject was to get it running and do incident response. I shouldn't have been so lazy with this one; I found the bash history file and didn't look any further. I should've run find /home -iname "*_history*" on every machine, at least. The expected solution was to look for that user's login to the machine, then go to that machine (the email server), and find the "initial access" there. Nobody got full points on this.
  3. RE Billy Goat
    • Again, should have made this a priority. It wasn't a hard RE challenge, but I couldn't do it in thirty minutes without any tools. So we missed out on the points. And for all the RE challenges, really. The only one I allocated time to was the keyboard challenge, and nobody solved that one.
  4. Ransom Demand
    • This was a get-out-of-jail-(not)-free card, where you could pay 750 points to get them to restore your Ecommerce server. It wasn't actually ransomwared, so if you responded by telling them that, and the info from inject 2, you get full points.
  5. Threat Modeling
  6. New Admin
    • This is a typical inject, where they ask you to add a user with a key. When we run competitions, we always give the key they add to the red teamers, so we were very suspicious of this inject. It turns out, even though the key had the hostname kali, there was nothing to be worried about; the organizer (Dwayne) doesn't give the red team any free access.
  7. Central Logging
    • Somehow got full points on this. Thanks Graylog teammate!
  8. RE Keygen
  9. RE State Machine
  10. MAC Addresses
  11. Senior Exec Briefing
    • Present on the ecom/ransomware situation. Thanks inject handler!
  12. IP Search
  13. Lockdown SSH
  14. Panel Presentation Schedule
  15. Deploy Skyplex
    • Add a box to ESXi. Not sure what the point of this one was, really.
  16. RE Key Bohrd
    • If anyone solved this one, it should've been me, because I literally had a QMK firmware keyboard with me. I flashed the firmware onto my keyboard, for some real-world dynamic analysis (I knew you had to type zeus to print the flag-- I found the hook in the macro entry table and everything), and it just bricked it. No points given for partial solution. (I guess the Ergodox firmware was not compatible with my Moonlander?)
  17. Container or Migrate
  18. RE Trophy Room
    • A typo in the flag of this one (not me for once) meant we didn't get the points, lol.
  19. Top 10 Malicious Addresses
  20. Bitcoin
    • No!
  21. Zero Day
    • It's not Spectre/Meltdown, but there's usually an 'explain this CVE' inject.
  22. Data Classification Standard
  23. Unknown Files
    • Another CTF-type inject, password cracking zips.
  24. Email Search
    • grep on the email server.

Fun

Just like with HackThePort, I think one of the best parts was meeting all the other competitors. I recognized a lot of faces, and they were all super nice. Even my mortal enemies from UCF were friendly and fun to talk to. Chatting with another team's coach until midnight on Day 1 was definitely one of the trip highlights.

San Antonio itself was a good experience too. Besides the resort pools/slides and such, there were a bunch of interesting restaurants and tourist attractions (not that we had time for many).

Hotel was pretty close to "the Riverwalk," which is basically an outdoor mall.

I also really enjoyed talking to the red teamers at the Day 1 dinner and the Day 2 social. It's hard to portray or give a summary of these intangibles in blog-form (or perhaps I lack the literary skill), but it's like a computer convention, only more focused and enjoyable.

Panoply

There's a just-for-funsies event on Sunday, after the competition closes, called Panoply. It's king of the hill, where they use images from their previous environments. You get two hours to be the red team. The credentials aren't easily guessable (with the exception of some webapps), but you do know what services are scored. At 192.168.100.100 internally, you get a page that looks just like the public one, except there are scores for each of the ~50 boxes-- either up, up and compromised (each team has a different icon), or down. You get scored if your flag is accessible (via HTTP, FTP, SMB, whatever the check is). You don't get any points if it's down.

The ideal strategy here seems to be to find the easy boxes (anonymous-write FTP, MS17-010, random CVEs) and automate exploiting them, since chances are they'll get taken down later by people fighting over them. Boxes that are down for a couple of checks automatically get reverted. That might be enough to win, but to lock it in, you should leverage your access to dump creds (Invoke-Mimikatz, /etc/shadow, whatever), and use those to get access to the "hard" boxes (the SSH- and updated-RDP-only ones).
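To make the "automate the easy boxes" part concrete, it can be as dumb as a loop you re-run from cron or watch. A sketch, where flag.txt and ftp_targets.txt are made-up names:

```bash
#!/usr/bin/env bash
# Panoply-style "keep re-dropping the flag" loop for anonymous-writable FTP.
# flag.txt and ftp_targets.txt are hypothetical; re-run this periodically so
# reverted boxes get re-flagged.
while read -r HOST; do
  curl -sS -T flag.txt "ftp://${HOST}/flag.txt" --user anonymous:anon@example.com \
    && echo "[+] ${HOST}" || echo "[-] ${HOST}"
done < ftp_targets.txt
```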

Of course, my words are weightless since we did not win this event. We got fourth, which I would blame on not automating or going for the easy boxes at all. We got some credentials from a Linux box, which I used to log in to one machine and pick up some points towards the end. But getting a box early, or multiple boxes, and letting the points compound is much more impactful.

This event didn't have "prizes", but now that I think about it, CCDC doesn't offer any real prizes either. Weird that every other "prestigious" cyber security event has some kind of prize, typically monetary. I guess CCDC rides on its history.

Why Did We Lose?

So why did we lose? We got 2nd place, UCF got first.

Well, as usual, any number of things we missed or could have done better would have helped make up the point gap (which was about 700 points). If I had to point at something, it'd be misprioritizing tasks combined with tunnel vision.

The first missed opportunity was getting the ecommerce server up, which I very briefly looked at before moving on to the standard password changes/hardening stuff. Only when our inject handler asked me to look at it again did I find the first part, which was really simple to find (just .bash_history in a user's home folder). From there you were supposed to follow the source SSH IP and find the other scripts on a different machine, but I didn't do that, since I thought just the history was good enough, and the inject wasn't a priority for me. If we had gotten that up earlier, it would've been ~100 points. The second major source of points lost was not leaving enough time for the RE challenges, which we could have gotten if we had left more than 30 minutes for them (~400-600 points). And the last one was just a bummer: we submitted the flag for the last RE challenge with one character typo'd, netting us zero points (200 points).

Other than that, less sloppy sysadmin work and better management of the Docker servers would have been more than enough to land first place. But UCF really did a great job and didn't leave much of a margin. Although, I'm sure they have some (smaller than ours) list of mistakes as well.

Summary

Our scorecards.

Quite nosy of you.

As usual, everyone prefers Linux. Fun social event. Good educational experience. If you have the chance to do CCDC or something similar, I'd definitely recommend it.

If you have any questions or feedback, please email my public inbox at ~sourque/public-inbox@lists.sr.ht.