Good Work in Cyber Security

opinion · posted 2026-01-05 · good hacks and hack for good

Most cyber security jobs suck.

Cyber security itself is awesome. I feel there's almost no technical subject more cathartically rewarding. The trouble is with cyber security jobs.

To demonstrate the problem, perform the following simple test: take a cyber career debutant(e), and task them with finding a job. They will likely return from the mines of LinkedIndeed with one of the following:

Yes, shitty jobs exist in every field. But it almost feels like a non-shitty job is the exception in cyber security. Large swathes of cyber job categories are seemingly invented only to satiate the whim of some mechanical beast, whose maw swings wide for only Management Of Risk Perception, fashioned by committee and entirely immune to human reproach.

why??

"Security" is ambiguous. It can be a property applied on top of an existing structure, a verb, e.g., "this house has been secured," or it can be the subject of the work itself, an adjective, e.g., fashioning a Secure lock.

The market for cyber security almost entirely comes from the first category (applying security to systems as a desirable property that one can augment or exploit), whereas there is comparatively much less $$$ in the pursuit of capital-s-Security.

To abuse the lock analogy, you could be tasked with securing a home, or neighborhood, or town, which would likely be composed primarily of putting up scary "Secured By Futile Security Inc" signs and circulating fliers about the latest Master lock getting popped with nothing more than a stern look. Your goal is not to actually "solve" any security problem, but rather to ratchet up the perception of some measure of security, safety, or risk, whilst not actually solving any technical problems at all!

Compare this to actually working on Security itself, in this case, the locks: you could create a lock that is nigh invulnerable if not totally immune to picking and other forms of forced entry. It would be novel, technically rewarding, and more impactful to security.1

So just work on the locks, right? Unfortunately, your locks cost ~10% more than the submissive Master locks, so almost nobody buys them. And there's like 10 lock companies but a million neighborhoods, so in terms of jobs, the market is flooded with ineffectual openings.

In short, "security" is most often applied to a problem rather than being the problem to solve. For example, it's applied:

  1. as a bandaid, a temporary fix to a problem that only serves to stave off consequences and does nothing to solve the root problem, or
  2. as a weapon, used to shift or confirm power and control

Computer security as whole would improve if we were able to consciously shift the application of skill towards solving root problems, even in the absence of direct economic incentives.

I argue for the idea of "Good" work, which means work in computing that is novel, elegant, attacks a problem at its root, and has a positive impact on people, typically based in systems design and making things.

the forces

Obviously nobody wants to sit at work and run Nessus scans until their quivering inbox gives out and they contract meningitis. Nonetheless, people do. So what unholy forces facilitate this?

The biggest factor is misaligned incentives: your security role was not created to make a positive impact nor foster some feeling of personal satisfaction, it was created to "solve" a business problem, in the cheapest way possible and with minimal foresight. Working as intended!

Secondly, organizational incompetence: if the company does not know anything about cyber security, the chance that they are positioned to solve problems effectively is... slim. Incompetence hires incompetence, and poor structural decisions typically compound in a self-reinforcing loop.

three columns

Please permit me an oversimplified model; I mentally classify cyber security work into three columns:

Bandaid Work      Good Work              Destructive Work
Checktesting      Security consulting    Merc red teaming
Scanners          Secure systems dev     Crime
Frameworks        Implementations        Grift

I visualize it like this because "Bandaid" and "Destructive" work are aberrations of "Good" work, one skewed towards ineffective bolted-on computer work, and the other using computers against people.

"bandaid" work: security is added on top

Computer security is just systems design. If the art of computer security were to be perfected, it would be considered an integral part of digital design, much like aviation safety is to aviation, rather than being glued on top.

Of course, this is rather inconvenient for most people. In "Bandaid" positions, the company's primary goal is not to further the field of security, nor to make more secure and empowering devices for the people of the world, and definitely not to foster trust in computing. It's to sell product.

For "Bandaid" roles, you are adding "security" on top of an existing structure, simply because it is the shortest path to being satisfactorily "secure" as a business requirement. Companies, as a construct, are not interested in developing genuinely more "secure" systems; they are only interested in doing the car crash calculus to make the price of lawsuits less than the price of doing an entire recall.

It's easy to divide up the services these businesses need into two categories, both of which are focused on managing risk:

  1. Do you want to quantify your "risk" (red teaming)?
    • Much like the gamers, companies live in a society. Sometimes they need certification, validation, or other boxes to be checked.
  2. Do you want to reduce your "risk" (blue teaming)?
    • Prevent leaks, manage all the messy computer nonsense, minimize what bubbles up to the C-suite.

They think of computers as a tool to drive, like a car: something that carries unchanging, inherent risk and operating costs. They are uninterested in computation outside of it being a tool to more efficiently sell.

Here, the primary objective of cyber security employees is to continually manage problems rather than quantifying or truly fixing them. Fixing them is slow, expensive, and has dubious monetary incentive. We need bandaids.

Hey man, are you gonna be an asshole? Obviously I'm going to make a living by doing something that ACTUALLY fulfills a business need rather than, like, diddling with memory allocators and operating systems that nobody uses.

Very true. It's totally fine to pursue a career just to make a living or fund other pursuits. I picture the situation like this:

A graphic showing two people, one who desires fulfilling work and the other who is pursuing a job to pay rent and buy food

"Bandaid" work IS necessary. For now. Otherwise, organizations wouldn't be paying people to do it. The subtlety is that, if everyone does bandaid work forever, nothing ever improves. We collectively are Sisyphus responding to the economic call for "That Rock, Needs to Be, Higher, Like, Right Now".

For a lot of people, there is no need to imagine, Sisyphus is happy. Sisyphus gets a fat paycheck and can buy a PS5 to play Cyberpunk 2077 and Madden 2026. But other Sisyphuses are pissed that their work is endlessly repetitive or lacks a greater impact, and want to pull some sense of fulfillment from the dark clutches of the void.

"bandaid" work in: vulnerability management

Imagine instead of writing software, we are baking mixed berry pies. When we collect berries from the hypothetical forest, some of them are poisonous. In fact, pretty frequently, we sell pies with some amount of poison in them, which could end up harming the consumer in some rare cases, or in large, infrequent scandals. Upper management sees this as a real issue to tackle in Q3.5 FY29, so they hire a poison master to hire more underlings.

Anyway, the pies, we cookin' em, and you bet they are poisonous. The pie poison management team sets up some metrics to report up the chain, number of poisons found, severity of poisons. They set up some poison dashboards to try and quantify how much poison there is.

Some of the poison experts have noticed that their efforts have not been effective in actually reducing the amount of poison in the pies. Unfortunately, the poison experts are viewed as disparate from the berry pickers; security is a "property" to add on top of "developer" work, and the two shan't ever intermix.

Poison experts also internalize this divide, and don't suggest solutions that involve them joining the berry picking team. Instead, they try to solve the problem from the outside with long sticks of causation, like making a ship in a bottle, constructing ever more convoluted layers, filters, and roadblocks for the developers, in an attempt to filter out security bugs.

But, they need to be careful with their filters. Anything too invasive would not be well received, because that would negatively impact the bottom line (production of pies).

They know that people will still buy the pies, even if they have some poison in them. Poisonous pies don't meaningfully harm the bottom line. The customers aren't exactly poison experts either.

Poison experts frustrated with this situation may leave or be fired. Those who stay probably don't really understand berry picking at a low level, and more importantly, they aren't interested in that. They understand that their role is to manage the perception of risk. They are interested in checking boxes, providing a social safety net for the company, and validating their role by having something to point to when evaluating risk charts and qualitatively processing those hierarchical pie analyses.

After all, if the security experts ARE better at writing secure code, why are they not doing it? They should be! Or at the very least, the developers, er, bakers, should be incentivized to put fewer bugs in their pies, even at the cost of baking velocity, so the baking company can actually get the risk reduction they wanted. Then, the poison experts who don't actually want to bake pies can find a better perception-of-risk-management job, and we can all focus on the question of whether or not baking pies is "Good" work.

"bandaid" work in: frameworks

Those who make cyber security frameworks have an admirable mission. They see swarms of companies trying their hardest to be "secure" despite their own lack of experience in the area. They also realize their personal influence is disproportionately small. So: make a PDF that has all the right details, and all the right checkboxes!

This starts sucking when they are taken as gospel. On top of being incredibly dry and boring, they almost always, by design, miss all of the nuance in real systems.

Frameworks often take an ideal system and work backwards, describing all the components with none of the first-principles reasoning. It's not their fault; the task is impossible. Of all the secure systems one could create, how can you make a list of boxes to check that describes them all? At a certain point, you either describe every single detail of the system, and thus you have written the system, or you describe everything it isn't, and thus you have also written the system.

Here's a bad analogy. You're trying to help people be better bodybuilders. It's complicated. But by looking at the winning competitors, you know a few things that work:

And then you can sit around and argue with people about whether you should build a matrix on level of sweat or size of water bottle. And release a 600 page PDF that some poor soul is responsible for "implementing" (due to the new federal regulation for bodybuilding contractors to be NIST-800-1337 compliant) with little to no actual chance of improving their bodybuilding. The resultant "system" produced by the framework, if taken at face value, is likely to be entirely synthol injections and motor oil.

Frameworks and massive PDFs can be useful, when used in the proper context. Implementing something to spec, (maybe) financial and health system requirements... But many are just quick cyber tips dressed up with undeserved authority.

"bandaid" work in: checkbox pentesting

Pentesting is a very appealing proposition. It's fun, you get a lot of (relative) freedom, good compensation, and you're in-and-out.

In contrast to the "Good" red teaming, we have checkbox pentesting, or as I now portmanteau it, checktesting. The scope is unlimited, a huge network, all services, anything goes, no problems are actually fixed at the root, the findings are common and many, and of course the same type of issues and misconfigurations will pop up around the network in the future. The security of the company is not being meaningfully affected.

The company is trying to quantify their cyber security risk posture for their stakeholders; it's practical for improving optics or meeting regulatory requirements, but it's ultimately a hollow and lifeless procedure, not to mention, ineffective.

"bandaid" work in: SOC and IR

Security Operations Center, the SOC. It's the quintessential high-level blue team job. The wizards ascend unto splunk heaven on the wings of forty-line queries and event IDs.

If you are a business that is SERIOUS about reducing your "risk," the SOC is one of the best bandaids you can choose. It requires a lot of skill and technical knowledge to effectively run one. But, in the end... it's "Bandaid" work. In an ideal world, we wouldn't need one, at least not at the same scale that is required today.2

But yes, we don't live in an ideal world, and businesses do need bandaids. Work in SOC may feel repetitive and unfulfilling, but at least the waters aren't usually muddied by leadership incompetence.

Incident Response, IR, is kind of like SOC+. When done as a third party service, you get all the fun exploration of the SOC, with less stress from being constantly responsible for breaches. But it still falls into the same category: highly demanding and efficiently applied "Bandaid" work, subject to the same mindless Sisyphean repetition.

from "bandaid" to "good" work

To turn "Bandaid" work into "Good" work, focus on fixing the problem closer to the root, and work on building systems that are inherently secure. Sandboxing, allowlisting, capabilities, memory safety, dependency reduction and verification, facilitating simple design, effectively composing networks...
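To make one of those concrete: "allowlisting" done at the root means the system refuses to express the insecure state at all, rather than filtering it after the fact. Here's a minimal sketch in Rust (all names, like `AllowedHost`, are invented for illustration) of pushing an allowlist check into the type system, so nothing downstream can forget to perform it:

```rust
// Sketch: a host that has provably passed the allowlist check.
// The only way to construct one is through allow(), so connect()
// can never be called with an unvetted host.
struct AllowedHost(String);

fn allow(host: &str, allowlist: &[&str]) -> Option<AllowedHost> {
    // bool::then builds Some(...) only when the check passes.
    allowlist.contains(&host).then(|| AllowedHost(host.to_string()))
}

// Accepts only the proof-carrying type, not a raw string.
fn connect(host: &AllowedHost) -> String {
    format!("connecting to {}", host.0)
}

fn main() {
    let allowlist = ["internal.example"];
    match allow("evil.example", &allowlist) {
        Some(h) => println!("{}", connect(&h)),
        None => println!("denied"),
    }
}
```

The design choice here is the point: the check lives at the boundary once, and the compiler enforces it everywhere else, which is "inherently secure" in miniature rather than a bolted-on filter.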

I don't know what any of those things are. How can you go about finding "Good" work if you don't know enough to tell charlatans from chads?

It is tough to feel out what you don't know. It requires a lot of stumbling and rethinking. The easiest way is to have a competent and patient mentor, but those are hard to come by.

A fair number of people don't actually know what they want to do ("writing tooling", "software dev", "webapp dev..."). When it gets down to it, they either want to maximize their salary to effort ratio (which again, is fine), or they want to do "Good" work. Writing webapps or software or tools is simply the mechanism by which you get there. That's like having a dream to do something with hammers, when really you want to build sustainable or beautiful houses.

Here is a process that can be used to slowly build up your worldview:

  1. Realize that you are a monkey on a floating rock
  2. Evaluate everything you use every day (maybe your phone, laptop, pc)
    • Does anything stand out as Incredibly Well Made?
    • Does anything smell like you would be interested in making stuff like it?
    • Has anything made a legitimately big positive impact in your life?
  3. Read history of whatever you selected (you don't have to be an expert)
  4. Set a goal to work on something similar to whatever is top of your list

Let's say you are a student entering university and you really enjoyed math in high school, so your degree is in math. But you also really enjoy computer and computer security from the stuff you read online, so overall you're just not really sure what you want to do.

  1. AHHHHHHHHHHHHHHHH
  2. Evaluating...
    • My phone is awesome, it's an interactive piece of glass
    • It's so crazy how I can message anyone in the world at any time
    • How do cell towers work? Is that magic?
  3. That's wild how Xerox PARC pioneered the modern user interface, and how cell towers are literally in "cells"
  4. I'm going to make a chat app that only works when you have poor service

After the gag chat app, they might become interested in radio frequency stuff or digital signal processing, read data from a weather satellite, venture into cryptography, complete cryptopals and cryptohack, play CTFs, end up finding some flaw in a cryptosystem or inventing their own.

Of course a real person's evaluation of the world wouldn't be so clear-cut. And the insight of them finding the overlap between math and computer security interesting would probably only be realized in hindsight.

It's rough, because the end projects you come up with to begin bridging the gap very likely won't be "Good" work. But they will give you an idea of whether you intrinsically enjoy the technology and the creation as well. Trying to go from hammers to houses only works if you enjoy the construction process, and the only real way to know if you like construction is to try building something.

For example, zero trust in networking (if applied correctly and doesn't suck) reduces a lot of unnecessary overhead and "Bandaid" work by creating a system that is inherently more secure. For personal use, think something like Tailscale or Zerotier or Yggdrasil.

Another example: the proliferation of memory-safe languages (that aren't slow as dirt) has vastly reduced the number of insane memory corruption bugs that people have to deal with in development, and eliminated huge swaths of exploitable bugs that the developers didn't catch.
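As a tiny sketch of what that buys you: in a memory-safe language, an out-of-bounds access is a deterministic, handleable error instead of silent corruption. A minimal Rust illustration (the "attacker-controlled index" is hypothetical):

```rust
fn main() {
    let buf = [0u8; 4];
    let idx = 7; // imagine an attacker-controlled index

    // In C, buf[idx] could silently read past the buffer.
    // In Rust, the checked accessor returns None instead of corrupting
    // memory (and even unchecked-looking buf[idx] would panic
    // deterministically rather than become an exploit primitive).
    match buf.get(idx) {
        Some(b) => println!("byte: {b}"),
        None => println!("rejected out-of-bounds index {idx}"),
    }
}
```

The class of bug doesn't become a finding to manage; it stops existing, which is the "fix it at the root" shape of "Good" work.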

"Red teaming," when applied as "security consulting," can be fruitful as well. You help projects and companies come up with better designs (either by writing software for them, or exploiting their vulnerabilities). Or, security consultants can provide third-party audits for transparency (Cure53 does a lot of this work). But, whether this is "Good" really depends on the project in question, and the manner in which you fix vulnerabilities-- depending on the organization, approach, and depth, this could fall into "Bandaid" work as well.

What in the name of Fabrice Bellard is a "Good" project? I thought "Good" projects was just what you listed? Are you trying to con me into thinking the "Good" term is applicable to any field, or are you just incapable of writing a coherent thought?

In my humble honest honkin' opinion, "Good" work in any field, can be found by measuring highly on metrics of technical elegance, novelty, and impact (which often comes with good people, for free). Contrary to the main-character-syndrome hustle grindset, there are a huge number of these projects.

For related musings, the "Hamming Question" is, paraphrased, the following: what is the most important problem in your field, and are you working on it? This speech transcript focuses on like, research trying to win the Nobel prize or whatever, but a lot of the social and psychological discussion applies to our lower-stakes environment.

Last one, that hasn't caught on (yet), is the use of immutable and reproducible desktop environments (Nix, Guix), sandboxing, and "swarm" methodologies for managing enterprise networks at scale. Currently we have massive ugly AD trees and GPOs that would blow your socks and possibly pants off, and a corresponding team of belt-wielding people to manage it, just for the pentester of the month to run Bloodhound and find a series of 31/n jumps that lets them change DA creds, where n is the likelihood that someone has the password Spring2026!.

I'm under no illusions that the technology is ready right now, or that the deeply entrenched enterprise networks would change any time soon, but ~imagine~ how much better the experience and network security would be if all policies and applications were installed simply, auditably, and declaratively rather than a hodgepodge of XMLs, custom software centers, and patch Tuesdays.

"destructive" work: you make power

On the other hand, some organizations are hyper-focused on "red team" security, because selling product is not their goal. Their goal is power. These are the nation states, the zero day brokers, the ransomware and botnet groups, and anyone else whose business is influence. In this case, the "Good" work metrics depend on whether that power is being used in ways you agree with.

If you have no agency in choosing what you work on, no visibility into the outcome, no belief in the goals of your organization, and are not working towards creating more secure systems in any way, your role is closer to that of a mercenary. Probably one who is having a lot of fun, but feeling hollow in the long term. Think, NSO Group, where you make an epic exploit that ends up being used to assassinate journalists.

On the other hand, you can just straight up do crime with computers! It's a novel concept, I know. Crime is pretty obviously a destructive job, so I won't bother elaborating. In any case, I can't imagine that ransomware gangs are reading blog posts on fulfilling work while exploiting the latest version of the UPnP Asswipe daemon to mine Bitcoin on your toaster.

There are more subtle ways to be destructive, and unfortunately it's usually done with good intentions. From doubling down on inefficient cryptographic proof of work schemes in the name of personal privacy to unsolicited advertising of your startup's webapp scanning services, there's a lot of honest hard work in the world that only makes it a worse place.

the appeal of hacking

"Hacking" (red teaming, exploitation) in a vacuum, is super fun. It's an intellectual puzzle, spanning across the entirety of systems design. What is the social appeal? What shining star is that twinkle in the cyber security noob's eye, actually reflecting?

As a subculture, from the Tech Model Railroad Club to Phineas Fisher, "hacking" serves as a method of using human ingenuity to undermine existing power structures. The trick is, undermining power structures is only "Good" work if you believe the existing one is evil and/or apathetic to the benefit you receive from exploiting it.

A lot of people get into "hacking"/cyber security/infosec because they love this picture. They love the story of the underdog, the hacker codenames, the cyberpunk aesthetics. But they don't actually have an enemy, or a power structure they can subvert via technology, or even an environment where being good at computers gives you immediate superpowers (like it did with phreaking). Even if they do have one of those, they understandably aren't willing to sacrifice their life and freedom to do vigilante hacktivism like Phineas Fisher (which for the record, I am not encouraging and I don't think is rewarding in the long term).

Too often these are also the same people to say, "I don't know how to code, I only do hacking," or, "I never understood networking, I just use nmap." They've missed out on huge parts of the joy of computer security, namely, deeply understanding the systems, and actually making things, both of which are crucial in "Good" work.

The original "hacker" culture languishes, and these people aren't going to find much of it. Computers aren't a new frontier for the quirky outcasts to make their home, they're "critical business systems," and messing with them makes you a (quirky) criminal. Manipulating systems of power, commerce, or connectivity can no longer be done at scale and in good faith. The vestiges of "hacker" spirit thrive only in niche in-between spaces, writing free software, cranking on demos, playing CTF, sailing the ocean, or modding games.

from "destructive" to "good" work

I'm doing "destructive" work, but I don't really care, I just want to have fun and do big hacks...

I don't think preaching personal responsibility is very persuasive. So, if that's really your only goal, sure, whatever. Do big hacks indiscriminately. But if you love computer security, and feel like you aren't having a positive effect on people, or can't see the results of your work, just know that there are better options out there.

To do "Good" work rather than destructive, leverage your skillset to do some kind of security consulting for "Good" work projects, design inherently secure systems, create new ways to reason about computation, write tools to find bugs, and so forth. Exploiting systems requires the same deep knowledge that is required for solving complex problems closer to their root, or for developing intelligent abstractions and new ways to think about computers.

"good" work: the myth of red vs blue

"Hackers" usually divide themselves into a dichotomy-- are you a red team offensive hacker, or a blue team defender (who probably also wants to be called a hacker)?

These labels aren't helpful for doing "Good" work. They are only useful for organizations hiring people: do you want to exert influence (red team), or what flavor of risk management do you want (blue team, pentesting)?

creation; cyber security is inherently destructive

"But Mr. cringe.sourque.com," you say, "Hackers don't actually MAKE anything! 'Good' work requires understanding and crafting? Isn't being an epic cyberer a purely destructive act? Even the 'blue teams' don't actually CREATE anything! How can we expect to foster prosperity for our progeny when we spend the majority of our time revelling in the carnal joys of rapid and unplanned disassembly?"

"Shut up, nerd," I beam from my obelisk.

Yes! No matter which clan within "bandaid" or "destructive" work you assign yourself to, you are shackled and unable to meaningfully create as your day job. You serve business functions: managing risk, or creating power.

But what if we could go above the labels? Cyber security is improving computers, usually through systems design. You create and critique systems.

"Red" teaming is only useful insofar as illuminating system design defects. "Blue" teaming is only useful insofar as building an understanding of how people currently use systems. The rest, above and in between the labels, requires creation, and probably "Good" work. For example, you might red team to find a persistent flaw in some authentication scheme, blue team to understand why that scheme is required, and transcend labels in creating a better one.

where do you find 'Good' work?

📞 (ring) Reality is calling: people need money to live!

The ideal situation is to be gainfully employed somewhere that allows you to engage in "Good" work, and experiment in tangentially related work, be it purely "offensive", purely "defensive", either for its intrinsic fun, or the value you see in it when applied to secure systems work. Like:

  1. Research labs
    • Probably underpaid, research labs can be a great choice for finding "Good" work, since they often afford more intellectual freedom and are poised to do R&D that isn't necessarily going to end up as a sellable product. It really depends on their incentives and current staff.
  2. Small consulting firms
    • Same deal as research labs. More commercial. Excellent choice if their ideals align with both "Good" work and financial motives.
  3. Big Famous Cool Companies
    • Sometimes large companies have leadership that recognizes the value of "Good" work that doesn't necessarily align with their business incentives, and intentionally facilitates it in order to retain their talent.
  4. Start your Own Company
• If you find existing opportunities constraining, and think you have gumption along with ideas and ideals that align with "Good" work (or whatever work) as well as market incentives, it could be a good choice to start your own company.
  5. Become an Effigy of The People
• A growing number of online denizens have found comfort in the arms of the anonymous masses. Using some funding platform (namely Patreon or GitHub Sponsors) they can make a living. Notable examples include Justine Tunney and Andreas Kling.

The world isn't black and white, there is a universe in which you could do "Bandaid" or "Destructive" work just to make rent, or because it's fun, and maybe apply that skillset to "Good" work later. Or maybe not! It's your life. But I think with a bit of elbow grease and some Good work, we could get to a better place.


  1. I realize this analogy may be unconvincing since you can always open a door if you just kick hard enough. ("This house is vulnerable to missile", as my friend put it). The beauty of cyber is that it is possible to develop entirely secure systems. Networking is magic and provides an alternate layer of reality in which, modulo any actual physical interaction, we can achieve honestly 100% secure systems. We just almost never do! ↩︎
  2. How much of SOC work boils down to dealing with someone else's bad decision, if you go deep enough? Sure, you're ten layers into investigating some spearphishing campaign with a malicious Word doc. Question: why do we tolerate Microsoft Office docs being little executable VBS bombs? Or you're patching Exchange against the latest unauth RCE. Are we seriously so entrenched in corporate software that we can't make anything better? Oh, this user installed some adware that harvests their PKI signing key. Why don't we have reasonably sandboxed software by default? I wish SOC analysts would harness their rage from dealing with the same attack vector four hundred thousand times to write something better. ↩︎
If you have any questions or feedback, please email my public inbox at ~sourque/public-inbox@lists.sr.ht.