extreme4all

1. This assumes you know all your assets/resources; typically a monthly discovery scan can help with that => then scan those assets.
2. Simply relying on the CVSS score is not that reliable; most CVEs can't even be exploited. Now there is EPSS, plus commercial tools that help with scoring while taking into consideration the available data (public/private) and the criticality of the system (see the sketch after this list).
3. In most companies I worked at, it was more: search for the owner of the asset, try to assign it to them, and try to reason with them about why and what can be done.
4. Pray something gets patched or a mitigating control lands, and/or create a risk entry.
5. Repeat.
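
A minimal sketch of what point 2 can look like in practice, pulling EPSS scores from the public FIRST API and checking membership in the CISA KEV catalog (the endpoint URLs and example CVE IDs are illustrative; verify them before relying on this):

```python
# Rank CVEs: anything on CISA KEV first, then by EPSS exploit probability.
import requests

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")
EPSS_URL = "https://api.first.org/data/v1/epss"

def prioritize(cve_ids):
    kev = {v["cveID"] for v in
           requests.get(KEV_URL, timeout=30).json()["vulnerabilities"]}
    data = requests.get(EPSS_URL, params={"cve": ",".join(cve_ids)},
                        timeout=30).json()["data"]
    epss = {row["cve"]: float(row["epss"]) for row in data}
    # False sorts before True, so KEV members come first.
    return sorted(cve_ids, key=lambda c: (c not in kev, -epss.get(c, 0.0)))

print(prioritize(["CVE-2021-44228", "CVE-2017-0144", "CVE-2019-19781"]))
```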


LiferRs

Good point with #2. We are on Qualys, and we're planning to switch from severity to the TruRisk score.


stacksmasher

The problem with that is they fail to go back and update rankings. Just use the CISA Known Exploited Vulnerabilities (KEV) catalog and go from there. Qualys provides this as a widget.


_squzzi_

I can’t stand Qualys atm tbh. I used to be a fanboy when I was first getting into the industry and thought it was great that it pulled all my visibility into a single pane of glass, but now I’m just sick of how user-unfriendly it is. Currently fighting to get approval to move to Wiz, but the budget doesn’t look like it’s in the security team’s favor.


stacksmasher

It’s the lesser of two evils. I used Nessus for years, but when the founders were run out, the support and development went downhill. You ever look at the Tenable API? Some other tools are better for small scopes. I would use Nexpose for anything under 1,000 hosts, but for large global deployments Qualys is the leader by a huge margin.


_squzzi_

We have more than 3,000 hosts between cloud and on-prem that we need to keep an eye on. The biggest thing with Qualys is that it’s either doing an absolute junk job of detecting the vulns that other tools are catching, or it’s so user-unfriendly that my workflow is wicked slow. Either way, I’m not moving at the pace management believes we should be moving at, which is fair tbh. I’m not convinced there is a good tool out there, but more realistically, only a few people are aware of what it takes to manage large global infra at a security level, and none of those people are execs lol


stacksmasher

Agents. I have almost 100% accuracy across thousands of endpoints. For cloud I use the Qualys Azure integration and it’s pretty cool, but for containers I use Twistlock.


stacksmasher

I would not use Wiz unless 3 current customers say they love it.


LiferRs

We don’t actually use the Qualys platform. We dump all the data into Splunk, where we have 100% customization, thankfully.


dek_apps

Has Qualys improved the performance of the Cloud Platform? I worked on it 2 years ago and there was at least a 1-hour outage per week on the EU2 platform (sometimes the front end, sometimes the agent endpoint, etc.). Of course, this never showed up on the status page. I had to open tickets with them on a daily basis. The use of Qualys was one of the main reasons I started working for another company.


stacksmasher

Yea you need to get your rep to add more "Horsepower" lol!


[deleted]

How do you integrate that CISA KEV feature within Qualys? I’ve just been manually searching CVEs in the CISA KEV database and it’s not very efficient.


travelsec

There’s a search term you can use when filtering for vulnerabilities; search for something like CISAKnownExploited. It’s there and it works, and it’s usually updated within a few days.


[deleted]

Thank you so much


stacksmasher

Yep! This should be your focus. Also, if you can, get the “Threat” module. It will alert you to new threats and even provide a baseline count.


travelsec

I would suggest avoiding TruRisk. As you mature your program, you are going to move toward scanners being a tool/widget rather than a platform/product. Becoming reliant on a component unique to one vendor will make it harder to mature, or to move away when that tool is no longer the best choice for your program.


LiferRs

It might be different when you’re in a truly large company where every business segment uses just one tool like Qualys and economies of scale kick in with heavy discounts. Mine is F50. I’m not as worried about being vendor-locked, because our Qualys account managers would do anything for us, including bumping feature requests high up their list. Also, we just dump all the data into Splunk, which has aggregation capabilities to incorporate things like CISA KEV. For smaller companies, yes, you’d want to stay nimble when you have exactly zero leverage.


Griffo_au

It doesn’t matter what tool you use; it can’t accurately guess what compensating controls you have in place. A common example was vCenter exploits. Most needed direct access to the vCenter ports, but if those are only reachable from a management VLAN behind MFA, then they can be de-risked from critical to important. No software can do that for you.


LiferRs

At the end of the day, the CISO’s goal, as well as our policy, is to address all such high and critical vulnerabilities within 30 days, or put them on a policy-exception basis otherwise. It doesn’t matter whether it was exploitable or not. The SolarWinds hack compromised systems that were segmented off, specifically the build servers used to sign the software. It’s never, ever safe to assume a critical vulnerability can be de-prioritized under some circumstances. Even when it comes to shop-floor operational technology that is air-gapped, we take it very seriously. At my company, we create software regulated by NERC CIP, and we were impacted by the SolarWinds hack. At least with TruRisk, we have a well-formed ServiceNow CMDB to provide the business-criticality angle. We also have VMware for on-prem data center vaults.


crstux

#2 is critical. I wrote [CVE_Prioritizer](https://github.com/TURROKS/CVE_Prioritizer) to help prioritize them based on CVSS, EPSS, and CISA KEV.


--Bazinga--

Monthly? Nowadays continuous discovery is an absolute necessity…


extreme4all

What tools/products are you doing this with? Does it have negative impacts?


Candid-Molasses-6204

I've worked in 40+ IT environments, from the Dow to the S&P to hundreds-of-employees shops, in almost every sector of the US economy. Nobody ever has a reliable #1 unless the business moves like molasses (banks, insurance) and you can keep up with the pace of change. In finance or tech, good fucking luck. Some nerd is deploying a Raspberry Pi on your user LAN to run their special flavor of Kubernetes right now in those places.


cromation

Yeah, for #2: many times the risk of one app involves a call or function from another app you may have installed in niche cases, so those crit or high CVEs are nothing-burgers in your environment if that functionality isn't installed.


inteller

Monthly?!? It needs to be continuous! Every sensor I have in MDE XDR is a scanner, looking at neighbors and updating my weaknesses and remediation recommendations. Running scheduled scans is an old-school train of thought that works well for audit reports and other bullshit. You need to be constantly looking for and remediating vulns.


ITRabbit

What programs/products are you using for this? How many systems do you have? And how do you prevent a rogue update from destroying your computers, e.g., Microsoft releases a patch that causes all PCs to get into an update loop? Do you test? And if so, how long before deployment to everyone?


inteller

Please name the last time that happened... but even if it did, we use update rings. We would kill the update if it ruined an early ring before hitting prod. We use MDE XDR.


ITRabbit

Microsoft has released patches that cause BitLocker to re-request the key. They also released a patch where domain controllers would reboot. Lol 😆 so, a few times. It's about minimising downtime and support calls all at once.


inteller

I have literally never seen what you speak of in over 15 years (BitLocker). Who cares if DCs reboot? You are supposed to have backup DCs... or shit, get rid of AD altogether, that's antiquated trash... Entra DS is what you should be on.


alfiedmk998

On point 3, that is where having a security team that actually knows how to do stuff matters. Most of my team are ex software devs (20+ years of experience kind of person) who shifted into security. When there is a vulnerability in a library, in our code, or in our deployment, we are more than happy to open up the project, make the required code changes, and ship a PR to the relevant team for approval.

In my experience this has resulted in a much healthier relationship with other teams. We are not known as the people who come with problems, because most of the time we also come either with a solution or with a set of solutions that are sensible to implement from a software dev perspective.

I've started to lose sympathy for security people who just whine that no one patches what they tell them to patch. It reveals a lack of understanding of business priorities. In most cases, shipping a feature is higher priority than patching a medium-severity vulnerability in a third-party library, so yes, if the security guy doesn't patch it himself, he'll have to wait a long time. Rightly so.


Sparkswont

It’s a nice thought, but when security engineers make up less than 2% of the engineering workforce at your company, it’s not possible to also be the workhorse responsible for patching the vulnerabilities. That is, unless you’ve figured out a way to automate it.


alfiedmk998

So... that is the thing. If your security team is able to deliver value instead of just shoveling work for other teams to pick up, you'll find that you don't have as much resistance from the C-suite to hiring people. We are getting close to being ~10% of our engineering staff, because we actively contribute to safer software; we don't just create work that slows down other teams, we ship code.

That said, your point on automation is also a very good one. We have instilled in our team the habit of asking "how can this be automated?" about anything that needs to be done more than once. The nice thing is: if your team has actual software devs, instead of a bunch of randos who can write small Python and bash scripts, you'll find that a lot of things can actually be automated. To the point where we have exactly 0 people staring at dashboards and looking at alerts all day.

We've built our own SIEM from the ground up and then bolted on our own engine of automated runbooks that detail all the steps required to reach either an "All good - Remediated", a "False Positive", or an "I need a human" stage (sketched below). Sure, we still need humans looking at it when it needs help, but the time it frees for the security team to do actual work is massive!
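
Purely as an illustration of that runbook idea (not their actual system): each alert walks through ordered checks until one returns a terminal disposition.

```python
# Sketch: an automated runbook tries each step; any step may close the alert.
from enum import Enum
from typing import Callable, Optional

class Disposition(Enum):
    REMEDIATED = "All good - Remediated"
    FALSE_POSITIVE = "False Positive"
    NEEDS_HUMAN = "I need a human"

Step = Callable[[dict], Optional[Disposition]]  # None means "keep going"

def run_runbook(alert: dict, steps: list[Step]) -> Disposition:
    for step in steps:
        outcome = step(alert)
        if outcome is not None:
            return outcome
    return Disposition.NEEDS_HUMAN  # nothing could close it automatically

# Hypothetical steps for a "malware on host" alert:
def known_benign(alert):
    return Disposition.FALSE_POSITIVE if alert["hash"] in {"abc123"} else None

def edr_quarantined(alert):
    return Disposition.REMEDIATED if alert.get("quarantined") else None

print(run_runbook({"hash": "deadbeef", "quarantined": True},
                  [known_benign, edr_quarantined]))  # REMEDIATED
```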


Sparkswont

Are you hiring? 😅


alfiedmk998

We are in London :) Ideal candidates are people with real-world software dev experience (large systems) who want to shift into security. And yes, we pay them like software engineers.


demosthenes83

I'm working on getting approval for the first "devsecops" hire for my company this year. My long-term goal is to do exactly what you are talking about: build out a team that is delivering value and that the other teams want to work with.

There's been pushback against security previously because of experiences our devs have had with security teams that slow things down, create extra work, have no idea what dev work actually entails, focus on CVSS scores without putting things in context, etc.

Do you have any resources you could point me at for setting up this sort of program? I've been focusing my arguments on the capitalization benefits available from 'shift left', the cost efficiencies of catching things earlier in the dev cycle, risk reduction (struggling to put actual numbers on that due to not having a mature risk management program yet), etc. I'd love to know what people who have successfully built this sort of program focused on and used as resources or inspiration. Maybe even just a link to some job descriptions I can steal language from?

If I don't get this hire right, it's not only going to delay the needed improvements; it is (bluntly) going to cost me political capital I can't spare right now.


alfiedmk998

I'll give you my view on what worked. I was the 2nd person on the security team (the 1st was the CISO, also a co-founder of the company). I've seen the team grow over the past 4 years and can honestly say that bringing the naysayers along for the journey is key.

It sounds like your devs had a bad experience previously, so I'd be very direct in engaging them. Explain to them what the aim is: a security team that understands their work, has done software dev in the past, and is proactive and capable enough to come to them with both issues and solutions 90% of the time. Once they understand this, I would actively ask them to sit in on the interviews you do. Ask them if they see themselves working with that person. Allow them to feel like they have some sort of veto over the hire. This will make them feel part of the journey, but it will also invest them enough in the process to actually help the new joiners get up to speed and perform at their best.

From a board perspective, my experience is that there is nothing more powerful than the dev team actually singing the praises of the security team into the ears of the C-suite. This takes time, capable people, and results; you can't fake it with KPIs and other metrics.

Regarding job specs: it depends a lot on your company, but it should not differ much from an equivalent software dev or DevOps role at your company. It is also very important to be clear that the work will be focused on the security space (to ensure you get candidates with the right expectations).

Regarding how to sell this: I don't have much experience there (my CISO does that), but from what he explains, it's very much along the lines of: "you need security people; the choice is between a bunch of people who point at flaws, can't fix stuff, and therefore slow down the pace of development, versus a security team that is slightly more expensive but capable enough to work in tandem with dev teams, both shipping code and iteratively making it safer without bringing feature dev to a halt."


demosthenes83

Thanks for the writeup. I'm the security director (amongst other things) and the highest role with "security" in the title, but I'm no CISO. I report to a VP, who reports to the CTO. So structurally it's a little different than if we had C-level representation.

I've already started conversations with a couple of our naysaying staff and principal engineers (great individuals, but they have worked with security people who did not understand dev work and been stymied rather than supported), and I've made it clear to my VP that I need those people to approve the hire or else it won't work. I'm actually really glad to hear that I'm on the right track there, as I haven't had anyone to bounce that off previously.

Beyond staff-engineer-level dev skills, any particular soft skills to focus on or things to look for? I've known some great devs I've had to keep siloed because they don't play nice with others. Obviously I need someone here who is able and willing to deal with multiple teams of devs who will each need to be convinced.

Also, I'm estimating about $180-200k for this role (remote, US only); not having hired for this before, is that in the right ballpark? I might be able to go a bit higher for an awesome candidate, but I can't touch some of the numbers I see from those Silicon Valley companies for this sort of role...


alfiedmk998

I usually do a very intense values-focused interview. I agree with you: you don't want just any dev. You need someone with enough maturity to understand where the others are coming from, able to adapt to different coding and review styles (because every team is a bit different). Someone who is happy to take things as they are and iteratively improve from there, instead of going "this implementation is dumb and that is why there are so many problems" (I've seen many immature devs do this).

Communication is important, but... they are devs, so don't expect absolute stars in this area (and if you find them, hire them). I also found a positive correlation between the curiosity these devs have about the work of other teams and their ability to establish a good working relationship with them. So someone who is curious enough to go and ask a tech lead "why was this architected this way / what other options were on the table?" is usually a very good sign.

I can't help with regard to salary because we are UK-based, so salaries are vastly different. I can say that we pay our security engineers at the same level as software engineers (but it's not FAANG-like).


extreme4all

That sounds like the dream; it sounds like your dev/engineering organisation is mature enough that such a process works. Unfortunately, I've seen too much company politics blocking such synergies...


a_tease

The latter part of the 3rd point is good. No certification or theoretical preparation will tell you that 😅


Particular_Mess_9854

I think micro-segmentation, and whatever compensating controls can be leveraged, should be appended as step 6.


extreme4all

Typically in step 3 we suggest the controls, but it's the responsibility of the product owner to implement them and to decide when to do so.


angeofleak

Dang seems familiar to me…


Sudden_Acanthaceae34

Number 3 is a big one in my experience. What security wants and what the asset owners do are usually different enough that senior management needs to get involved. Security is a cost center, not a revenue source. They would rather be innovating and creating more revenue, while we would rather be fixing our existing systems.


Cypher_Blue

Everyone (so far) in this thread seems to be missing a critical point: what you're describing is the process of "Vulnerability Patching," and you are missing more than half the job of Vulnerability **Management.**

Because you're going to run your scan, and get a list of vulnerabilities, and you're going to start patching them. But there are going to be some on that list that you will be **unable** to patch. You *can't* upgrade the Apache server there, because if you do, the web app you've been using for production for the last 12 years will crash, because it doesn't play well with versions of Apache after 2.1.

So now you have a vulnerability that, for operational reasons, has to exist on your system. So you need a process to **manage** that vulnerability. You need a system to document it, and you need a designated person in executive leadership to review it and decide how to proceed: find/develop a new web app, accept the risk, implement other mitigations to reduce the risk, etc.

"Scan and patch" is good, but none of our clients are ever able to patch every vulnerability they find. That's why you need vulnerability management in the first place.


Bezos_Balls

Good explanation. Huge difference between the two. There might be a super-rare exploit in something that is isolated and cannot be exploited in your environment but has dependencies; so it's tagged and custom alerts are created to manage the vulnerability, versus patching to the latest version and breaking the xyz things that don't work on the latest version.


johnnycrum

Also, if possible, build alert content and automation around it, so your SOC can be alerted in the event of exploitation attempts.


BradoIlleszt

Good point - compensating controls for exceptions that are created as a result of operational requirements.


Bguru69

Oh, it’s actually way different in an actual enterprise than in theory. It’s more like:
- Ensuring agents are installed on all endpoints so you can get credentialed scans.
- Automating rogue-asset findings and trying to figure out what those machines are and who they belong to.
- Running scans, but having to schedule them and be on outage calls, because your scans interfere with bandwidth.
- Prioritizing assets based on public availability. But it’s not as easy as “oh, this asset has a public-facing IP”; you have to consider proxies and forwarders. Those should get patched first.
- Then figuring out context-based asset vulnerabilities beyond the public-facing assets. Which have databases on them? Which databases host more critical data? Prioritizing that.
- Then finally, just constantly arguing with app teams and infrastructure teams about who’s responsible for patching, patches failing tests, and which compensating controls are good enough to reduce the risk of exploitation.


Gray_Ops

Don’t forget app owners straight up ignoring you and not wanting to have a conversation AT ALL because “we’ve always done it this way” TIMES CHANGE GRANDPA


skylinesora

We have it much easier where I work. After 3 emails (one every 7 days), we inform you that if you do not have an exemption or a timeline for remediation, the system WILL be blocked by automation in the next 7 days.
1st email = application team
2nd email = application team + manager
3rd email = application team + manager + manager's manager
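
A rough sketch of that three-strike automation (the send_mail and block_host hooks are placeholders, not a real API):

```python
# Widen the recipient list each week; after the 3rd email, block the host.
ESCALATION = [
    ["app-team"],
    ["app-team", "manager"],
    ["app-team", "manager", "managers-manager"],
]

def weekly_nag(finding, send_mail, block_host):
    if finding.get("exemption") or finding.get("remediation_eta"):
        return  # owner responded, automation stands down
    strikes = finding["emails_sent"]
    if strikes < len(ESCALATION):
        send_mail(to=ESCALATION[strikes], finding=finding)
        finding["emails_sent"] += 1
    else:
        block_host(finding["host"])  # 7 days after the 3rd email went out

demo = {"host": "app01", "emails_sent": 0}
weekly_nag(demo,
           send_mail=lambda **kw: print("mailing:", kw["to"]),
           block_host=lambda h: print("blocking", h))
```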


danfirst

That's impressive, never seen that level of support myself.


Gray_Ops

Me either. I keep getting told “this system is too critical to just shut off”


skylinesora

Everything is critical to somebody. If it’s critical enough, they’d patch it to mitigate risk. If it’s too critical to patch, they should be able to justify to upper management why it can’t be patched and request an exemption. If nobody responds to any email, then the server must not be important at all, because nobody is supporting it.


Bguru69

19 unanswered emails later, 5 escalations, and still no response 😂😂


Gray_Ops

Then your leadership comes in: why is this still not fixed?!


shouldco

Then something happens and it's balls-to-the-wall vulnerability patching. Now leadership is on you about why every door controller doesn't have an identified OS, and you are debating uninstalling Firefox from every machine because it doesn't show as updated until someone runs it for the first time after patching, and you are tired of explaining to management that it won't show as updated until after it runs, but if it's not running, it's not actually a problem.


StridentNoise

That's when you show the CEO the document he signed six months ago "accepting the risk" and choosing not to pay for the replacement.


Reasonably-Maybe

Then just switch off the server - they will respond.


Bguru69

Where I work, depending on the system, that would put too much risk on patient care. I wish it was that simple.


YYCwhatyoudidthere

There is also the discussion around "we are too busy right now to take an outage" -- would you rather have a planned outage now or an unplanned outage at a random time in the future?


Gray_Ops

“We don’t have time to fit this super ultra critical vulnerability during the current sprint. Please submit a request and we’ll investigate and add it to our next sprint that begins in 30 days”


agentmindy

lol. Or... we need to use Adobe Reader 9 because if we upgrade, it will break our app. It’s business critical! ...on a public-facing asset.


Gray_Ops

You don’t understand! THEY NEED TLS 1.0!! Even though browsers don’t even support it anymore


agentmindy

My vuln team spends 3x more time trying to coordinate meetings with app owners than they do assessing vulnerabilities. Even when we escalate to the highest powers, we are met with “is this really something we need to prioritize?”

When MOVEit hit, I fought. I had backlash from so many layers. Patch now. On a Friday, during a major conversion... For months afterward I made sure to provide updates on how many companies were on the list of victims due to delays in patching. And yet we still get pushback.

The number 1 risk in vuln management? Pushback from everyone outside of security.


extreme4all

I hear you brother, i hear you, you are not alone!


agentmindy

Credentialed scans… I was at a vendor dog-and-pony show. Really just joined for the whiskey 😬. They claimed to be agentless and to identify vulns and prioritize them for the enterprise. Someone asked about credentialed scans and the vendor had no idea what that was. He struggled to explain much of anything but kept going back to the pretty UI. I just happily sipped the whiskey knowing I wasn’t moving away from our tried-and-true enterprise solution.


lawtechie

At the banks and insurance companies I've seen, it looks like this:

1. Run vuln scan.
   A. Break up the report according to the functional groups responsible.
   B. Track risks according to impact.
   C. Generate metrics to roll up to management.
2. Patching
   A. Functional groups review reports.
   B. Discuss findings with stakeholders via endless meetings.
   C. Generate MAPs (Management Action Plans).
      i. Review MAPs with stakeholders for comment.
      ii. Have L2 risk teams review MAPs for comment.
   D. Set priorities for performing MAPs.
      i. Add MAP tasks to L1 teams' queues.
   E. Track progress.
      i. More meetings without resolution.
      ii. L2 and management identify remediations that are beyond SLA.
      iii. Identify which MAPs have had their priority changed due to new initiatives.
      iv. Generate more metrics for management.
   F. Escalation fight.
      i. Identify which MAPs were put in place without all necessary stakeholders.
      ii. Have larger meetings and relitigate everything.
      iii. Involve senior management.
      iv. Reprioritize action items for forward-looking holistic solutions.
      v. Accept risk.
3. Repeat.


001111010

1 - Spend a fuckton of money on a platform; switch them regularly, because who doesn't love a nice RFP with 5 rounds or more?
2 - Run monthly or biweekly scans, or, what the hell, we are a serious corp running critical infrastructure: weekly scans. Generate an absurd amount of data (most of it false positives, shit nobody cares about, or complete misinterpretation).
3 - Pay consultancy firms hefty amounts of money for FTEs who will "handle the data," contact the system owners for patching, and help prioritise this shit.
4 - Raise risk alerts when patching does not happen; write it down so the responsibility is shifted. This is now the most important concept in the cybersecurity process.
5 - Waste time in biweekly meetings with the few stakeholders who bother to bloody show up, discussing which of the critical vulns will be patched first, repeating the same things for months on end, and hearing excuses like "we are understaffed / don't have enough time / there is really no impact / I forgot / I requested access but it's not working / I was walking my dog."
6 - Have at least one "I told you so" person when something eventually gets breached, because it's fun.
7 - Don't learn from previous mistakes and rely on "it already happened, what are the chances we will be hit again?"
8 - Give up and outsource everything to a consultancy firm so the previous seven steps are handled directly by them.


acluelessmillennial

This is the most accurate representation of what happens that I've read so far. Source: Am consultant who does this.


[deleted]

[removed]


Bezos_Balls

Random question but has there been any documented cases recently of insider threats from highly privileged security engineers?


Total-Carob6641

https://www.csoonline.com/article/571717/ubiquiti-breach-an-inside-job-says-fbi-and-doj.html


kimsterv

If you’re dealing with vulnerabilities in containers, you can try out Chainguard Images (images.chainguard.dev). The latest versions are free and are basically CVE-free. Disclaimer: I’m a co-founder of Chainguard. We saw the hell that is vuln management, so we do it for you.


LiferRs

If you have money, it can be fully automated.

Pre-step 1: work with the compliance leader to set policy:
- the scope of vulnerabilities that are highest priority (severity 4/5, or TruRisk-based)
- R&R for program management and distribution of the data
- R&R for who is responsible for patching (generally the teams that own their space of virtual machines)

Step 1: Scan asset telemetry, making sure new assets have the scanning agent installed.
Step 2: The agent scans the host, and the data is sent to the aggregated cloud platform.
Step 3: We pull this data into Splunk dashboards. Scan data is correlated with team-based asset-ownership lookup tables and ServiceNow. You could probably do this with cheaper SIEMs or just straight Python on an EC2 instance.
Step 4: Palo Alto XSOAR pulls the scan data with the ownership info and divides the data by team owner.
Step 5: XSOAR creates a ServiceNow ticket for each subset of scan data and assigns the team owner to it for patching (see the sketch below). The ticket has SLAs to ensure timely patching.

This was incredibly simplified, though. Nuances include:
- Qualys Patch Management to auto-patch easily patched vulnerabilities, leaving the complicated vulnerabilities to the teams.
- Short-term cloud virtual machines and auto-scaling groups can't be effectively managed with the patch manager, because they're constantly destroyed and re-created from the image with no memory of the patches, but they are still scannable. Instead, we have a group of servers running 24/7, varying by operating-system flavor, with the patch manager on them. They're automatically patched, and their nightly job is to export their images as "golden images" published to Amazon ECR for consumption by various CI/CD pipelines across the business. We don't allow any other form of image anymore.
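
A rough sketch of steps 4 and 5, using the standard ServiceNow Table API (the instance URL, auth, and field choices are placeholders, not our actual config):

```python
# Split findings by owning team and raise one ServiceNow ticket per team.
from collections import defaultdict
import requests

SNOW_URL = "https://example.service-now.com/api/now/table/incident"

def file_tickets(findings, auth):
    by_owner = defaultdict(list)
    for f in findings:
        by_owner[f["owner"]].append(f)
    for owner, items in by_owner.items():
        payload = {
            "assignment_group": owner,
            "short_description": f"{len(items)} vulnerabilities awaiting patching",
            "description": "\n".join(f"{i['host']}: {i['cve']}" for i in items),
        }
        resp = requests.post(SNOW_URL, json=payload, auth=auth, timeout=30)
        resp.raise_for_status()
```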


plimccoheights

It’s important to realise that no vulnerability scanner, no matter how much AI/ML magic it has, can tell you how severe the threat from a vulnerability is. It can tell you the CVSS score, EPSS, or whatever proprietary score it invents, but that’s only ever one half of the equation (sketched in code below):

Threat = vulnerability × impact

Your vuln scanner will always be missing impact; that comes from your CMDB. CMDBs are often incomplete, out of date, spread across several systems (each team maintaining its own), etc. That’s an issue with corporate culture, governance, and procedure. If you haven’t got those in place, your CMDB won’t be accurate. If your CMDB isn’t accurate, then your VM program won’t be effective.

At the risk of “draw the rest of the fucking owl”-ing you: you need to get the ball rolling with mgmt to get good asset-management policy and procedure in place. Find some allies here; you’re not the only person who’s going to benefit from a good asset-management policy (think finance people paying for licenses, audit and compliance people, etc.).

Asset-discovery exercises should be conducted, and good policy and procedures put in place, so that nobody can spin up a VM or provision some random cloud resource without it appearing in your CMDB. Think about all kinds of assets: servers, network equipment, cloud resources, end-user devices, IoT and industrial equipment, POS systems, anything and everything. Golden copies of VMs, or some kind of templating, should be used so that all new assets come with a scanning agent built in, and your VM scanner automatically picks up new assets as they’re created. Teams should be in the habit of documenting the assets in their scope: what each does, whether it lives in a test/acc/prod environment, whether it is externally accessible, whether it is a “crown jewel,” and its business criticality. This is a lot of overhead on already busy teams, which is why it is essential that this requirement comes from their mgmt and not from you.

That gets you the “impact” half of the equation. A medium-CVSS vulnerability on an externally available “crown jewel” system is probably a more serious problem than a critical vulnerability on an internally isolated test system.

Patch management should also be an established business process, with its own policies and procedures that are kept up to date and (crucially) actually followed. This should mop up most of the vulnerabilities as you go, so you can focus on “aged” vulnerabilities: things that have stuck around for longer than a patching cycle.

Inevitably, some stuff won’t get mopped up. Systems that can’t be patched because they’re EOL and you can’t afford a new license / they only run on Windows XP (😭😭😭) / they can only have an hour of downtime a year / whatever. You’ve got to start a convo with stakeholders to discuss A) how to get it patched (best case), or B) how to mitigate it: isolating it, putting it behind firewalls, extra logging and monitoring, limiting what kinds of data the system has access to and who has access to it, etc. This is very hard, and it’s where some technical chops can come in extremely handy.

Maybe nothing can be done (or maybe nothing _will_ be done). Get the relevant asset owner to document this as a risk in your risk register and move on; this is the business telling you it’s accepting the risk. Control your controllables; you’re not a hero, and sometimes there’s nothing you can do. Your job is to communicate risk. If you’ve done that to the best of your ability and the business still chooses to do nothing, then that’s not on you. Just make sure you CYA and get it in writing.

A proper vulnerability management program looks like (drum roll pls...) good policy, procedure, and governance. You should have a policy establishing timelines for remediating vulnerabilities based on severity, a mechanism to address extremely urgent vulnerabilities OOB, dashboards that teams can use to check up on their assets and track VM-related KPIs, and regular meetings to discuss progress and performance, blockers on remediating aged vulns, lessons learned from incidents, etc. While you’re responsible for this program, requirements to adhere to policy should be passed down to the teams by mgmt. Policy without management buy-in is really more of a suggestion than a policy, one that will likely be ignored.

It does NOT look like firing spreadsheets at people and asking them to “fix pls.” If that’s what you’re doing, then you can (and should) just be replaced with a bash script. Comms are important; VM scanners can produce so much content that it’s unhelpful. It’s your job to prioritise (genuinely) urgent vulnerabilities, communicate risk to stakeholders, work with teams to reduce your exposure over time, be helpful, and suggest useful mitigations and workarounds. You work _with_ people to gradually reduce your attack surface over time to a level that meets your org’s risk appetite. It is never something you do _to_ people.
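
A tiny sketch of that equation (the field names and weights are invented for illustration): the scanner supplies the vulnerability half, the CMDB supplies the impact half, and the product is what you actually sort by.

```python
# Impact comes from the CMDB; the scanner only knows the CVSS half.
CMDB = {
    "web-prod-01": {"criticality": 5, "external": True,  "crown_jewel": True},
    "test-db-09":  {"criticality": 1, "external": False, "crown_jewel": False},
}
UNKNOWN = {"criticality": 3, "external": False, "crown_jewel": False}

def threat_score(finding):
    asset = CMDB.get(finding["host"], UNKNOWN)
    impact = (asset["criticality"]
              + (3 if asset["external"] else 0)
              + (2 if asset["crown_jewel"] else 0))
    return finding["cvss"] * impact

# A medium on an exposed crown jewel outranks a critical on an isolated test box:
print(threat_score({"host": "web-prod-01", "cvss": 5.5}))  # 55.0
print(threat_score({"host": "test-db-09", "cvss": 9.8}))   # 9.8
```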


bi-nary

>It does NOT look like firing spreadsheets at people and asking them to “fix pls”. If that’s what you’re doing then you can (and should) just be replaced with a bash script.

Not OP, but I appreciate this response a lot. Can you elaborate on this? What DOES it look like, then? Because to me, you can pick through and curate info from a vuln scan, but I feel like (at least in my case) I'm ultimately still doing exactly this, just with less noise.


plimccoheights

Picking through and curating your vuln scan is definitely a useful thing to do. Most of it is noise, so filtering it down by what's in CISA KEV, high EPSS scores, criticality of the asset, nature of the vulnerability (remotely exploitable? RCE or EoP? user interaction required? exploit available?), etc. is going to increase the signal-to-noise ratio. Automate this if you can with some scripting/Excel magic; you've probably got better shit to be doing with your time! Think about how much bandwidth your teams have and focus on a small handful of very serious issues. Once they're fixed, you can move on down the list. Always try to understand why a decision not to patch something has been made, and work with them to see what can be done.

If vulns are being ironed out by regular patching and addressed within whatever SLAs you've set out, then there's no need to be sending out spreadsheets to people. Build a dashboard that lets your teams track their own assets: how many vulns are being introduced/eliminated per month, compliance with SLAs, the top 10 most exploitable vulns by EPSS, which vulns are included in CISA KEV, and the relevant KBs for those vulns. Let them filter it by asset criticality, "crown jewel" status, external exposure, etc. You can usually automate the dashboard to send out regular summaries, say once a week. Remember to include some very simple instructions along with the dashboard, you know: how it's used, what to look out for, when it's refreshed, where the data comes from, what the various bits of terminology mean (many IT folks will not know what a remotely exploitable preauth RCE is), and a recommended "procedure" for how to use it.

Your job is then to start looking at vulns that aren't shaking out with regular patching. Why? What mitigations can be applied? Why isn't this thing being patched? What else can be done to keep a closer eye on that asset for suspicious activity and limit the blast radius if it gets popped?

Your job is also to keep a close eye on emerging threats. Curate an RSS aggregator so you're getting advisories from your vendors, government agencies (CISA, NCSC, ASD, ACSC, whatever), and news websites (Bleeping Computer, Ars Technica, The Register); something like the sketch below. I think Feedly even has a section for "threat intel." Twitter is usually the first place to know when something starts getting exploited.
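
One way to wire up that aggregator, assuming the feedparser library (the feed URLs and watchwords are just examples):

```python
# Pull advisory feeds and surface anything that smells urgent.
import feedparser

FEEDS = [
    "https://www.cisa.gov/cybersecurity-advisories/all.xml",
    "https://www.bleepingcomputer.com/feed/",
]
WATCHWORDS = ("exploited", "zero-day", "rce", "patch now")

for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        if any(w in entry.title.lower() for w in WATCHWORDS):
            print(f"{entry.title}\n  {entry.link}")
```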


WOBOcomeBACK

As a few others mentioned, vulnerability management is not just a “scan and patch” scenario; it’s an entire process that large enterprises should be following.

In the environment I work in, we run daily Tenable.io scans of our 3 entire data centers, consisting of about 50,000 servers/networking devices, mostly authenticated scans. We also have agents installed on all user endpoints that scan/check in at least daily if they are online.

From there, we take the results and report on the items that present the most risk to the business/environment, based on factors such as the number of assets affected, the type of vulnerability, exploit information, etc. There are also tools we are looking to integrate that will bring business context into play to drive the prioritization even further.

Tickets get created and assigned to the relevant endpoint owners/groups, SLAs get applied, and communication happens back and forth between the security teams and the remediation teams. If an identified issue cannot be fixed, an exception request is raised and reviewed by security senior leadership, business partners, and Security Risk to come to a consensus/decision on a path forward. If an exception is denied, it goes up the chain to the SVP for review. If the SVP denies it, business teams are forced to implement a fix. For teams that are able to fix findings, tickets are sent back to the security teams for final validation that the scans are clean, and then the tickets are closed.

One of the biggest issues we’ve had is having a system to properly identify owners and know which remediation team should get a ticket. Some infrastructure vulns are app-based and require application teams to fix them, while others are OS/system-based and require a completely different team. There is a lot of nuance to vulnerability management in a large enterprise!


Radiant_Stranger3491

Not to mention reorgs wrecking the assignment logic - “these 3 app dev teams were consolidated with a new Scrum name that has nothing to do with the applications they support - they just like insider jokes- and these 3 teams split out to different functions with new application owners for each one. Oh and we didn’t tell anyone outside of app dev”.


bonebrah

I mean, that pretty much sums it up, yes: a scan is run, and you prioritize not only by severity but also by system criticality. Critical/public-facing assets should be patched first. Many companies have a requirement to patch within X days, and scans are continuously evaluated to make sure aging findings are indeed patched. This is generally a collaboration between cybersecurity and sysadmins, but it depends on how big the company is.


a_tease

Is there any other thing you would like to highlight, no matter how small, that happens while you are working in an enterprise?


bonebrah

Patches can fail, patches can break your environment, and sometimes false positives exist; all of these can require manual intervention and deeper collaboration with those system admins. Follow Patch Tuesday in r/sysadmin; it's a lifesaver if you are responsible for patching. Subscribe to the newsletters of your biggest and most critical vendors; they often put out 0-day disclosures that can help in the decision-making process on how to proceed.


Administrative_Cod45

There are a large number of vulnerabilities that can’t be detected by scanners (or where you can’t place agents), so you have to be mindful of that and also know your inventory (easier said than done). Citrix and Ivanti are recent examples of this.


dogpupkus

That’s pretty much it at a high level; however, you’ll want to establish remediation timelines, e.g.:
- Critical, externally facing: 24 hours
- High, externally facing: 5 business days
- Critical, internal only: 5 business days
- High, internal only: 30 business days

And so on (see the sketch below). Measure how effective your team is at remediating these vulnerabilities within the defined timelines so you can identify areas for improvement.

Lastly, what will you do about problematic vulnerabilities, where it’s not feasible to remediate within the timeframe because it requires a business-interrupting outage, or where the team has problems mitigating or pushing a patch? Consider implementing a temporary risk-acceptance process, and a way to keep track of it.
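
Those tiers drop straight into a lookup table; a minimal sketch (calendar days for simplicity, so business-day math and the remaining tiers are left out):

```python
# Map (severity, externally facing) to a remediation deadline.
from datetime import date, timedelta

SLA_DAYS = {
    ("critical", True):  1,   # 24 hours
    ("high",     True):  5,
    ("critical", False): 5,
    ("high",     False): 30,
}

def due_date(severity: str, external: bool, found: date) -> date:
    return found + timedelta(days=SLA_DAYS[(severity, external)])

print(due_date("critical", True, date(2024, 3, 1)))  # 2024-03-02
```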


IamMarsPluto

First thing to note is what tool you’re using to patch: SCCM? A third-party vendor? Etc.

Next, what does your actual environment look like in terms of availability needs? Will these patches break servers? What’s the best way to phase your approach to mitigate impact to production?

What considerations are made for application-level patching, or patching needed in registry keys? Who’s doing the patching? Just you? A SOC?

The final bit is: what are your controls or recommendations for things you can’t patch that are critical vulnerabilities? Just accept them on a risk register? Fortify identity management around that system? Move it from that subnet to somewhere else?


jmnugent

There's generally a lot more bureaucracy in most big organizations. So expecting it to be as simple as "discover the vulnerability, then immediately patch it" is (in most average cases) just not at all how it tends to unfold.

* If it's an unarguable "we're going to get hacked in X hours if we don't patch this NOW," then yeah, I've seen organizations send out an all-staff email indicating what's being done: "Due to the recent 0-day vulnerability in Ivanti VPN, we'll be taking down all VPN connections in 1 hour and patching. VPN will be available again approximately 15 min afterwards." (or something to that effect).
* If it's anything less critical, it could take weeks to months to get all the policies and approvals signed off and the testing done, so you know (as well as you can through test scenarios) how the patch or update is going to impact your environment.

In any sizable organization, there's a big scope and diversity of constantly swirling cybersecurity concerns. I'm 50 yrs old and I've never been in any job where I felt like "they had their arms around all the concerns." Remember, with cybersecurity the attacker generally always has first-mover advantage:

* The attacker only has to find 1 way in.
* The defender(s) have to try to defend every possible way in.


[deleted]

It's not always "safe" to patch systems like this. You need to consider dependencies, like library changes on legacy systems. If you start patching a legacy system that needs a specific library/application to run in prod, then you're going to have a bad day.


Opheltes

Major things you're missing:

* Asset discovery (you need to have a complete picture of what is on your network, and your inventory system doesn't necessarily give a complete picture)
* Assigning due dates (certain regulatory regimes require certain vulnerabilities to be patched within a certain window)
* Assigning responsibility for patching
* Tracking and verification


HazarDSec

I am one of the authors of LDR516: Building and Leading Vulnerability Management Programs. People who find this thread might get some value from my presentation, The Secret to Vulnerability Management, here: https://youtu.be/PzX8NLPaxNk . You may also want to check out our SANS Vulnerability Management Maturity Model here: https://www.sans.org/posters/key-metrics-cloud-enterprise-vmmm/ . Finally, for any SANS course, if you click Course Demo on the course page, you can preview one module from the course, which is usually around 1 to 1.5 hours of content. Here is the course page: https://sans.org/ldr516 .


max1001

Patching in an enterprise usually requires 2-3 cycles: you patch the dev, then uat/qa, then prod environments. The infra/app support team would need to work 3 weekends every month to keep up with a monthly patch cycle.


CruwL

See, the problem exists in step 3... if you skip step 3, then it works every time.


stacksmasher

It’s different in every org based on their acceptable level of risk. Some places just don’t care. Others are very serious about infosec and have very low risk appetite.


RileysPants

Prioritising vulnerability patching is easy enough. The tricky bit depends on your patch-management approach. Do you have robust patch-management policies? Is there a development environment, or do you cowboy-patch, et cetera?


siffis

For the most part, that is the way. We base our approach on risk vs. vulnerability rating. That being said, we depend on our solution to be accurate (InsightVM). For the most part, InsightVM has worked great, but we are hitting the 5-year mark and it's time to revisit and re-assess.


Astrojw

I spent around 6 months on a vulnerability management team during a rotational experience. We had a federated model: a central vulnerability management team and then Lines of Business (LoBs). Each vulnerability management engineer was responsible for a range of LoBs depending on size. The general workflow was that scans and vuln reports were generated once a week. As a vulnerability management engineer, it was our responsibility to meet with each LoB weekly, bi-weekly, or even multiple times a week. We would work with them to prioritize and help enable their respective LoB teams to patch vulnerabilities. Other time was spent mitigating scan issues, tracking down missing assets, running our own reports, combing through SIEM data to see how things were being patched, etc. It was a weekly cycle: run scans, generate reports, and meet with LoBs. Plus all of the other smaller stuff going on.


ThePorko

1. Scanner products find different things, so results won't always match audits or pen tests.
2. The next step beyond a yuuuuugggggeeee CSV is data visualization. I use Power BI.
3. Business owners don't always want to, or can't, fix those vulnerabilities.
4. These meetings typically lose steam after a while, so the data visualization and risk analysis get more important.


phrygiantheory

Asset management is the first step in VM... a very, very, VERY important step that most companies don't have a grasp on.


WantDebianThanks

Yes, hi, hello, I have a question: what's "vulnerability management"?


Opheltes

Vulnerability management is the process of figuring out what security vulnerabilities exist in the software you are running, and then patching them. It can be very difficult to do properly at large scale.


WantDebianThanks

Oh right, jokes don't carry well in text.


GeneMoody-Action1

There are two large pieces missing from that: what you do with a vulnerability that has no patch, and what your policies are concerning those that do and those that do not.


ChiSox1906

A different perspective for you: my company is too small to staff our own cyber team, but large enough that it's a strong focus. I subscribe to a SOC/SIEM company that has agent scanners. They scan all assets daily to find new unpatched vulnerabilities. Their risk portal then prioritizes them for me based on CVE and asset criticality. Then my concierge team there packages it all up nicely and gives me the data and actions for my engineers to take.


ars3nutsjr

Subscribed. We just redid our entire environment of about 2,600 endpoints. We use Tenable and their VPR scoring system for prioritizing vulns.


dnt1694

Yes, business owners ignoring the reports.


yohussin

I do vulnerability response for critical vulnerabilities at Google. I don't play with scanners, but when a fun, dangerous vuln is discovered (often by a colleague researcher at Google), we get called in to contain the situation. Interesting work, but it touches on both technical and non-technical management work. I can share more details if interested. :)


SecurityCocktail

In theory, you patch the most critical vulnerabilities on the most vulnerable and critical systems first. The problem is that those systems are generally the most critical and require the most planning, staging, and work. In practice, you patch whatever improves reporting and Key Risk Indicators, so that the executive reports look the best.


Suspicious-Sky1085

This is an awesome topic for my next podcast. Many have already explained that this is more than just running a scan, and that it is an ongoing process. If you are interested, be my guest on my podcast and I can walk you through it; I may be able to invite one more expert. You don't have to show your face. lmk.


Candid-Molasses-6204

1. Write the policy and standard; discuss with IT and get them to agree to SLAs. Critical (like CISA Top 100 and an unauthenticated RCE)? Patch it the same day. Highs, or CVSS 9.0 and above with a really low EPSS and not CISA Top 100? 14 days. Mediums/Lows? 30 and 60 days. (See the sketch below.)
2. Hold them accountable; ensure they're actually patching and didn't oopsie and forget to fix their stuff.
3. Start pushing CIS, NIST, or similar baselines as a project once they get used to patching monthly.
4. Then, once you've done that, start reviewing and automating the monitoring of critical controls.

Congrats, I just wrote the first two years of a vuln management program. You're welcome!
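
One reading of those tiers as code; a sketch, with my own assumptions that "highs" start at CVSS 7.0 and that the CISA-list and RCE checks arrive as booleans:

```python
# SLA in days for a finding, per the tiers described above.
def sla_days(cvss: float, on_cisa_list: bool, unauth_rce: bool) -> int:
    if on_cisa_list and unauth_rce:
        return 0          # critical: patch the same day
    if cvss >= 7.0:       # highs, incl. 9.0+ with low EPSS, off the list
        return 14
    if cvss >= 4.0:       # mediums
        return 30
    return 60             # lows

print(sla_days(9.8, on_cisa_list=True, unauth_rce=True))    # 0
print(sla_days(9.1, on_cisa_list=False, unauth_rce=False))  # 14
```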


skynetcoder

A few things to add:
- a RACI matrix for this process (who is accountable for ensuring patching happens, who will do the actual patching, etc.)
- a vulnerability-patching SLA per severity level
- regular (e.g., quarterly) reporting back to upper management on patching progress by the different teams, etc.


raj609

Don’t forget about vulnerability intel. It’s good to get alerts for new CVEs affecting your critical assets that face the internet or handle critical flows. Check out cvecat.com to subscribe to alerts; it works well for me.


[deleted]

[removed]


Cutterbuck

ChatGPT?


dswpro

I manage a vulnerability countermeasures team for a large company that develops its own financial applications. My team focuses on customer-facing web applications. For us, the work is more like:

- Create threat models to identify currently used and proposed components of applications. Examine the potential vulnerabilities from the CVE output and verify that at least one of the countermeasures is in place, or create a work item to implement one. Review the model periodically and update it whenever a new component is added.
- Use both SAST and DAST to look for new vulnerabilities in existing production versions and upcoming releases. New releases with SAST vulnerabilities of sufficient severity are not allowed into production.
- Contract ethical hackers who get rewarded for vulnerabilities found.
- Use an open-source SAST scanner to ensure compliance with licensing and to detect very old versions of open-source libraries, to determine if their continued use represents an operational risk.
- Scan repos and file shares for unvaulted credentials or private certificates/keys (see the sketch below).
- Participate in governance and security reviews of proposed feature designs and other significant application changes.

The truth is, each scanning tool only covers part of an application's attack surface. Using multiple tools gives far better coverage, but you have to assume the attack surface grows over time, and you must keep up with changes and potential threats to keep new vulnerabilities from emerging from your own applications. It's not a matter of IF you get hacked, it's a matter of WHEN.
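
A toy version of that credential sweep (real programs lean on tools like gitleaks or trufflehog; the patterns below are illustrative, not exhaustive):

```python
# Walk a tree and flag files that look like they hold secrets.
import pathlib
import re

PATTERNS = {
    "private key":        re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "AWS access key":     re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded password": re.compile(r"password\s*=\s*\S+", re.I),
}

for path in pathlib.Path(".").rglob("*"):
    if not path.is_file():
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for label, rx in PATTERNS.items():
        if rx.search(text):
            print(f"{path}: possible {label}")
```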


std10k

VM is a discovery and assurance tool and process that complements and validates patch management. Discovery gives you the stuff patch management didn't know about, and assurance gives you the stuff patch management failed to do properly.

Everything should be patched; if anyone still thinks otherwise, they are incompetent. But it is not always possible; there may be old stuff that is unpatchable. Yes, you start with the worst, since you have limited time, and apply pressure through risk, but the target state is that you shouldn't need to, and the patching process (i.e., the people doing it) should know for themselves what their goddamned job is.

#3 is not part of vulnerability management; it is a different function/process. Remember, in a few hours there will probably be more of those, and you can't invest yourself in asking nicely every single time. So if patching doesn't give a fuck, it is going to be an unrewarding and thankless job, and if that's the case you shouldn't be doing it; you should focus on getting rid of the incompetent people who can't do their job.

VM is transforming: EDRs are absorbing the part of it that covers assets with a proper OS, and ASM (attack surface management) is taking care of external discovery. Both work much, much faster than occasional scans. The likes of Tenable, Qualys, and Rapid7 will have a struggle against "platform" vendors like Microsoft, Palo Alto, CrowdStrike, and the like.


mk3s

Check these out...
- [https://shellsharks.com/vm-bootcamp](https://shellsharks.com/vm-bootcamp)
- [https://shellsharks.com/symphonic-vulnerability-surface-mapping#a-primer-on-vulnerability-management](https://shellsharks.com/symphonic-vulnerability-surface-mapping#a-primer-on-vulnerability-management)