Kelly Sharadin says
Centralizing the monitoring and management of endpoints (hosts) can be a security architecture challenge for security administrators. One tool that can automate security baselines and enforce security policies domain-wide is Windows’ Group Policy Objects (GPOs). GPOs can be configured or updated on the enterprise’s domain controller and pushed out to endpoints across the organization. GPOs not only lower administrative overhead by centralizing and automating policy enrollment but can also be configured at a granular level. For example, a security administrator can create GPOs to prevent removable devices from connecting to the host machine, restrict what software users can install, and disable access to the Windows command prompt. However, GPOs have limitations because they are designed exclusively for Windows systems. As more organizations adopt multi-operating-system environments, security administrators will need to look at alternatives for group policy management, such as JAMF.
https://www.jamf.com/
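To make the granularity concrete, here is a rough sketch (Python is my choice here, not the chapter’s; the two registry paths are the standard policy locations these example GPOs write to, though you should verify them for your Windows build) that checks whether those policies have landed on a host:

    # Sketch: inspect registry values written by common GPO settings.
    # Windows-only; policy paths are standard but verify for your build.
    import winreg

    def read_policy(hive, path, name):
        # Return the policy value, or None if the key/value is absent.
        try:
            with winreg.OpenKey(hive, path) as key:
                value, _ = winreg.QueryValueEx(key, name)
                return value
        except FileNotFoundError:
            return None

    # "Prevent access to the command prompt" (per-user); 1 or 2 = disabled.
    cmd = read_policy(winreg.HKEY_CURRENT_USER,
                      r"Software\Policies\Microsoft\Windows\System",
                      "DisableCMD")

    # "All Removable Storage classes: Deny all access" (machine-wide); 1 = denied.
    usb = read_policy(winreg.HKEY_LOCAL_MACHINE,
                      r"Software\Policies\Microsoft\Windows\RemovableStorageDevices",
                      "Deny_All")

    print("Command prompt disabled:", cmd in (1, 2))
    print("Removable storage denied:", usb == 1)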
Michael Jordan says
Hi Kelly,
I like how you mention other group policy management software (JAMF) besides Windows Active Directory. Even though we learn that AD is the most widely used and policy-comprehensive group policy manager, it is important to be familiar with some alternatives because not all enterprises use Windows servers and clients. It is also nice to know that if you prefer to build a network on Apple hardware, there is a solution out there for group policy management.
-Mike
Patrick Jurgelewicz says
One of the most important steps in hardening a system is managing account and group permissions. Many exploits begin with an entry point from a single user account. Whether these compromises are intentional acts by an employee or the result of ignorance or lax security practices, humans tend to be the weakest link in any information system. Controlling access and permissions can therefore prevent a user, or someone with access to their account, from navigating to sensitive information or controls. The chapter also walks through how to assign permissions in Windows and how to assign groups and permissions in Unix.
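As a minimal illustration of the Unix side (my own sketch, with a hypothetical file path and example user/group names; it needs root to run):

    # Sketch: assign an owner, group, and permission bits to a file.
    import os
    import stat
    import shutil

    path = "/srv/reports/quarterly.txt"  # hypothetical example file

    # Give the file to a specific owner and group (example names; the
    # user and group must already exist on the system).
    shutil.chown(path, user="alice", group="finance")

    # Owner read/write, group read-only, others nothing (chmod 640).
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)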
Dan Xu says
Hi Patrick,
I agree with you that humans are often the weakest link in any information system, whether through ignorance or lax security practices. Because human factors cause so many security lapses, controlling access and permissions mitigates the problem at its root.
kofi bonsu says
Hello Patrick,
I agree with your analysis of host hardening. I would add that using penetration testing, vulnerability scanning, configuration management, and other security auditing tools to find flaws and prioritize fixes enables realistic, solid hardening. Conducting hardening assessments against industry standards from NIST, Microsoft, CIS, and DISA would also help safeguard systems properly.
Lauren Deinhardt says
Hi Patrick, thanks for your post. This is especially important when it comes to insider threats; access control is absolutely essential to ensure a single breach of an unprivileged account does not turn into an organization-wide attack.
zijian ou says
One of the most widely known and immediately recognized benefits of virtualization is reduced operational cost. The ability to share resources means organizations need to purchase less physical hardware. A physical server that can host multiple virtual servers, or a desktop that can run multiple instances of different desktops, means lower upfront purchase costs. But the savings are not limited to hardware. Less hardware on the network means less power consumption and lower cooling costs. Less hardware also means less maintenance and physical management, resulting in further savings.
Antonio Cozza says
Hi Zijian,
Virtualization certainly reduces costs for organizations, and much of its appeal comes, as you mention, from buying less physical hardware. It also allows for predictable operating costs and fast resource deployment. On the other hand, it introduces the risk of depending on a third-party hosting company’s reliability, and of sharing resources on hosted storage.
Dan Xu says
Hi Zijian,
I agree with you that a key benefit of virtualization is lower operating costs. Sharing resources reduces both spending and the effort the organization must expend. In addition to the lower upfront purchase costs from servers that can host multiple virtual servers, staff can concentrate monitoring where it is most important.
Dhaval Patel says
Whenever a vulnerability is noticed, patching seems to be the obvious solution. However, the cost and sheer number of patches often go underappreciated. As the chapter states, patches themselves are free, but the time and labor required to learn about new patches and apply them can be costly. Patching can also do more harm than good: at times a patch can corrupt a system, and I have personally seen this happen a few times in my role. Applying patches is critical, but I can understand why some organizations might be hesitant to rapidly apply every patch.
Antonio Cozza says
Patch management is a relatively simple concept from the outside, but many overlook that patches cannot always be implemented immediately, for a variety of reasons that depend on the environment in question. Sometimes a patch fixes one immediate issue but gives rise to others when certain dependencies exist across internal systems and applications. Sometimes temporary workarounds have to be implemented instead, because applying a certain patch would create a larger problem across multiple systems that intercommunicate to function cooperatively. Applying a patch is not always worth impacting the availability of a related system.
Olayinka Lucas says
Hello Dhaval,
Patch management is essential first and foremost for security: it fixes vulnerabilities in software and applications that are susceptible to cyber-attacks, helping your organization reduce its security risk.
When a new patch is released, attackers use software that examines the underlying vulnerability in the patched application as a point of compromise. Unfortunately, hackers do this quickly, allowing them to deploy malware that exploits the vulnerability, within hours of a patch release, against systems that have not yet been updated.
Madalyn Stiverson says
The reading went over recommendations for creating a strong password. I thought this was interesting since it differed from guidance I have seen elsewhere.
The recommendation: a password should be at least 12 characters, include at least one change of case, at least one digit, and at least one non-alphanumeric character.
You should also consider how passwords are hashed. Hackers can compare a company’s list of password hashes against a known database of common password hashes (such as the hash of the password 123456 under common hashing functions).
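Those four rules are easy to turn into a check; here is a minimal sketch (Python, my own illustration of the quoted policy, not code from the reading):

    # Sketch: check the password rules quoted above.
    import string

    def meets_policy(pw):
        return (len(pw) >= 12
                and any(c.islower() for c in pw)   # a change of case needs
                and any(c.isupper() for c in pw)   # both lower and upper
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw))  # non-alphanumeric

    print(meets_policy("correcthorse"))           # False: one case, no digit/symbol
    print(meets_policy("C0rrect-horse-battery"))  # True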
Antonio Cozza says
Rainbow tables are a utility that speeds up the computation required for hash cracking by malicious actors. As you mention, it is important to avoid common patterns likely to appear in a rainbow table, such as contiguous digit strings like 123456, since they significantly reduce the time required to crack the resulting hash. A 12-character minimum with the complexity you mention also helps mitigate simple hash cracking to a degree, depending on how random the string actually is.
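Per-user salting is the standard counter to rainbow tables, since a random salt forces the attacker to rebuild the table for every account. A minimal sketch with the Python standard library (the iteration count is illustrative, not a mandated value):

    # Sketch: salted, slow hashing blunts precomputed rainbow tables.
    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000  # illustrative work factor; tune for your hardware

    def hash_password(password):
        salt = os.urandom(16)  # unique per user, stored beside the digest
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("C0rrect-horse-battery")
    print(verify("C0rrect-horse-battery", salt, digest))  # True
    print(verify("123456", salt, digest))                 # False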
Vraj Patel says
Hello Madalyn,
That was a great post. Another thing I would add is a password change frequency requirement. This also helps ensure that users are not reusing a password from other applications or accounts.
kofi bonsu says
The most salient point I found in this chapter concerns vulnerability testing. Vulnerability testing is a mechanism for determining security risks in software systems in order to decrease the incidence of threats. It matters for the security of the organization because locating, categorizing, and reporting vulnerabilities provides a way to detect and resolve security problems before someone or something can exploit them. Even after planners and implementers put protections in place, there is still the possibility that hackers will attack the system or its software through remaining vulnerabilities.
To do vulnerability testing properly, a security administrator installs vulnerability testing software that can detect the weaknesses attackers might use against the systems. These programs run a battery of attacks against the servers and then produce reports explaining the security vulnerabilities they found. A vulnerability testing plan should also be created before testing begins, containing the detailed steps, accountability, and a thorough description of the intended attacks. Done this way, vulnerability testing decreases the opportunity for intruders to gain unauthorized access to systems and carry out their malicious intentions.
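Real scanners such as Nessus or OpenVAS go far deeper, but the “battery of probes plus a report” loop can be sketched in a few lines (Python; the host and port list are hypothetical, and you should only scan systems you are authorized to test):

    # Toy sketch of a scanner's probe-and-report loop.
    # Only scan hosts you are authorized to test.
    import socket

    HOST = "192.0.2.10"                   # placeholder address (TEST-NET)
    PORTS = [21, 22, 23, 80, 443, 3389]   # common services worth reviewing

    findings = []
    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((HOST, port)) == 0:  # 0 means connection accepted
                findings.append(port)

    # The "report": every open port should be justified or closed.
    for port in findings:
        print("Port %d open on %s; verify the service is required." % (port, HOST))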
Michael Jordan says
Hi Kofi,
I also think that vulnerability and penetration testing is a key part of the system hardening process. One can use pre-written procedures and policies to perform system hardening, but if penetration testing is not a regular part of the procedure, the vulnerabilities that go unnoticed will likely continue to go unnoticed for longer than necessary. There are almost always unknown (or at least unnoticed) vulnerabilities in any OS or its software, so penetration testing can also keep the system hardened against more novel vulnerabilities.
-Mike
Andrew Nguyen says
One of my takeaways from this chapter was the importance of backing up a host regularly. While it may seem simple and obvious, I think it is easy to forget compared to other aspects of host hardening, like restricting access, using secure configuration options, managing users and groups, and patching. Personally, I haven’t backed up my computer at all (even though I know I should). I’m sure others are the same way, or have a backup older than six months (or whatever is deemed ‘regular’). It is easy to fall into the trap of focusing on the more complex aspects of host hardening and forgetting the simple basics of backing up your host in case something goes wrong, and this point was a reminder of that.
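Even a tiny script on a schedule beats no backup at all; here is a minimal sketch (Python, with hypothetical source and destination paths) that keeps timestamped copies:

    # Sketch: timestamped folder backup (paths are hypothetical examples).
    import shutil
    from datetime import datetime
    from pathlib import Path

    SOURCE = Path.home() / "Documents"   # what to protect
    DEST = Path("/mnt/backup")           # e.g., an external drive

    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = DEST / ("documents-" + stamp)
    shutil.copytree(SOURCE, target)      # keep each run as its own snapshot
    print("Backed up", SOURCE, "to", target)

Run under cron or Task Scheduler, even something this small already gives you a ‘regular’ backup.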
Kelly Sharadin says
Hi Andrew,
I agree, there are seemingly simple cyber hygiene measures everyone can take to help bolster their security posture. The cloud helps simplify automatic backups; for example, Microsoft’s OneDrive can be set to follow a backup schedule automatically. Locally, both Windows (File History) and macOS (Time Machine) offer scheduling options for backups, though these are a bit more involved. However, in an enterprise setting, who would you make responsible for backups, IT or end users?
Kelly
Antonio Cozza says
The most significant section of the chapter for me is the one on assigning permissions; permissions are one of the most important aspects of basic host hardening and can have vast negative effects when improperly implemented. Controlling and managing file permissions helps an organization adhere to the principle of least privilege. User accounts can have privileges and permissions managed through groups and/or access control models like DAC and MAC. When adding permissions to a user account, it is important to evaluate whether the individual still requires all of the permissions they previously held. If this is not controlled, privilege creep occurs, and a bad actor could take advantage of the wider array of possible attack vectors.
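A toy sketch of a privilege-creep review (the roles, permissions, and user record are hypothetical examples of mine): compare what an account currently holds against what its current role requires, and flag the difference.

    # Toy sketch: flag permissions held beyond the current role's needs.
    # Role definitions and the user record are hypothetical examples.
    ROLE_PERMISSIONS = {
        "accountant": {"read_ledger", "write_ledger"},
        "auditor": {"read_ledger"},
    }

    user = {
        "name": "alice",
        "role": "auditor",                               # moved off accounting
        "permissions": {"read_ledger", "write_ledger"},  # old grant not revoked
    }

    creep = user["permissions"] - ROLE_PERMISSIONS[user["role"]]
    if creep:
        print(user["name"], "has excess permissions:", sorted(creep))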
Dhaval Patel says
Hi Antonio,
I agree. Poor implementation of file permissions is an easy way in for intruders, and it can allow for all three areas of the CIA triad to be breached. Implementing the concept of the principle of least privilege like you said is going to be a key factor in limiting exposure to certain documents.
Victoria Zak says
In this chapter, it was interesting to me the importance of groups and users. As Chapter 7 states, “applying measures to groups also tends to reduce errors because most groups have well-defined roles that lead to clear security requirements. Individuals, by contrast, may have multiple roles with different security requirements, making it difficult to assign proper security settings to individual accounts.”
An administrator account has total control over the computer, and anyone holding one should use it as little as possible to limit the damage from anything the account shouldn’t be doing. The system administrator can look at each group to see its members, then add or delete members from the group. Additionally, assigning permissions is one of the most serious problems associated with security.
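The chapter’s point about groups can be shown in a few lines (group and permission names are hypothetical): grant rights to the group once, and membership changes handle the rest.

    # Sketch: permissions attach to groups, not individuals.
    # Group and permission names are hypothetical examples.
    GROUP_PERMISSIONS = {
        "helpdesk": {"reset_passwords", "view_tickets"},
        "sysadmins": {"reset_passwords", "view_tickets", "manage_servers"},
    }
    memberships = {"victoria": {"helpdesk"}}

    def effective_permissions(user):
        # A user's rights are the union of their groups' rights.
        return set().union(*(GROUP_PERMISSIONS[g] for g in memberships[user]))

    print(effective_permissions("victoria"))   # helpdesk rights only

    # A role change is one membership edit, not N individual grants:
    memberships["victoria"] = {"sysadmins"}
    print(effective_permissions("victoria"))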
Dhaval Patel says
Hi Victoria,
This is a concept that hits home. I preach this idea to our customers during knowledge transfer (KT) sessions. Assigning access controls through groups rather than individual users makes the admin’s job much easier; it keeps the environment well organized and lets you make instant changes when, say, a new employee is hired or someone leaves. And of course, only individuals who need administrative rights should be using that role.
Michael Jordan says
One key point that I took away from this chapter is that there are many different aspects of host hardening that must all come together to form an optimally hardened network. This multitude of aspects includes but is not limited to: group policy objects being in alignment with written network policies, having properly configured and active antivirus / anti-malware software on all hosts, systematically pushing out software updates to all hosts, having a complete inventory of network resources, hardening enabled and/or active services on all systems, systematically auditing the network and systems, and more. Part of the last point that I included (auditing of network and systems) is penetration testing, which is crucial to refining system hardening because it may discover some outliers that are not noticed by network admins during an already long and exhausting system hardening experience. This can include endpoints and software that are not inventoried, software updates that are neglected, and more.
Dan Xu says
From this week’s reading, I learned that virtualization provides multiple benefits for host hardening. Virtualization allows system administrators to create a security baseline for each server within an organization, as well as for remote clients. In addition to being more secure, virtual environments can reduce the labor costs associated with server management, development, testing, and training. They can also benefit businesses by reducing utility bills, since unused physical servers can be shut down, and by increasing fault tolerance.
Dhaval Patel says
Hi Dan Xu,
Great points! There is a debate about VMs being more secure than on-prem hosts, but I agree VMs do have multiple benefits when it comes to hardening and at times it can be easier to secure a VM compared to on-prem. Cost is a plus on the VM side because generally, you are only paying for what you use.
zijian ou says
Good points. The benefit of using virtual machines is also to reduce hardware costs. Many organizations are underutilizing their hardware resources. Instead of investing in another server, organizations can spin up virtual servers.
Vraj Patel says
One of the key points from this week’s reading is vulnerabilities, which are security weaknesses in a system or its applications. Vendors create a patch for a vulnerability once researchers have discovered it. However, there is also the possibility of a zero-day attack, where attackers find out about a vulnerability before the vendor does and use it to gain unauthorized access to systems or applications.
When vendors discover a vulnerability, there are four types of fixes they can issue. The first is a work-around, in which a series of manual steps is taken to mitigate the risk. The second is a patch, a targeted fix for the vulnerability in the software or application. The third is a service pack, in which the vendor bundles fixes for identified vulnerabilities together with functionality improvements into a single update. The fourth is a version upgrade, which includes fixes for identified vulnerabilities while also improving the software’s functionality and security.
Patrick Jurgelewicz says
Hey Vraj, I also found these different types of fixes interesting, and I was also surprised at some of the problems that come with patching. Large amounts of patches lead to organizations falling behind on patches, labor costs to install patches can become expensive, and organizations prioritizing patches can still leave them vulnerable.
Lauren Deinhardt says
One interesting takeaway from this reading was zero-day attacks and the window between a patch’s release and its installation. In the cybersecurity world, patches are viewed as a godsend, rapidly fixing exploitable weaknesses in an information system; I would never have thought this asset could be used for malice. Hackers reverse-engineer published patches, thereby understanding the targeted vulnerability and gaining information on how to exploit it. Attackers use this intelligence to hack systems that have not yet applied the patches; knowing how long it takes companies to patch (as in the Equifax case study), a plethora of organizations can easily fall victim to this type of attack.
Kyuande Johnson says
Patch management is the process of distributing and applying updates to software. These patches are often necessary to correct errors that criminal hackers could otherwise exploit in operating systems and third-party applications. Patching also ensures your software and applications are kept up to date and run smoothly, supporting system uptime.
There are three types of patches: security patches, bug fixes, and feature updates. Security patches fix vulnerabilities in software and applications that are susceptible to cyber-attacks, helping your organization reduce its security risk. A bug fix is a temporary work-around, patch, or bypass that updates program code to correct errors or defects. Feature updates are technically new versions of the OS; in a Windows environment they are available twice a year, in the spring and fall time frames, are also known as “semi-annual” releases, and are supported for 18 months. It is good to apply patches in a timely manner, but unless there is an imminent threat, don’t rush to deploy a patch until there has been an opportunity to see what effect it has elsewhere in similar software user communities. A good rule of thumb is to apply patches 30 days after their release.
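That 30-day rule of thumb is easy to operationalize; a minimal sketch (Python, with hypothetical patch records and a fixed “today” so the example output is stable):

    # Sketch: flag patches past a 30-day soak window (records hypothetical).
    from datetime import date, timedelta

    SOAK = timedelta(days=30)   # rule-of-thumb deferral from above
    TODAY = date(2022, 3, 1)    # fixed for a reproducible example

    patches = [
        {"id": "KB-A", "released": date(2022, 1, 15), "imminent": False},
        {"id": "KB-B", "released": date(2022, 2, 20), "imminent": False},
        {"id": "KB-C", "released": date(2022, 2, 25), "imminent": True},
    ]

    for p in patches:
        # An imminent threat skips the soak period entirely.
        due = p["released"] if p["imminent"] else p["released"] + SOAK
        status = "deploy now" if TODAY >= due else "soak until " + due.isoformat()
        print(p["id"] + ":", status)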
Olayinka Lucas says
Host hardening is the process of removing unnecessary applications, ports, and services; tightly controlling any external storage devices connected to the host; disabling unneeded accounts; renaming default accounts; and changing default passwords, all of which close off what could otherwise be used as a point of entry or compromise into the system.
By removing redundant and unnecessary infrastructure (programs, accounts, functions, applications, ports, permissions, and access rights), the likelihood of cyber-attack is reduced because the attack surface shrinks: the system exposes fewer weaknesses, making it more difficult for attackers or malware to find back doors into your IT system.
Full disk encryption coupled with robust network security protocols is the best way a system can be hardened.