This chapter explains how a host functions and the vulnerabilities and attacks a host could face on a daily basis if not secured. A host in networking is any device with an IP address, so the term host includes servers, routers, firewalls, and mobile phones. If not protected, a host can cause damage and be harmful to an organization. For security purposes, a host must be checked regularly, and restrictions must be applied to avoid unplanned incidents, because neglecting one host is the same as welcoming hackers into your system.
When reading, I saw that a system administrator is the one responsible for managing a group of accounts or individual accounts on a host (the super user account). He is the one who establishes and carries out security measures on a server. Larger firms have multiple system administrators and adopt “security baselines” that reduce a system administrator’s work and ensure uniformity across hardening efforts. Patches are required to secure a host because it is good to stay on top of things, especially now that we face experienced, skilled people who hack just for the sake of hacking. It’s also good to have regular system updates because, as mentioned in the Boyle chapter, “some vulnerability finders sell their vulnerabilities to hackers, who quickly develop exploits—programs that take advantage of the vulnerability”. So having patches in your security plan is a good way to be well prepared in case there is a breach, and the consequences will not be as damaging as they would be with no patches at all.
Your point about neglecting one host underscores the importance of maintaining a complete asset inventory. As we saw in the Equifax case study, failing to have thorough documentation of assets can result in serious issues. An unknown or forgotten system is hard to patch. Good administrative controls can help address this by providing policies detailing how systems are commissioned and decommissioned. Technical controls such as vulnerability scans can highlight violations of these policies by identifying undocumented hosts on the network.
Bryan, I agree with the points above, and I’d like to quickly add that to better manage the security hardening process in an organization, a sound system administrator must also keep abreast of the latest common vulnerabilities and the corresponding patches required to close penetration gaps and secure, in a timely manner, any applications the organization uses to carry out its business functions and services. The goal, therefore, is to have a procedure for conducting risk assessments specific to areas of concern in order to detect anomalies.
In section 7.6.2 the authors review different password cracking techniques such as Brute Force and Dictionary attacks. One technique that I was not aware of is a Hybrid Dictionary attack. This attack tries simple modifications of common words contained in a dictionary file, e.g. adding numbers (password1), entering the password twice (passwordpassword), and using prefixes and suffixes (passworded & postpassword). These modifications are called mangling rules. Mangling rules allow an attacker to customize the dictionary and create derivatives of common passwords.
Password cracking tools such as John the Ripper have predefined settings that automatically apply common mangling rules. Users may think that applying conventions like these increases the complexity of their passwords, when in fact these conventions can easily be defined and applied when running cracking tools.
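To make the mangling idea concrete, here is a minimal Python sketch that expands a small word list into candidate passwords. It is not John the Ripper's actual rule engine; the rules are just the examples mentioned above.

```python
# Minimal sketch of hybrid-dictionary "mangling": expand each dictionary
# word into common derivatives an attacker might try. The rules mirror the
# examples above (appended digits, doubling, prefixes/suffixes); real
# cracking tools ship far larger rule sets.

def mangle(word):
    candidates = {word}
    candidates.update(word + str(d) for d in range(10))  # password1, password2, ...
    candidates.add(word + word)                          # passwordpassword
    candidates.add(word + "ed")                          # passworded
    candidates.add("post" + word)                        # postpassword
    candidates.add(word.capitalize())                    # Password
    return candidates

dictionary = ["password", "welcome", "dragon"]
candidates = set()
for w in dictionary:
    candidates.update(mangle(w))

print(f"{len(candidates)} candidates generated from {len(dictionary)} words")
```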
Hi Matthew,
I like your summary of the hybrid dictionary attack. I usually just add numbers to my passwords, but after going through this chapter I think I badly need to change all of them, because I use almost the same password for different accounts. I have also noticed that password cracking becomes much harder as the length of the password increases. Figure 7-26 shows that cracking a 10-character high-strength password requires trying 1.07374E+19 combinations, so length is undoubtedly a good way to protect password security.
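That figure's number lines up with simple arithmetic: 1.07374E+19 is 80^10, i.e., a 10-character password drawn from roughly 80 keyboard characters. A quick sketch of how the search space grows with length, assuming that same 80-character set:

```python
# Search space for an N-character password drawn from an 80-character
# keyboard set (the size that matches Figure 7-26's 1.07374E+19 at N = 10).
CHARSET = 80

for length in (6, 8, 10, 12):
    print(f"{length} chars: {float(CHARSET ** length):.5e} combinations")

# 10 chars -> 1.07374e+19, which is why each extra character helps so much.
```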
This chapter gives an overview of securing hosts. As the book explains, host hardening is the process of protecting a host against attacks. The protection elements reinforce each other: backups, vulnerability tests, log monitoring, encryption, managing users and groups, operating system vulnerability checks and patch installation, minimizing the number of applications and operating system services, and restricting physical access.
Patch management and implementation play a crucial part in this protection. Vulnerabilities arise on hosts and patches help us protect against them, but patches can also cause issues: installing them can reduce functionality, and it costs time and labor.
Building on your point about patch issues, I think it’s important to factor in compensating controls when developing a patch management strategy. An organization may decide not to patch brittle legacy systems and instead choose to isolate them on the network. This helps prevent issues arising from the application of a patch. The network isolation provides a compensating control to hedge against the risk of unpatched vulnerabilities. The unpatched host can be restricted from accessing the internet and from communicating with non-essential hosts on the network. This reduces the risk of attackers finding and exploiting unpatched vulnerabilities.
Hi Miray,
Patching systems before deployment is crucial to an organization’s security process, as it is always good to check that there is no misconfiguration somewhere that could later be exploited by hackers. It’s also good to stay on top of things like alerts about system updates, backups, and vulnerability tests, as you mentioned in your post. Since patches sometimes introduce issues, it’s always good to test them before installing them; that way you know what went wrong and what could be improved when installing them.
Patches also might not be worth applying in particularly isolated environments that cannot afford downtime. For example, patching semi-mobile systems such as ships might not be feasible, and these systems can remain unpatched for extended periods of time. Systems in operation are difficult to patch because of their CIA (especially availability) requirements, and they often remain unpatched and vulnerable, relying mostly on physical security or logical access control.
This chapter is about hardening the host; any device with an IP address is defined as a host. To protect a host from attack, it is necessary to harden it. The most important thing in hardening the host is to back it up regularly.
Host protection also includes:
Physical access control
Strong password
Application and Operating System Audit
Patch
Manage users and groups
Local access
Data encryption
Firewall
Log audit
System Vulnerability Test
Attackers can bypass firewalls, routers, and servers to attack hosts, which are an organization’s last line of defense against an attack. Host hardening can better protect information security.
Lin,
Your points are clear. It is essential to have completeness in the host hardening process in order to protect the organization’s information and assets, as part of defense in depth against attacks that are continuous in nature.
Host hardening is the process of hardening any device with an IP address, because such devices are prone to several forms of attack. Virtualization allows multiple operating systems, along with their associated applications and data, to run independently on a single physical machine. This explains the benefit of having a virtualization environment set up to manage multiple operating systems and have them share local system resources across multiple physical computers. One important benefit of the host hardening process in a virtualized environment is that it allows system administrators to create a single security baseline for each server (or remote client) within the organization. Cloning a hardened virtual machine minimizes the chance of incorrectly configuring a server, reduces the time needed to configure it, and eliminates the need to separately install applications, patches, or service packs.
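As a rough illustration of the cloning workflow (assuming a VirtualBox host; the golden-image VM name "hardened-base" and the clone names are hypothetical, and other hypervisors have equivalent clone commands):

```python
# Sketch: create new servers by cloning a pre-hardened "golden image" VM
# instead of hardening each one by hand. Assumes VirtualBox's VBoxManage
# CLI is installed; all VM names are hypothetical.
import subprocess

GOLDEN_IMAGE = "hardened-base"

def clone_hardened_vm(new_name):
    subprocess.run(
        ["VBoxManage", "clonevm", GOLDEN_IMAGE,
         "--name", new_name, "--register"],
        check=True,
    )

for host in ("web-01", "web-02"):
    clone_hardened_vm(host)  # each clone inherits the baseline configuration
```

Each clone starts from the same security baseline, which is exactly the consistency benefit described above.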
Oluwaseun,
There are some security concerns with virtualization and cloud computing too. Corporations have been quick to virtualize internal computing environments for increased scalability, reliability, and agility, but they have been slow to adopt external cloud services. Several concerns arise when corporations consider using any third-party service provider, and these concerns become more acute when the service provider has access to critical systems and data. Security breaches involving critical systems and data could cause irreparable harm. One of the most difficult factors to assess when considering a cloud service is trust. Trust is hard to measure, and even more difficult to build. It’s also the main roadblock that prevents companies from adopting online services.
Shubham
I agree with your point. However, this improved security posture doesn’t mean that virtualization has no security risks. The fact that many businesses employ this technology makes it a valid target for hackers and other malicious actors.
It’s safe to say that virtualization is no more (or less) of a security risk than the other parts of your information technology infrastructure discussed in this reading, and with increased adoption comes the need for awareness of the potential issues that IT administrators may face.
A key point that I took away from the chapter on host hardening was the section on the problems with patching. The number of patches that firms must decide whether to install has increased significantly over time. The report mentioned in this section stated that 22,000 new vulnerabilities were discovered in 2018 alone. Compared to 2001, when there was an average of only about 5 patches to install a day, one can see how this has become nearly impossible to manage manually. A patch management server/system is vital for a firm to organize and categorize all these patches so that they can be applied in order of criticality; otherwise it’s impossible to keep track of all the potential patches that need to be installed. Firms have many systems and software packages that require regular patching, and if the number of new vulnerabilities was 22,000 in 2018, it is safe to say that it has only increased since then.
Ryan,
There are problems with patch management too. The most common problem associated with the patch management process is a buggy patch. Occasionally, a patch will introduce problems that did not previously exist. These problems may show up in the product being patched, or they may manifest elsewhere if other software has a dependency on the software that was recently patched. Because patches can sometimes introduce problems into a system that was previously working correctly, it is important for administrators to test patches prior to deploying them on an organization-wide basis.
Hi Shubham,
That’s a great point. I think it is important to prioritize by the severity of the issues being patched. In other words, patches for issues that are not high priority can be scheduled to be installed after a delay; that way, if there are any problems with a patch, they will most likely be discovered in the first few weeks after its release. Anything addressing a high-criticality issue will still need to be patched in a timely manner.
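A toy sketch of that triage policy; the CVSS thresholds, patch IDs, and delay windows are made-up illustrations, not values from the chapter:

```python
# Sketch: schedule patches by criticality. Critical fixes go out quickly,
# while low-priority ones are deliberately delayed so that buggy-patch
# problems surface elsewhere first. All values below are illustrative.
from datetime import date, timedelta

patches = [
    {"id": "KB-001", "cvss": 9.8},
    {"id": "KB-002", "cvss": 4.3},
    {"id": "KB-003", "cvss": 7.5},
]

def install_delay(cvss):
    if cvss >= 9.0:
        return timedelta(days=2)   # emergency change window
    if cvss >= 7.0:
        return timedelta(days=14)
    return timedelta(days=30)      # let the patch "bake" in the field first

released = date.today()
for p in sorted(patches, key=lambda p: p["cvss"], reverse=True):
    print(p["id"], "install by", released + install_delay(p["cvss"]))
```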
Chapter 7 explains Host Hardening
Using multiple means of protection for a system can be considered host hardening. Normally the protection is provided in various layers, which is known as defense in depth.
The idea in system hardening is to protect the system at various layers, such as the physical level, user level, OS level, application level, host level, and other sublayers.
Section 7.7 is very interesting, and it makes a great point in explaining that vulnerability testing software is excellent; however, it is useless if the person running it does not understand the attacks or know how to read the reports it produces.
Hello Jason,
I like your initial statement of “using multiple means of protection”. This is because even in doing this, there is still no way to determine that a computer system / network is 100% secure. There are always weaknesses or vulnerabilities that can be discovered and exploited to gain access to that system and or network. You also followed up with a few other valid points, great post!
As companies rapidly move their infrastructure to the cloud, it was interesting to read about the benefits of virtualization in the host hardening process. Virtualization allows systems administrators to create a single security baseline for each server within the organization. Subsequent instances of that server can then be cloned from an existing hardened virtual machine in a few minutes instead of hours or days.
Cloning hardened virtual machines minimizes the chance of incorrectly configuring a server, reduces the time needed to configure the server, and eliminates the need to install applications, patches, or service packs. In addition to being more secure, virtual environments can also benefit businesses by reducing labor costs associated with server administration, development, testing, and training. They can also reduce utility expenses by allowing unused physical servers to be shut down, and they increase fault tolerance and availability.
Boyle and Panko Chapter 7, section 7.6, emphasized strong password protection of access to underlying systems as an effective way of host hardening. The typical guidelines when creating a new password include the following criteria (a small validation sketch follows the list):
-Be at least eight characters long
-Have at least one uppercase character
-Include at least one numeric digit
-Include at least one non-alphanumeric symbol
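Here is a minimal sketch of checking a candidate password against those four criteria. These are just the textbook's minimums; meeting them does not by itself make a password strong, as the cracking techniques discussed below make clear.

```python
# Sketch: validate a password against the four guidelines listed above.
import string

def meets_guidelines(password):
    return (
        len(password) >= 8
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_guidelines("password1"))    # False - no uppercase or symbol
print(meets_guidelines("S3cure!pass"))  # True  - meets all four minimums
```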
Beyond that, in this section, I understood the importance of enforcing the policies noted in Chapter 5, Access Control, after understanding how passwords are created, stored, and easily cracked. In class we have reviewed brute-force guessing, dictionary attacks, and hybrid dictionary attacks. The only other cracking method that was new to me was rainbow tables. It was definitely a wake-up call after realizing how easily passwords can be cracked, especially with the proper programs in place.
At my organization we have a password policy which outlines the criteria, including character length, complexity requirements, etc., that our internal applications must adhere to. For our externally hosted systems, there are instances where those systems do not comply with our password requirements due to system limitations; for example, the settings can’t be changed because doing so would also modify the password settings for the third party’s other customers. We typically document these systems, along with the attributes that don’t meet our internal criteria, to ensure management awareness. Within this document we also detail other factors, such as MFA or network requirements for accessing the system, so we can consider the risk mitigated or at least lowered.
Hello Elizabeth,
I really like what you wrote about this chapter. There are definitely pros and cons to an organization having password complexity standards on its network and systems, yet it’s still not foolproof, as you mentioned, and you elaborated on the methods that could compromise these supposedly complex passwords. The examples you gave are reasons why a lot of organizations also require two-factor authentication, such as biometrics, one-time passwords, verification codes, QR codes, hardware tokens, and other methods that all add another layer of security.
This chapter talks about host hardening and network systems. It explains that unnecessary services and applications should be removed, weak default settings should be secured, and systems should be kept patched for any servers and devices implemented in an organization, to prevent hackers from breaking into the organization’s systems. Section 7.3 talks about vulnerabilities and patches and how most vulnerability finders notify software vendors so that vendors can develop fixes for the vulnerabilities. However, any attack that comes before fixes are released is called a zero-day attack.
Hey Mohammed,
Good point about unnecessary services and apps needing to be removed. I believe organizations often let some of these sneak by unnoticed, which can be a serious risk.
The vulnerabilities and patching section discusses fixes and the problems fixes can sometimes cause after they are implemented. For example, the most dangerous period is usually immediately after the vendor releases a fix, because the fix can be reverse engineered; that could leave the environment even more vulnerable than before, since it will take time to patch the additional flaw after it is found. Oftentimes these patches can cause freezes or bugs in a system, and sometimes these bugs are irreversible because rollbacks are not possible. I’ve personally seen groups take entire images of a system before any implementation so that, if a bug is introduced, they are able to roll the system back to a functional state. Which goes back to the question: does the fix pose a greater risk than not implementing it at all?
In this chapter, the authors discussed host hardening and pointed out that the host is the last line of security defense. An entity should adopt a security baseline for the particular version of the OS the host is running. There are two main operating system families for servers: Microsoft’s Windows Server OS and the open-source UNIX operating systems, including Linux, which is also an OS for PCs. Open-source UNIX is often free and cheaper than the Microsoft operating system; however, it is hard to talk about UNIX hardening in general because there are several versions of UNIX, and these versions offer different system administration tools, including security tools. For me, there are a couple of guidelines to follow for host hardening:
1. Test patches on test machines before installing them on servers.
2. Use patch management servers to apply patches automatically.
3. Give the right permissions to the right users/groups, and pay attention to inheritance (a small permission check is sketched after this list).
4. Hash passwords when storing them (do not store passwords as plaintext).
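For item 3, here is a small sketch of the kind of permission check a hardening script might run on a UNIX-like host; the file path is only an example.

```python
# Sketch: flag sensitive files that are readable or writable by "other"
# users, which usually means permissions were set (or inherited) wrong.
import os
import stat

def world_accessible(path):
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IROTH | stat.S_IWOTH))

for path in ("/etc/shadow",):
    try:
        if world_accessible(path):
            print(f"WARNING: {path} is accessible to all users")
        else:
            print(f"OK: {path} is restricted")
    except FileNotFoundError:
        print(f"{path} not present on this host")
```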
A part of chapter 7 that I really enjoyed learning about was patches. This chapter goes into good detail about vulnerabilities and fixing methods. Patches are released constantly, and this can pose a problem for companies: they have a hard time keeping up with all the patches, which forces them to prioritize which patches should be applied based on what will be best for the company. A good way to help with this problem is to work with patch management servers, which do the job of determining which patches are most necessary and sending those recommendations out to your company.
Chapter 7 discusses the different protections included in host hardening. One of the protections discussed that I found interesting was section 7.6 on creating strong passwords and password-cracking techniques.
Passwords can’t really be secure if they are not stored securely. Passwords should be hashed and only the password hash should be stored, never the plaintext passwords. Access to the stored password hashes should be restricted to super users.
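A minimal sketch of that storage pattern using Python's standard library. PBKDF2 here stands in for whatever slow, salted password hash a real system would use, and the iteration count is illustrative:

```python
# Sketch: store only a salted, slow hash of the password, never the plaintext.
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=600_000):
    _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Access to the table holding these salts and hashes should still be restricted, as noted above.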
Password complexity and length are essential when creating a secure password. These considerations help to thwart attacks such as brute-force password guessing and dictionary attacks.
Hardening your operating system means increasing the security of your systems. One way to harden a system is to disable any services that are running but are unnecessary; the fewer services you have running on a computer, the better its security posture will be. Another example would be disabling default accounts that are included in operating systems, such as the guest account, root account, or even a mail account. More broadly, system hardening is “a collection of tools, techniques, and best practices to reduce vulnerability in technology applications, systems, infrastructure, firmware, and other areas”. The main purpose is to minimize security vulnerabilities and eliminate as many security risks as possible.
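As a rough example of "fewer services is better" in practice, here is a sketch that compares the services running on a systemd-based Linux host against an approved baseline; the baseline list is hypothetical.

```python
# Sketch: list running services on a systemd host and flag anything that
# is not on an approved baseline. The baseline below is hypothetical.
import subprocess

APPROVED = {"sshd.service", "cron.service", "rsyslog.service"}

out = subprocess.run(
    ["systemctl", "list-units", "--type=service", "--state=running",
     "--no-legend", "--plain"],
    capture_output=True, text=True, check=True,
).stdout

running = {line.split()[0] for line in out.splitlines() if line.strip()}

for service in sorted(running - APPROVED):
    print("Review or disable:", service)
```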
Hey Joshua, great points about the chapter. It would be ideal to limit unnecessary services and accounts from running on a computer. My question for you is: how often have you run into a work computer with just the bare minimum on it? I feel like companies could give workers just the bare minimum of components they need to work, so why do they always have extras that could get the computer compromised?
Hey Corey,
I am currently a Windows 10 deployment tech at Thomas Jefferson University Hospital, so your question is actually relevant to my current work. When I am swapping machines out, I have to look at SCCM to determine what software was installed on the previous machine so that I can reinstall it on the new one. Honestly, there is always something extra, such as VLC media player, Spotify, or even iTunes. I agree that a work computer should have the minimum software and privileges needed for the end user to do his or her job, nothing more and nothing less.
In chapter 7, host hardening, the chapter teaches us about vulnerabilities and patches, managing permissions, testing for vulnerabilities, and creating strong passwords. I wanted to focus on the key points of creating and storing passwords. Creating passwords is more than just using letters, numbers, uppercase letters, and symbols. From reading the chapter, I learned about password hashes, which are created when a password is passed from a user to a hashing function. The hashing function returns a fixed-size password hash, also known as a digest. The hash is then stored with the corresponding username and other account information; the password itself is not stored, only the hash is. Password stealing was also an interesting point in this chapter for me. The process of stealing a password isn’t as easy as it sounds: to steal a password, a hacker must gain access to the system, obtain admin-level permissions, and then extract a copy of the password database. Only then can the hacker crack the passwords.
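To illustrate the fixed-size digest point, here is a quick sketch with SHA-256, a general-purpose hash used here only to show the behavior; actual password storage should add a salt and a deliberately slow hash, as the chapter discusses.

```python
# Sketch: a hash function returns a fixed-size digest no matter how long
# the input is. SHA-256 always produces 32 bytes (64 hex characters).
import hashlib

for pw in ("a", "Tr0ub4dor&3", "a much, much longer passphrase than usual"):
    digest = hashlib.sha256(pw.encode()).hexdigest()
    print(f"input length {len(pw):>2} -> {len(digest)} hex chars: {digest[:16]}...")
```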
The section on managing users and groups stood out to me in Chapter 7. It’s true that assigning access to a resource at the group level makes managing access much easier for system administrators. However, there are a few things system administrators need to be aware of when managing access at the group level. First, it’s important that system administrators know which users are included in a particular group, to ensure no segregation-of-duties risks are introduced when the group is assigned to a different or new IT resource. Second, when access reviews are performed, system administrators shouldn’t just review access at the group level; rather, the users, and perhaps even nested groups, included in that group should also be reviewed to ensure access is appropriate at the individual/nested-group level as well.
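A small sketch of why that individual/nested-group review matters: expanding nested groups shows every user who actually ends up with the access. The group names and memberships below are made up.

```python
# Sketch: recursively expand nested groups so an access review sees the
# actual users behind a group assignment. Group data below is made up.
groups = {
    "finance-app-users": ["alice", "payroll-team"],
    "payroll-team": ["bob", "contractors"],
    "contractors": ["carol"],
}

def expand(name, seen=None):
    seen = seen or set()
    members = set()
    for m in groups.get(name, []):
        if m in groups:              # nested group: recurse
            if m not in seen:
                seen.add(m)
                members |= expand(m, seen)
        else:                        # individual user
            members.add(m)
    return members

print(sorted(expand("finance-app-users")))  # ['alice', 'bob', 'carol']
```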
In this week’s reading, something that stood out as pretty interesting to me was the “In the News” item under section 7.1.5, more specifically the area titled “Meltdown, Spectre, Foreshadow, & ZombieLoad”. This area describes four hardware vulnerabilities present in essentially every CPU made since 2011. Meltdown and Spectre take advantage of two CPU features, speculative execution and caching: attackers can manipulate these functions of the CPU via software to gain access to memory contents such as bank account numbers or passwords. Foreshadow and ZombieLoad are flaws in Intel chips that allow hackers to extract data as it is being processed. The whole section is pretty fascinating, and also a bit terrifying, because these vulnerabilities still exist, and who knows what other unforeseen vulnerabilities may emerge as CPUs continue to progress.
Speculative execution is an optimization technique in which a processor (CPU) performs a series of tasks before it is prompted to, in order to have the information ready if it is required at any point.