Managing users and groups, and managing permissions, were the two topics that stood out from this week’s reading. User and group management is one of the key steps to ensuring the security of a host environment. The existence of dormant user accounts and groups jeopardizes security and puts the network at risk in several ways. The most common risk is that those idle accounts are compromised and used to attack the network. The other risk is that such accounts may carry outdated security vulnerabilities, since they are not active and do not receive updated security policies. Ensuring the proper deletion and removal of inactive user accounts and groups is one way of hardening the host environment; another is ensuring that groups are created only for the specific duties and objectives they are meant to serve.
Managing permissions goes hand in hand with user and group management, since users and groups are assigned specific roles and access rights to certain zones and resources in the environment. Therefore, system administrators ought to ensure they assign each user the minimum permissions needed to carry out their daily operations. This practice reduces the risks associated with a compromised user account and, in turn, helps harden the host environment.
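To make the dormant-account risk concrete, here is a minimal sketch of how an administrator might flag idle accounts for review. The account names, dates, and 90-day threshold are assumptions for illustration; in practice the last-login data would come from a directory service or the host’s login records.

```python
from datetime import datetime, timedelta

# Hypothetical inventory of accounts and their last login dates.
accounts = {
    "jsmith": datetime(2021, 3, 1),
    "svc_legacy": datetime(2020, 6, 15),
    "mjones": datetime(2021, 3, 10),
}

DORMANCY_THRESHOLD = timedelta(days=90)  # assumed policy

def find_dormant(accounts, now=None):
    """Return account names whose last login is older than the threshold."""
    now = now or datetime.now()
    return [name for name, last_login in accounts.items()
            if now - last_login > DORMANCY_THRESHOLD]

# Flag dormant accounts for review and removal as part of host hardening.
for name in find_dormant(accounts):
    print(f"Review/disable dormant account: {name}")
```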
Hi Humbert,
I think the principle of least privilege is quite effective: the authorized user is given the minimum level of access needed to perform his or her job, which, as you said, reduces the risk of a user account being compromised and helps harden the host environment. This reduces both the security risk and the attack surface. For example, implementing least privilege can help mitigate social engineering attacks like phishing by limiting execution to only certain file types.
Hi Priyanka, you make a valid point on least-privileged access. However, least privilege is very theoretical, and most organizations have difficulty defining the criteria for what it means. For example, the high-risk systems in most organizations surround the organization’s core business (client, vendor, contract, or regulated data) and employee information. Any system with create, update, or delete access that relates to the core business typically holds the crown jewels of the organization, and least-privileged access should be focused on those crown jewels. Then there are system integrations that make the crown jewel data read-only in other connected systems. Does least privilege apply to those read-only systems? What about crown jewel data that is copied to a shared drive or a Teams site and is completely unstructured? What about systems that have been running for 40+ years, before access controls were a concept? Implementing least privilege becomes difficult when an organization attempts to apply it to a system that predates its adoption, or when the least-privilege criteria are not determined in advance of a system’s implementation.
Authority management generally refers to the security rules or policies a system sets so that users can access their authorized resources and only those resources, no more, no less. Authority management appears in almost any system that has users and passwords. Many people confuse the concepts of “user identity authentication,” “password encryption,” and “system management” with the concept of authority management.
A poor authority management system will inevitably leave loopholes for hackers to take advantage of. In much software, unauthorized data can easily be obtained through URL manipulation, SQL injection, and other techniques, and system data can even be modified or deleted, causing huge losses.
Many systems, especially those with hard-coded rules, have permission logic tightly coupled with business code and, at the same time, scattered across the system. There are bound to be many vulnerabilities, and as the system continues to be modified, the vulnerabilities gradually increase. A good system centralizes the permission logic so it can be set and analyzed by a dedicated security engine; the business logic calls the security engine to obtain the permission result rather than relying on ad hoc checks.
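As a rough illustration of that design, here is a minimal sketch of a centralized “security engine” that the business code calls for every authorization decision. The roles, resources, and function names are invented for the example; a real engine would be far richer.

```python
# Central permission table: role -> set of (resource, action) pairs.
PERMISSIONS = {
    "clerk":   {("invoices", "read")},
    "manager": {("invoices", "read"), ("invoices", "update")},
}

class PermissionDenied(Exception):
    pass

def check(role, resource, action):
    """The single authorization decision point that business logic calls."""
    if (resource, action) not in PERMISSIONS.get(role, set()):
        raise PermissionDenied(f"{role} may not {action} {resource}")

def update_invoice(role, invoice_id):
    check(role, "invoices", "update")  # delegate the decision to the engine
    print(f"invoice {invoice_id} updated")

update_invoice("manager", 42)          # allowed
try:
    update_invoice("clerk", 42)        # denied by the central engine
except PermissionDenied as e:
    print(f"denied: {e}")
```

The point of the design is that changing an authorization rule means editing one table, not hunting down scattered if-statements in the business code.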
Hi Zibai,
I see how easy it would be for even IT professionals to confuse the terms you mentioned above. Almost all of them are set or enforced by group policies, either directly or indirectly. Having proper authority management in place does, however, offer improved security benefits, since users do not have access to more than they need for their duties. Should an account be compromised, the attackers’ access to resources and zones is limited to what the compromised account can reach, which, as good practice, should always be the bare minimum.
One of the highlights of this chapter is what a security baseline is and why it is important. A security baseline is a set of configuration settings used to harden a host running a certain type of operating system. Security baselines are critical because they constitute standard guidelines for all departments. These settings are determined based on feedback from engineering teams, partners, and customers. In today’s computing era, the security landscape evolves rapidly, and security professionals and policy makers struggle to keep up with new threats and make the necessary configuration changes to mitigate them. Microsoft’s security baselines, distributed as Group Policy Object backups, are an example of a security baseline provided to customers.
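To show what checking a host against a baseline might look like, here is a small sketch that compares current settings to baseline values and reports drift. The setting names and values are made up for illustration; real baselines such as Microsoft’s cover hundreds of settings.

```python
# Illustrative baseline of expected secure configuration values.
baseline = {
    "password_min_length": 12,
    "guest_account_enabled": False,
    "smbv1_enabled": False,
}

# Hypothetical snapshot of the host's current configuration.
current = {
    "password_min_length": 8,
    "guest_account_enabled": False,
    "smbv1_enabled": True,
}

for setting, expected in baseline.items():
    actual = current.get(setting)
    status = "OK" if actual == expected else f"DRIFT (expected {expected}, got {actual})"
    print(f"{setting}: {status}")
```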
Hi Wenyao,
Setting a security baseline is also one way to help determine the level of residual risk in some instances. It helps security professionals gauge how well the system is protected relative to the minimum intended or acceptable level of security they hoped to achieve.
Hi Wenyao,
I also found these very useful. They can serve as security baselines for configuration settings. Security baselines are important because they are standard guidelines for all systems and equipment. With the changing security landscape, the challenge for security professionals is keeping up with new threats and making the necessary changes to mitigate new risks. Having a baseline as a key component is very helpful because it gives everyone a common starting point.
The one important point I noticed in this chapter is about testing for vulnerabilities. Vulnerability testing is a process of evaluating security risks in software systems to reduce the probability of threats. Even after planners and implementers put protections in place, there is still a chance that hackers will attack the system or software through its remaining vulnerabilities.
To do vulnerability testing, a security administrator installs vulnerability testing software on his or her PC and then runs it against the servers within the administrator’s realm of concern. These programs run a battery of attacks against the servers and then generate reports detailing the security vulnerabilities they found. In addition, a vulnerability testing plan, containing the detailed steps and accountability, needs to be created before testing begins. Vulnerability testing reduces the chances of intruders or hackers gaining unauthorized access to systems.
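Full scanners such as Nessus run that battery of attacks for you; as a minimal sketch of the simplest building block, here is a TCP port check across a set of in-scope servers. The addresses and ports are placeholders, and a scan like this should only ever be run against systems you are authorized to test.

```python
import socket

SERVERS = ["192.0.2.10", "192.0.2.11"]  # placeholder in-scope hosts
PORTS = [21, 22, 23, 80, 443, 3389]     # common services worth reviewing

def open_ports(host, ports, timeout=1.0):
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                found.append(port)
    return found

# Produce a simple report, analogous to the scanner reports described above.
for server in SERVERS:
    print(f"{server}: open ports {open_ports(server, PORTS)}")
```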
Hi Xinyi,
Thank you for outlining the differences between the operating systems covered in this chapter. Having fewer account types might seem riskier for the operating system; however, the more accounts there are, the more vulnerabilities there are. From the auditor’s point of view, more account classifications simply add more work when assessing the security of information assets. But, in the end, security is the main goal.
From an auditor’s perspective, this chapter is particularly insightful. It provides detailed information about the specific steps that system administrators take to protect systems and devices. It’s important to consider the applications and operating system before you plan to harden a system. System administrators should not use hardening as an independent form of protection but should combine it with a series of other steps, such as routine backups and physical access restrictions. Especially after last week’s case study, it is particularly interesting to learn that patch management is a very critical control. However, organizations face several problems in patch management; for example, when a device is running various applications and operating systems, it is hard to determine patching priority. To better manage time and resource constraints, organizations should conduct a risk assessment when considering patch management priorities. System administrators must also keep abreast of the latest vulnerabilities and corresponding patches for any applications and operating systems their organization uses.
Haozhe,
I agree. I think this is one of those chapters where almost everything is important. It covers multiple points that auditors need to keep their eyes peeled for; some points may not seem like a significant deal, but omitting them in an audit could be detrimental to the organization. In general, I also think that patch management is one of the hardest parts of the whole process to keep up with and maintain. Like you said, you don’t always know the priority, and the fact that patches pop up quickly and need to be applied on the fly does not help.
The topic that I would like to mention for this week’s reading is virtualization. Virtualization allows multiple operating systems, along with their associated applications and data, to run independently on a single physical machine. This chapter mentions the benefits of virtualization in the host hardening process. One benefit is that it allows systems administrators to create a single security baseline for each server within the organization. Subsequent instances of that server can be cloned from an existing hardened virtual machine in a few minutes, which minimizes the chance of incorrectly configuring a server and reduces configuration time. Beyond this, virtualization provides many other benefits, such as reduced costs, increased performance and availability of resources, reduced downtime, increased efficiency and productivity, and better scalability.
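As a sketch of that cloning workflow, assuming a libvirt-based environment where the virt-clone utility is installed and a hardened, powered-off baseline VM named “hardened-base” exists (both of which are assumptions, not something the chapter specifies), spinning up a new instance could look like this:

```python
import subprocess

def clone_from_baseline(new_name, baseline_vm="hardened-base"):
    """Clone a new VM from the hardened baseline image (assumed to exist)."""
    subprocess.run(
        ["virt-clone", "--original", baseline_vm,
         "--name", new_name, "--auto-clone"],
        check=True,  # raise CalledProcessError if the clone fails
    )

clone_from_baseline("web-server-02")  # hypothetical new server name
```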
Hi, Priyanka, I agree with your points on virtualization. The advantage of virtualization is that users can run multiple operating systems simultaneously, without interruption, while sharing the same local resources. To a large extent this saves the company time and money.
There are several benefits that virtualization provides. First, it makes management information systems more productive and more agile in supporting business strategy. While system administrators manage individuals or groups by creating domains within the organization, users can access applications and resources quickly. Also, an IT employee who manages virtual machines (VMs) can duplicate a server from an existing hardened virtual machine in only a few minutes. Organizations can reduce operating costs by purchasing less physical hardware. Moreover, VMs can help manage the data center effectively as a backup, using snapshots to capture updated data throughout the day. Once organizations apply VMs appropriately, they can increase the flexibility of the management information system and the performance of operations.
Good post on the use of virtual servers and how virtualization can standardize host hardening across an organization’s environments. The advantage of standardized host hardening is that patching for vulnerabilities is much easier and more efficient to complete.
One important takeaway from this chapter is the benefits of cloud computing practices, especially in an age where we rely more and more on cloud services. One benefit is cost: implementing cloud services makes it easier to buy cheaper PC clients for the office, since the software uses the cloud provider’s resources. Another benefit is reliability and disaster recovery, where cloud providers have redundant backups and disaster prevention tools in place to keep their services online and available. An important one is data loss prevention: in the event a laptop is stolen, the information on the device won’t be compromised, keeping confidentiality intact. Lastly is scalability, as consumers of a cloud service pay only for what they need and can increase capacity as needed, while cloud providers can add more equipment to keep up with demand.
Hi, Krish. Yes, many companies would like to use cloud computing. It is an advantage for many companies, especially small businesses, because it does not cost a lot to implement, depending on which cloud service they choose. When a company decides to use a cloud service, it should also have a data leakage prevention program to prevent data loss if employees use their own devices.
This chapter is very interesting to me as it covers a lot of what I do daily. We often check for vulnerabilities and can run audits against most operating systems and hosts to check for host hardening. Vulnerabilities need to be managed and patched accordingly in order to reduce the organization’s risk. Organizations have to make sure that when they patch, the patch fully mitigates the vulnerability. There are many times where I find the vulnerability still exists because the patch didn’t install completely or didn’t cover the entire exploit.
Hi Jonathan,
Patching vulnerabilities is crucial to keeping a system hardened against attacks. Unfortunately, patches sometimes don’t alleviate all vulnerabilities, or they are not installed until it’s too late, and attackers can penetrate.
This chapter provided a new perspective for me on patch management. In theory, patching seems pretty straightforward, but there is some complexity around it in a big organization. The author mentioned that in 2000 there were 1,090 vulnerabilities, and seven years later the number had increased to approximately 7,000. As the number of vulnerabilities grows, so does the number of patches released. Patches may be free from the vendor; however, the time and resources to apply them can be expensive. Patches must be tested before being applied to production environments, as there may be unintended consequences for specific features or functionality in an application. Since resources are limited and there is an overwhelming number of patches per year, patches must be prioritized based on the risk to the organization. Because of this, not all patches are applied, leaving the organization vulnerable one way or another.
It’s amazing how many vulnerabilities are reported each year, and that’s not to mention all of the zero-day vulnerabilities that exist in the wild. Vulnerabilities are inevitable, as attackers are always finding new methods to infiltrate systems.
The key takeaway that I took from this chapter is the difference between Windows and UNIX. Compared with access permissions in Windows, permissions in UNIX are limited, and this is one of the most serious problems associated with security on UNIX computers. Windows has six different permissions that can be assigned to users and groups, and if finer granularity is needed, it has 13 specialized permissions to assign. UNIX has only three: read (read-only), write (make changes), and execute (for programs). For a file or directory, the different Windows permissions can be assigned to any number of individual accounts and groups; for instance, different members and subgroups within a team can be given different access permissions. UNIX, however, can only assign permissions to three entities: the account that owns the file or directory, a single group associated with it, and everyone else. There is no way to assign different permissions to multiple accounts or groups.
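A small sketch makes the three-entity UNIX model concrete: the permission bits exist only for the owning user, one group, and everyone else. The file name here is hypothetical.

```python
import os
import stat

path = "report.txt"      # hypothetical file
open(path, "w").close()  # create it so the demo has something to chmod

# Owner: read+write; group: read; others: nothing (octal 0o640).
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

mode = os.stat(path).st_mode
print(oct(mode & 0o777))  # -> 0o640; there is no slot for a second group
```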
Windows machines are notoriously vulnerable, but I feel that has more to do with the number of Windows users compared to UNIX users. As more organizations move toward UNIX, we’ll start to see more vulnerabilities and issues related to UNIX.
What I learned from this reading is that the definition of hardening seems to differ between the readings. Boyle and Panko define the elements of host hardening as: back up the host regularly; restrict physical access; install the operating system with secure configuration options; minimize the number of applications and services running on the server; harden the applications running on the host; download and install patches for OS vulnerabilities; manage users and groups; manage access permissions for users and groups; encrypt data if appropriate; add a host firewall; read OS logs regularly to detect anomalies; and run vulnerability tests regularly. These elements, while addressed in the other reading, are handled a little differently.
Host hardening is one of the more important aspects of information security and can be overlooked quite frequently. This chapter goes into many of the aspects that involve host hardening, such as patch management. Patching is often overlooked because patches can have adverse effects on older servers or the legacy software running on them. Many times, system admins will choose to ignore important patches that disrupt legacy software and hope that the infrastructure’s defense in depth will mitigate any vulnerabilities. Other times, smaller organizations without staff experienced enough to be aware of current cybersecurity risks in the wild will leave servers unpatched and exposed to the internet. Just recently, Microsoft issued an emergency patch for an Exchange server zero-day; as of today, March 14, there are 125,000 servers that remain unpatched. Patching can often be a nuisance, time consuming, and frustrating. However, it is a necessary duty that all administrators should put high on their priority lists. A couple of long days is a much better option than a compromised environment.
Hi Anthony, great response. I do agree that updated antivirus and anti-malware systems provide protection against known malware. Many schools, enterprises, and businesses have organization-wide policies with mandatory antivirus installation requirements; this applies especially if the organization allows bring-your-own-device. That said, it’s not just antivirus that will protect a user from attacks; I think cybersecurity awareness among users helps boost the effectiveness of all the other controls.
The chapter did a great job of introducing the concept of virtualization through an analogy to different kinds of property. A stand-alone personal computer is a “bachelor pad”: one operating system running on one physical computer. The single-family home is one host, the physical computer, running many different operating systems; an example would be a MacBook Pro allowing you to run both Mac OS and Windows 7. The hotel is multiple physical servers hosting hundreds of VMs at the same time; it can expand by adding more physical servers to accommodate the hosted VMs. This is great for backups and failures: if the machine hosting a VM experiences a hardware failure, the VM can automatically be transferred to another physical machine. Virtualization has many benefits because a single security baseline can protect all servers, remote or local, within the organization. Hardened systems can also be easily cloned, and unused physical servers can be shut down so that only needed servers run.
Patch management is the process of distributing and applying updates to software. These patches are often necessary to correct errors. Criminal hackers can take advantage of known vulnerabilities in operating systems and third-party software. It is good to apply patches in a timely manner, but unless there is an imminent threat, don’t rush to deploy a patch until there is an opportunity to see what effect it has had elsewhere in similar software user communities. A good rule of thumb is to apply patches within 30 days of their release.
Patch management is very important for the organization because it relates directly to the security of the business and its software. If patches are not kept up to date, or the software company no longer supports the product, there will be a vulnerability.
I found it extremely interesting that patch management is actually incredibly complex. The sheer volume of patches that need to be assessed is ridiculous. At my office, any time a patch comes out for our software, we have to download it into our development environment. Our developers ensure that the patch doesn’t include anything that will break our custom code; then it has to pass QA, and finally it can be vetted by us before it goes out to our clients’ UAT environments. As you can imagine, the time and resources expended add up to 20K-plus from the time we receive a patch to the time it is deployed. Keep in mind that for our current version we are at patch 16!
Virtualization allows multiple operating systems, with their associated applications and data, to run independently on a single physical machine. These virtual machines run their own operating systems and share local system resources. The advantage of virtualization is that you can use multiple operating systems, run them at the same time without any disruption, and share the same local resources. It is interesting because an organization could utilize different operating systems to complete a task or complex project faster, which saves time and money. In addition to being more secure, virtual environments can also benefit a business by reducing labor costs associated with server administration, development, testing, and training. They can also reduce utility expenses by shutting down unused physical servers, and they increase fault tolerance and availability. You will need hardware capable of handling virtualization, however, or the experience will not be smooth.
One of the most interesting subjects in this reading was password cracking techniques. There are many tools that allow attackers to crack passwords. Out of all the cracking methods, I found the brute-force technique the most interesting. Although it is the most obvious approach, simply trying every possible password, it is interesting because it depends on computing power: the more computing power available, the quicker passwords can be broken. Here are the possible combinations depending on the character length of the password:
– Try all possible passwords of length 1, then length 2, and so on.
– Brute force is thwarted by passwords that are long and complex (using all keyboard characters).
– With N as the password length in characters:
– Alphabet, no case: 26^N possible passwords
– Alphabet, upper- and lowercase: 52^N
– Alphanumeric (letters and digits): 62^N
– All keyboard characters: ~80^N
Combined with complexity, password length is very powerful, as the sketch below illustrates.
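Here is a small sketch of that arithmetic; the guess rate is an assumed figure purely for illustration, since real cracking speeds depend heavily on the hash and the hardware.

```python
GUESSES_PER_SECOND = 1e10  # assumed attacker capability

alphabets = {
    "lowercase only (26)": 26,
    "upper + lower (52)": 52,
    "alphanumeric (62)": 62,
    "all keyboard (~80)": 80,
}

for label, size in alphabets.items():
    for n in (8, 12):
        keyspace = size ** n  # size^N possible passwords
        years = keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)
        print(f"{label}, length {n}: {keyspace:.2e} passwords, "
              f"~{years:.2e} years to exhaust")
```

Going from 8 lowercase characters to 12 mixed keyboard characters moves the search from seconds to hundreds of thousands of years, which is why length plus complexity is so powerful.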
Managing permissions is, for me, the most important section. There is so much talk about human error being the reason a tightly knit security system is attacked successfully; therefore, granting the minimum and making sure permissions are in order and updated as needed is a good way to mitigate human-error risk. Windows makes it very straightforward to set permissions, whether by individual or by group. Without thorough review of permissions, you may end up with old accounts that are still active or employees with the wrong level of access. Reviewing permissions is how to prevent such mistakes from happening.
Good post. I agree that Windows makes it easy to set permissions. I think it gets more complex when you are managing hundreds of thousands of accounts across thousands of servers. For high-volume environments it is important to have a tool set that discovers and manages the inventory of accounts and the permissions those accounts need, as attempting to manage them directly in native Windows or Active Directory becomes untenable.
Having recently been the victim of a notebook theft, I found the section on protecting notebooks critical and highly relevant to my situation. I, fortunately, had followed a lot of the recommendations outlined in that section of the hardening chapter.
– Regular backups of data – I use cloud backups and external drives that remain in a secured location. I didn’t lose any data in the theft of my laptop.
– Strong passwords – My password configuration was set to require special characters, alphanumeric characters, mixed case, and a minimum of 26 characters (see the sketch after this list).
– Anti-theft tracking software – Though my computer was not recovered, I was able to have it completely wiped once it connected to any Wi-Fi or internet provider. The LoJack of laptops.
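As an illustration of enforcing a policy like the one described above, here is a minimal sketch of a checker. The rules mirror the post (26-character minimum, mixed case, digits, special characters), and the sample passwords are made up.

```python
import string

def meets_policy(pw, min_length=26):
    """Check a password against the policy described above."""
    return (len(pw) >= min_length
            and any(c.islower() for c in pw)
            and any(c.isupper() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in string.punctuation for c in pw))

print(meets_policy("Tr0ub4dor&3"))                         # False: too short
print(meets_policy("correct-Horse-battery-staple-2021!"))  # True
```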
Hi Vanessa, I’m so sorry that happened to you! Hopefully there weren’t big losses incurred. These tips are helpful for hardening a system. I utilize iCloud backup as well; it’s linked to my Apple account, and as long as it’s turned on, I can expect regular backups to occur on my Apple devices. Strong passwords and MFA are another way we can harden our systems and protect our passwords. I hadn’t heard of anti-theft tracking software, but I’m glad you had that option in place to protect your personal information in case the thief does manage to unlock your notebook.
The world may be reachable from your living room today thanks to immense digitization, but that exposes you to a range of threats from the online world. The way to evade them and protect all your services and applications is to learn the best security practices. If online safety features high on your list of priorities, then one of the basics you must know is how to create strong passwords. Experts compiled the worst passwords of the year for both 2017 and 2018 in an effort to help consumers avoid potential hacks. Now, researchers from the University of Plymouth want to warn consumers about their passwords as we head into 2020. As cybersecurity threats continue to loom, it’s important for consumers to be diligent and creative when setting up new passwords. But despite many sites offering meters that gauge the strength of new passwords, these tools aren’t always accurate; in fact, the researchers say they can actually make consumers more vulnerable to cyber attacks.
My key takeaway from this chapter is patching and the issues that come with it. While it is understood that keeping up to date with patches is a requirement in the hardening of any system, it is easier said than done for a variety of reasons.
– # of patches – Keeping track of each vendor and its latest patch releases is cumbersome. Patch management software can help mitigate this risk.
– Cost – Patch installation has inherent costs: labor, research, and installation time. With how often patches need to be reviewed, installed, tested, and deployed to a live environment, it can be very expensive.
– Prioritization – Given the inherent cost and the sheer volume of patches, there must be a way to prioritize which patches are critical, which are important, and which are nice to have (see the sketch after this list).
– Patch Management – Using these types of services can help mitigate the risk of missing a patch as well as the aforementioned inherent costs. The tools come at a price, but they are an option.
– Installation Risks – There is always a risk that a patch may introduce a bug, cause delays, or break custom code.
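Tying the cost and prioritization points together, here is a minimal sketch of risk-based patch ranking. The patch IDs, CVSS scores, and asset-criticality weights are invented for illustration; a real program would pull these from a vulnerability feed and an asset inventory.

```python
# Hypothetical patch backlog with severity and affected-asset criticality.
patches = [
    {"id": "KB-001", "cvss": 9.8, "asset_criticality": 3},  # internet-facing
    {"id": "KB-002", "cvss": 5.4, "asset_criticality": 1},  # internal tool
    {"id": "KB-003", "cvss": 7.5, "asset_criticality": 2},
]

def risk_score(patch):
    # Weight vulnerability severity by how critical the affected asset is.
    return patch["cvss"] * patch["asset_criticality"]

# Apply the highest-risk patches first.
for p in sorted(patches, key=risk_score, reverse=True):
    print(f"{p['id']}: risk score {risk_score(p):.1f}")
```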