Patching servers and hosts requires testing to ensure the systems receiving the patches continue to function. What would be a reasonable timeframe for organizations to fully implement a specific patch? What about zero-day patches?
Zero-day patches in particular should be prioritized for implementation within a few days of publication. This is because exploits for a zero-day vulnerability are typically reverse-engineered within days of patch release (if an exploit isn’t already in the wild), so an organization is accepting heightened risk if it doesn’t patch quickly. Other patches should be implemented in timeframes relative to the level of vulnerability and the resources required to patch. If the vulnerability puts the enterprise at high risk, or if the patching process is low-impact (wouldn’t cause any downtime, for example), then it should be implemented quickly.
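One hypothetical way to express these relative timeframes is a severity-based SLA table. The categories and day counts below are illustrative assumptions, not a standard:

```python
from datetime import date, timedelta

# Illustrative patch SLAs (in days) by severity -- example values only,
# not a mandated policy. Zero-days get the shortest window, per the
# reasoning above.
PATCH_SLA_DAYS = {
    "zero-day": 3,
    "critical": 14,
    "high": 30,
    "medium": 90,
    "low": 180,
}

def patch_deadline(published: date, severity: str) -> date:
    """Return the date by which a patch of the given severity should be fully deployed."""
    return published + timedelta(days=PATCH_SLA_DAYS[severity])

# Example: a zero-day patch published March 10 should be deployed by March 13.
deadline = patch_deadline(date(2021, 3, 10), "zero-day")
```

An organization would of course tune these windows to its own risk tolerance and change-management constraints.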
I believe both are equally important. Poor planning can lead to unnecessary vulnerabilities and threats and lack of maintenance can also lead to a host of issues. While maintenance will catch any issues missed in planning, both are necessary.
I think both are equally important, but if I had to pick one, I would say planning is slightly more important. Without a good plan, the organization will be at extreme risk. One of the benefits of planning is to help ensure the organization is ready for any issues that may arise. With a solid plan, you may be able to avoid maintenance in the future that you would have had to do if there were no plan. But for an organization to be successful, both need to be taken equally seriously.
I believe maintenance is more important because if all systems, networks, and servers are properly kept up with patches and updates, it would minimize failures and the need for backup plans or replacements. It is much cheaper to keep strengthening your current systems than to constantly replace or fix them.
That is a tough choice! I have to say planning is more important, based on the NIST Guide to General Server Security. The guide notes that planning prior to installation, configuration, and deployment is the most important aspect of deploying a secure server, as it leads to stronger security and compliance with company policies and procedures. It is easier to identify and architect security upfront than to try to layer it in afterward, in addition to being more cost-effective. Careful planning also allows for well-thought-out, rational decision making, rather than operating in emergency mode when a vulnerability is exposed by a poorly planned design.
I would say both are equally important because an organization can spend a lot of time carefully planning their security architecture for the network and performing risk assessments on servers to determine which mitigations are most cost-effective to implement. However, the threat environment is constantly changing and new vulnerabilities are discovered, so maintenance is essential to keep the network and systems secure.
It might be because applying patches is a time-consuming process. Teams need to avoid downtime and make sure applying a patch to a system does not break its compatibility with any interdependent systems. Complexity also increases when patching spans mixed platforms such as cloud, SaaS, and on-prem environments.
The first issue is the sheer number of patches generated on an annual basis; this can be difficult for an organization to maintain considering how often patches are required. In addition, not all patches can be uninstalled once implemented. If a patch is not tested before being deployed more broadly throughout the organization, the impact to the system is unknown, which can result in frozen or slow machines. This extra element of testing the patch also adds resource and time constraints. When patches are installed, there is also an associated cost-benefit element, in that the extra security measure usually comes with slower machines or reduced functionality; therefore it is important to understand what the patch is “fixing” to determine if it is worth it.
The most common reason for not applying patches is fear. Applying patches requires stopping and then restarting the system. Applying a patch can result in some applications no longer working or functioning properly. Also, when companies are managing their OS patching, third-party application vulnerabilities are too often overlooked completely, leaving security holes on every endpoint.
I think most of the difficulty comes from software and operating system compatibility. Many organizations are afraid that new patches could interfere with the availability principle of data, so they would rather wait for the next patch to be released. They may also have the mindset that since they don’t have any issue currently, “why should I install patches that may cause new issues?” It could also be a case of ignorance and IT mismanagement, or simply nobody being proactive about keeping systems updated.
My company does have a patching procedure, and it is enforced. There is a small team dedicated to patch management, and we have software in place that scans our environment almost constantly to determine what assets (including operating systems and applications) are in our environment. The software helps manage the Microsoft releases issued on Patch Tuesday and also scans externally for third-party application patches, comparing what is in our environment to the patches that have been released. From there, we have a procedure for prioritizing and then testing and deploying patches. We have internal reporting that is monitored by our security team to ensure patches are deployed in a timely manner per our procedures. We have quarterly vulnerability scans that provide an independent assessment of how well we are managing patches. And then audit evaluates the process annually. But I would say there are a couple of layers of review and validation before it gets to audit, between the security team and the independent vulnerability assessment.
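The comparison step described above — matching what is installed in the environment against what vendors have released — can be sketched as a simple version check. The application names and version strings below are illustrative, not real scan output:

```python
# Hypothetical sketch: flag assets whose installed version lags the latest
# patched version published by the vendor. Inventory and release data are
# assumed example values, not a real scanner's API.

def missing_patches(inventory: dict, latest: dict) -> dict:
    """Map application -> (installed, latest) for out-of-date entries.

    Versions are compared as tuples of integers, so "1.2.10" > "1.2.9".
    """
    def parse(v):
        return tuple(int(part) for part in v.split("."))

    return {
        app: (installed, latest[app])
        for app, installed in inventory.items()
        if app in latest and parse(installed) < parse(latest[app])
    }

scanned = {"firefox": "86.0.1", "7zip": "19.0", "java": "8.281"}
released = {"firefox": "87.0", "7zip": "19.0", "java": "8.291"}
outdated = missing_patches(scanned, released)  # firefox and java lag behind
```

A real tool would add CVE metadata and severity to each gap so the prioritization procedure can act on it.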
What is the most secure server operating system currently available to businesses, and why? Also, why do some IT professionals believe that Apple’s OS is more secure than Windows?
In my opinion, Linux is the most secure OS, as its source is open. Anyone can review it and make sure there are no bugs or back doors. With that much oversight, there are fewer vulnerabilities. I think IT professionals believe that Apple’s OS is more secure than Windows because macOS is based on Unix, which is generally more difficult to exploit than Windows.
My audit portfolio includes an application designated as critical throughout our organization by our national IT department. The application is backed up daily to two different servers: a national server and a contingency server. The contingency server is tested several times a year to ensure correct failover.
Nicholas Fabrizio says
Explain the advantages of how virtualization can be used in host hardening and if there are any disadvantages you can think of?
Charlie Corrao says
What are some of the main differences between a Windows OS and a UNIX OS?
Christopher Clayton says
Unix is a command-line interface (CLI) operating system (text commands in a terminal), while Windows is a graphical user interface (GUI) operating system (you interact by selecting objects on screen, e.g., buttons, icons, and menus). Also, Windows can be configured to install updates automatically to enhance security; with Unix, you typically must install such updates manually.
To-Yin Cheng says
The UNIX operating system is an open-source system that allows users to access and modify the code for their needs. The Windows operating system is not open source; it is a closed, proprietary system. It is a more commercial, user-friendly product.
Christa Giordano says
Which security policies do you think are most critical to implement for an organization and why?
Quynh Nguyen says
I definitely believe role-based access control is the most important security policy to implement. Without it, not only is the organization susceptible to hackers and malicious attacks externally, it is also at risk internally from rogue employees who realize they have access to high-profile files or PII.
Christopher Clayton says
I would say the following security policies are most critical:
Response to Incidents – in case there is a security breach, appropriate policies and measures are taken to handle the matter
Managing Patches – policies on implementing code to eliminate vulnerabilities is important to help protect against threats
Vulnerability scanning – since hackers can scan for vulnerabilities in a short period, a company should have a system for checking its own networks on a regular basis.
System and Data Security – everything from the servers down to the operating system is vital to security policy; that is why policies are needed for what runs on the company’s networks and for managing its accounts and passwords.
Panayiotis Laskaridis says
Access Control and Employee Training. Employees are the first line of defense so it’s important that you not only have the right employees on the front line, but also make sure they are properly trained.
Mitchell Dulaney says
What factors go into an organization’s decision to implement or not implement a security patch for an application? Should an organization implement all patches as quickly as possible? Why or why not?
Lakshmi Surujnauth says
Organizations would typically have to consider factors such as cost, resource availability, reduced functionality, etc. While patches are free, it takes labor to learn about, download, and install them – these costs can quickly add up given the number of patches released each year. Patch management can also be slow, and patches can cause machines to freeze or do other damage. Given these constraints, an organization should sort patches by priority and implement patches that fix its most critical vulnerabilities first. It can then work through the priority list in light of the constraints above to determine which patches will be implemented.
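The sort-by-priority approach described above can be sketched in a few lines. The CVSS-style scores and the asset-criticality weighting below are assumptions for illustration, not a prescribed formula:

```python
# Illustrative sketch of sorting a patch backlog by priority.
# Patch IDs, scores, and the 1.5x weight for critical assets are all
# assumed example values.

patches = [
    {"id": "KB-101", "cvss": 9.8, "asset_critical": True},
    {"id": "KB-102", "cvss": 5.4, "asset_critical": False},
    {"id": "KB-103", "cvss": 7.2, "asset_critical": True},
    {"id": "KB-104", "cvss": 8.1, "asset_critical": False},
]

def priority(patch: dict) -> float:
    # Weight vulnerabilities on critical assets more heavily.
    weight = 1.5 if patch["asset_critical"] else 1.0
    return patch["cvss"] * weight

# Highest-priority patches first; implement from the top as resources allow.
backlog = sorted(patches, key=priority, reverse=True)
```

In practice the weighting would also fold in exploit availability, exposure, and the downtime cost of patching.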
Ashleigh Williams says
Some of the factors that determine an organization’s decision are whether the patch relates to a function the organization uses and whether the vulnerability is material. An organization should not implement all patches as quickly as possible; each patch should be reviewed case by case.
Mitchell Dulaney says
“Microsoft Exchange Exploits Pave a Ransomware Path”
Threatpost reports on a new variety of ransomware, called “DearCry”, which has exploded in the last few days and has appeared especially frequently on Microsoft Exchange servers. This comes in the wake of public disclosure of a series of four vulnerabilities that, in combination, allow remote administrative pre-authentication access to an Exchange server – in other words, no credentials whatsoever are required to gain the remote administrative rights. These vulnerabilities are related to the massive vulnerability now known as ProxyLogon which came to light in late 2020.
After gaining control of the Exchange server and exhausting its usefulness, the attackers encrypt it using the DearCry ransomware and hold it ransom for $16,000. This exploit has targeted in particular government and military organizations, and manufacturing and banking businesses. A patch has already been released by Microsoft and researchers urge all organizations with Exchange mail servers to install the patch as soon as possible.
https://threatpost.com/microsoft-exchange-exploits-ransomware/164719/
Mitchell Dulaney says
This was intended for the “In the News” post. Please disregard!
To-Yin Cheng says
Which fixes are more important to fix the vulnerabilities? Work-arounds, patches, service packs, or upgrading to a new version of the program?
Lakshmi Surujnauth says
Upgrading to a new version of the program is usually the best option when it comes to fixing vulnerabilities. Typically, security issues are corrected and there is improved security in new versions of the software. It should also be noted that vendors will stop creating fixes for older versions of software.
Xiduo Liu says
Some of the options you provided can ultimately address the same issue. Workarounds, patches, service packs, and upgrades can all address vulnerabilities, but at the end of the day organizations have to weigh the cost against the benefit; patches and service packs almost always have a smaller financial impact on an organization.
Lakshmi Surujnauth says
Does BYOD impact the security team’s approach to host hardening?
Jonathan Mettus says
BYOD needs to have a big impact on how security teams address host hardening. It’s harder to secure devices that are not under the organization’s control. In a worst-case scenario, those are devices you have no control over that you are allowing into your internal network to access sensitive data. A lot of organizations take advantage of mobile device management (MDM). My company, for example, uses an MDM for anyone who wants to so much as access company emails on their personal phones. The MDM is configured to ensure that my phone is properly hardened. It requires that my phone is encrypted, has a PIN that is changed every 3 months, is set to screen lock after 1 minute, etc. If I change any of those settings, the MDM blocks my access to my company applications and emails.
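The kind of posture check an MDM enforces before allowing a personal device to reach company email could be sketched like this. The policy values mirror the ones described above; the field names are assumptions, not a real MDM API:

```python
# Hypothetical MDM-style compliance check for a BYOD device.
# Thresholds match the policy described above: encryption required,
# PIN rotated every 90 days, screen lock within 60 seconds.

MAX_PIN_AGE_DAYS = 90
MAX_SCREEN_LOCK_SECONDS = 60

def is_compliant(device: dict) -> bool:
    """True only if every hardening requirement is met; otherwise the
    MDM would block access to company applications and email."""
    return (
        device["encrypted"]
        and device["pin_age_days"] <= MAX_PIN_AGE_DAYS
        and device["screen_lock_seconds"] <= MAX_SCREEN_LOCK_SECONDS
    )

phone = {"encrypted": True, "pin_age_days": 30, "screen_lock_seconds": 60}
tampered = {"encrypted": True, "pin_age_days": 120, "screen_lock_seconds": 60}
```

A real MDM evaluates these settings continuously on the device, not from a submitted dictionary, but the allow/block decision follows the same logic.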
Xiduo Liu says
BYOD will only get more popular as time goes on; the real solution to this “security threat” is to adopt additional security models such as zero trust. As we see today, COVID has only accelerated the transition to a remote workforce working from anywhere, on any device. The traditional security model is no longer fit for BYOD, nor is it sufficient for the remote workforce.
Wei Liu says
The challenge can be particularly troublesome for an enterprise trying to implement the Secure Configurations for Hardware and Software on Mobile Devices and Laptops. Since each make and model of laptop has different traits, having multiple models requires the IT staff to understand these differences so they can service the laptops more effectively. The heterogeneity of BYOD makes the host hardening more complex.
Christopher Clayton says
When it comes to patching vulnerabilities, what are some of the main reasons why companies may have difficulty with this process?
Megan Hall says
Christopher, one of the things I’ve seen that can be challenging with patching vulnerabilities is for third-party applications. Without a tool that can help with the identification and deployment of third-party application patches, it can be extremely time intensive to keep up with the identification of third-party application patches, and also challenging to test, deploy, and troubleshoot when things do not work as expected. Unlike Microsoft patches, it can be hard to keep up with the timing of these patches as they are released, and depending on the environment, there may be a significant volume of different applications that need to be patched. As far as challenges with patching in general, I think it can be challenging when there are other priorities, or if an IT department is spending too much time in a break-fix mode without dedicated resources to manage patching, it can become harder to focus on patching as a preventive defense.
Jonathan Mettus says
Patching can be a dreaded process for a lot of organizations. It requires a strong change management process and is more effective when implemented on regular schedules.
One issue is that patches can break things. Not everyone’s environment is the same. Organizations make tweaks and have different connections here and there. Patches need to be tested before implementation. Sometimes it’s hard to find a way to apply a patch so that it won’t break the current system.
Another issue is the number of patches. Microsoft has Patch Tuesday. Manufacturers and developers put out thousands of patches each year. Companies run many different applications and kinds of hardware. It can be hard to keep up.
Additionally, patching takes time. There has to be some human effort involved in applying patches, though it can be minimized. Patches can also cause system downtime. Ideally, there is redundancy built in so that one server can take over while another is patched, but this is not always the case.
Mitchell Dulaney says
Companies can have difficulties with patch management for a few different reasons. One is that the number of new patches being released daily is consistently rising. This, combined with the sheer number of applications a typical organization relies on, means the volume of patches to be noted, evaluated, tested, and implemented might overwhelm even a moderately-sized system administration team. Another problem is the trade-off between potential system downtime and increased security. While some vulnerabilities can pose a huge threat to security, many administrators face overbearing expectations on system uptime. The required downtime to install a patch (or worse, the prospect of a system failure caused by a patch) can push administrators to avoid patch implementation for longer than is advisable.
Megan Hall says
What are some examples of Windows Group Policy that you’ve seen in place in your work and/or school environments that you find important for improving security?
Xiduo Liu says
Screen lock after a specific idle time, GPOs enforcing firewall configurations, and account lockout policies are some examples addressing different aspects of security.
Nicholas Fabrizio says
A few group policies I think could improve security are limiting access to the command prompt, disallowing removable media drives, and restricting software installation.
To-Yin Cheng says
• Do not allow USB flash drives from external sources to be inserted into company machines.
• Set a password requirement with a certain length and complexity.
• Apply system patches and software updates within a set period.
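As an illustrative sketch, the length-and-complexity requirement in the second bullet could be checked like this. The specific thresholds (12 characters, four character classes) are assumed example values, not a mandated standard:

```python
import re

# Illustrative password policy check: minimum length plus at least one
# lowercase letter, uppercase letter, digit, and special character.
# The thresholds are example values a Group Policy might enforce.

MIN_LENGTH = 12

def meets_policy(password: str) -> bool:
    return (
        len(password) >= MIN_LENGTH
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )
```

In a Windows domain the equivalent rules would be set centrally via the password policy GPO rather than in application code.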
Quynh Nguyen says
At my old company, they were very good at limiting internet usage for the company’s security. I found many of the policies pretty important:
1. No modifications to any settings, all settings require administrative permissions
2. Network settings blocking all cloud platforms such as Google Drive, Box, etc.
3. Policies not allowing personal computers, tablets, portable USBs to be plugged in
4. Not allowing employees to upload anything to websites
5. Blocking social media platforms such as Instagram, Facebook, explicit sites, etc.
Charlie Corrao says
At my company, we have two main Windows group policies that improve our security. First, admin privileges are very difficult to obtain and are only granted under extremely strict circumstances. The other is forced updates. We are given a time frame within which we must install Windows updates (usually a 24-hour window). If we do not update by then, the update is forced, no matter what you are doing.
Wei Liu says
What role does Network Access Control (NAC) play in PC security management?
Christopher Clayton says
Network access control (NAC) supports network visibility and access management through policy enforcement on the devices and users of corporate networks. Now that organizations must account for mobile devices accessing their networks and the security risks those devices bring, having tools that provide visibility, access control, and compliance enforcement is critical to strengthening a network security structure. NAC allows authorized devices to connect to the network securely and blocks devices that are unauthorized.
Lakshmi Surujnauth says
NAC focuses on controlling initial access to the network and aims to reduce the danger created by computers with malware connecting to it. To this end, NAC screens the PC client to ensure that it has automatic updates enabled, an up-to-date antivirus program, and so on. Once the PC passes this screening, it is granted access to the network.
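The screening flow Lakshmi describes amounts to a posture check before access is granted. A minimal sketch, assuming a hypothetical set of posture attributes that a NAC agent might report:

```python
from dataclasses import dataclass

@dataclass
class ClientPosture:
    # Hypothetical health attributes reported by a NAC agent
    auto_update_enabled: bool
    av_signatures_current: bool
    firewall_enabled: bool

def network_access_decision(posture: ClientPosture) -> str:
    # Grant full access only if every health check passes;
    # otherwise place the client in a remediation network.
    healthy = all([posture.auto_update_enabled,
                   posture.av_signatures_current,
                   posture.firewall_enabled])
    return "access-granted" if healthy else "quarantine-vlan"
```

Real NAC products evaluate richer policies per device type and user role, but the gatekeeping decision follows this pass/quarantine shape.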
Jonathan Mettus says
How often should an organization review and update its secure baseline configurations for things like workstations, servers, and network devices?
Mitchell Dulaney says
I think baseline security configurations should be reviewed at minimum annually by the department responsible for those devices, and the information security team should provide guidance and review any changes made. An effective information security team would be monitoring developments in their field throughout the year and should be recommending changes as needed anyway. The annual review would consist of the responsible departments evaluating their own needs and potential changes, and the information security team and upper management would confirm their recommendations have been followed and requirements from updated policies are being met by the various departments.
Nicholas Fabrizio says
I believe the security baseline should be reviewed annually as well. This will allow the team creating the baseline to review and take into consideration any new vulnerabilities and threats that may have been discovered over the course of a year. This way the organization can reevaluate their risks and determine if new mitigation requirements need to be added into the security baseline.
Xiduo Liu says
Patching servers and hosts requires testing to ensure that the systems receiving the patches continue to function. What would be a reasonable timeframe for organizations to fully implement a specific patch? What about zero-day patches?
Mitchell Dulaney says
Zero-day patches in particular should be prioritized for implementation within a few days of publication. Exploits for a zero-day vulnerability are typically reverse-engineered within days of the patch’s release (if an exploit isn’t already in the wild), so an organization is accepting heightened risk if it doesn’t patch quickly. Other patches should be implemented on timeframes relative to the level of vulnerability and the resources required to patch. If the vulnerability puts the enterprise at high risk, or if the patching process is low-impact (wouldn’t cause any downtime, for example), then the patch should be implemented quickly.
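Mitchell’s triage logic, weighing severity against deployment impact, could be sketched as a deadline calculator. The specific thresholds and day counts below are illustrative assumptions, not an established standard:

```python
def patch_deadline_days(is_zero_day: bool, cvss_score: float,
                        causes_downtime: bool) -> int:
    # Illustrative triage: zero-days are patched within days of release;
    # other patches are scheduled by severity and deployment impact.
    if is_zero_day:
        return 3                   # assumed fast-track window
    if cvss_score >= 7.0:          # high/critical severity (CVSS scale)
        return 7
    if not causes_downtime:        # low-impact deployment, patch soon
        return 14
    return 30                      # fold into routine maintenance
```

A real program would also factor in asset criticality, exploit availability, and change-control windows, but the ranking idea is the same.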
Panayiotis Laskaridis says
What do you think is more important, planning or maintenance? Could one make up for the other?
Ashleigh Williams says
I believe both are equally important. Poor planning can lead to unnecessary vulnerabilities and threats and lack of maintenance can also lead to a host of issues. While maintenance will catch any issues missed in planning, both are necessary.
Charlie Corrao says
I think both are equally important, but if I had to pick one, I would say planning is slightly more important. Without a good plan, the organization will be at extreme risk. One of the benefits of planning is helping ensure the organization is ready for any issues that may arise. With a solid plan, you may be able to avoid maintenance you would have had to do if there were no plan. But for an organization to be successful, both need to be taken equally seriously.
Quynh Nguyen says
I believe maintenance is more important because if all systems, networks, and servers are properly kept up with patches and updates, it would minimize failures and the need for backup plans or replacements. It is much cheaper to keep strengthening your current systems than to constantly replace or fix them.
Christa Giordano says
That is a tough choice! I have to say planning is more important, based on the NIST Guide to General Server Security. The guide notes that planning prior to installation, configuration, and deployment is the most important aspect of deploying a secure server, as it leads to stronger security and compliance with company policies and procedures. It is easier and more cost-effective to identify and architect security upfront than to try to layer it in afterward. Careful planning also allows for well-thought-out, rational decision-making rather than operating in emergency mode when a vulnerability is exposed by poor planning.
Nicholas Fabrizio says
I would say both are equally important because an organization can spend a lot of time carefully planning their security architecture for the network and performing risk assessments on servers to determine which mitigations are most cost-effective to implement. However, the threat environment is constantly changing and new vulnerabilities are discovered, so maintenance is essential to keep the network and systems secure.
Quynh Nguyen says
Why do firms have a hard time applying patches?
To-Yin Cheng says
It might be because applying patches is a time-consuming process. It needs to avoid downtime and ensure that applying a patch to a system does not break its compatibility with any interdependent systems. The complexity also increases when patches apply across mixed platforms such as cloud, SaaS, and on-prem environments.
Christa Giordano says
The first issue is the sheer number of patches generated on an annual basis; this can be difficult for an organization to keep up with considering how often patches are required. In addition, not all patches can be uninstalled once implemented. If a patch is not tested before being deployed broadly throughout the organization, its impact on the system is unknown, which can result in frozen or slow machines. This extra element of testing the patch also adds resource and time constraints. Installing patches also has a cost-benefit element, in that the extra security measure usually comes with slower machines or reduced functionality; therefore it is important to understand what the patch is “fixing” to determine whether it is worth it.
Wei Liu says
The most common reason for not applying patches is fear. Applying patches requires stopping and then restarting the system, and a patch can result in some applications no longer working or functioning properly. Also, when companies manage their OS patching, third-party application vulnerabilities are too often overlooked completely, leaving security holes on every endpoint.
Elias Harake says
Hi Quynh,
I think most of the difficulty comes from software and operating system compatibility. Many organizations are afraid that new patches could interfere with the availability principle of data, so they would rather wait for the next patch to be released. They may also have the mindset that, since they don’t currently have any issues, “why should I install patches that may cause new ones?” It could also be a case of ignorance and IT mismanagement, or simply of no one being proactively concerned with keeping the system updated.
Michael Doherty says
Does your company have a patching procedure? Is it enforced? How is it guaranteed to be completed? Does the audit team review it?
Megan Hall says
My company does have a patching procedure, and it is enforced. There is a small team dedicated to patch management, and we have software in place that scans our environment almost constantly to determine what assets (including operating systems and applications) are in it. The software helps manage the Microsoft releases issued on Patch Tuesday and also scans externally for third-party application patches, comparing what is in our environment to the patches that have been released. From there, we have a procedure for prioritizing, testing, and deploying patches. We have internal reporting that is monitored by our security team to ensure patches are deployed in a timely manner per our procedures, and quarterly vulnerability scans provide an independent assessment of how well we are managing patches. Audit then evaluates the process annually, but I would say there are a couple of layers of review and validation before it gets to audit, between the security team and the independent vulnerability assessment.
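The compare step Megan describes, matching the environment’s inventory against vendor releases, reduces to a set difference. A minimal sketch; the KB-style patch identifiers are hypothetical examples:

```python
def missing_patches(installed: set[str], released: set[str]) -> set[str]:
    # Patches released by vendors but not yet present in the environment
    return released - installed

# Hypothetical patch IDs for illustration only
installed = {"KB5001330", "KB5003173"}
released = {"KB5001330", "KB5003173", "KB5004945"}
```

Commercial patch-management tools perform this comparison per asset and per application, but the gap they report is exactly this difference.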
Elias Harake says
What is the most secure server operating system currently available to businesses, and why? Also, why do some IT professionals believe that Apple’s OS is more secure than Windows?
Wei Liu says
In my opinion, Linux is the most secure OS because its source is open. Anyone can review it and make sure there are no bugs or back doors; with that much oversight, there are fewer vulnerabilities. I think IT professionals believe Apple’s OS is more secure than Windows because macOS is based on Unix, which is generally more difficult to exploit than Windows.
Ashleigh Williams says
Per your company’s information security policy, what is the current backup schedule and where is data backed up to?
Christa Giordano says
My audit portfolio includes one of the applications designated as critical throughout our organization by our national IT department. The application is backed up daily to two different servers: a national server and a contingency server. The contingency server is tested several times a year to ensure correct failover.