Detection is the first of three priorities; it is where you learn that an incident has occurred. Analysis is the second priority, where you try to understand what the incident is before you take any action to resolve it.
Malware analysis is an important part of preventing and detecting future cyber-attacks. Cybersecurity experts can analyze attacks that happened in the past to understand the nature of the threat, and from that develop procedures to extract as much detail as possible from the malware. Malware detection is the process of scanning a computer and its files to detect malware.
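To make the detection half concrete, here is a minimal signature-based scan sketch in Python: it hashes each file and checks the digest against a known-bad set. The digest set is a hypothetical stand-in (the empty-file SHA-256 is used only so the example is self-contained); real scanners layer large curated signature databases and behavioral analysis on top of this idea.

```python
import hashlib
from pathlib import Path

# Hypothetical signature set: SHA-256 digests of known-bad files. The
# empty-file digest is used only so this sketch is self-contained.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan(paths):
    """Return the paths whose contents match a known-bad digest: the
    signature-matching half of malware detection in miniature."""
    return [p for p in paths
            if hashlib.sha256(Path(p).read_bytes()).hexdigest() in KNOWN_BAD_SHA256]
```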
The only plan I can think of that I’ve participated in was a fire drill. As this is a common process that companies have been planning for decades, I’d say it was pretty effective.
Yes, on a regular interval. The organization I worked at before had ongoing disaster/outage testing, not only for internal resources and line-of-business applications but, more importantly, a monthly test restore of all clients’ servers. A copy of each client’s production server was pulled from backup and a test restore conducted; the goal was to ensure we had the ability to fully restore a production server onto a VM or physical host at a moment’s notice.
I think there are not enough honeypot strategies currently being conducted by law enforcement agencies. If there were more honeypots, more cyber criminals could be found and detected before they commit another breach. One reason there are not enough is probably that honeypots take a large human capital investment that many government agencies cannot currently afford.
I think honeypots can be extremely valuable to law enforcement. They help catch criminals who may be slightly less experienced and more risk tolerant. I think agencies do run them, but not at a large scale due to the capital involved. For a honeypot to actually attract criminals, it needs to be believable, which means the application needs to look extremely realistic. That takes time and money that government agencies do not always have.
How did your company handle the pandemic in terms of Disaster Recovery? Were they immediately prepared? Did you enjoy a few weeks off before transitioning to remote work?
As an IT auditor for a public accounting firm, our business processes were already structured to be remote, since we spend so much time at client sites. There were a few transition challenges for office staff, but for the most part the transition was smooth.
The company I worked for was very prepared for the pandemic. We already had much of the work-from-home infrastructure ready; it just had to be scaled up to accommodate every employee working remotely. We didn’t have a few weeks off during the transition, but my team’s workload slowed way down for a month or two once we transitioned to WFH. I work with the project managers, and most major projects were put on hold at that point.
Do you get involved in any of the contingency plan testing at your organization? If so, how often does your organization conduct such tests, and do you think the regular testing of the plan reduces the productivity of your organization?
The first example of a company that handled a security breach poorly is Target. The breach ultimately occurred due to mismanagement on the part of the security team and management. The organization also waited months after the breach occurred to notify affected stakeholders.
Yahoo comes to mind when I think of organizations that handled breaches poorly. They hired a highly qualified CIO but didn’t give him the proper resources to fix their poor cybersecurity. The CIO left, and then another data breach occurred.
The continuity of operations plan provides procedures on how to restore an organization’s mission essential functions at an alternate site for up to 30 days, and may also activate other plans as needed. An information system contingency plan provides procedures on how to recover an information system and may be activated independently of other plans, depending on the situation.
Megan Hall says
One of the challenges I’ve seen that was also mentioned in the Chapter reading was about storage of log files and how much space it takes. Absent any laws or regulations requiring retention, what would you suggest as an ideal amount of time to store logs in an IDS?
Jonathan Mettus says
In an ideal world, you could keep logs forever and not have to worry about the storage space and costs. That way you could always reference back to them if needed. Like anything we talk about, I think it’s all risk-based in terms of what the best solution is. You never know when you’ll discover some sort of data breach that actually started 8 years ago, and all of a sudden you need all those logs. For instance, access logs should probably be kept longer than error logs. Where the IDS sits and what traffic it sees will determine how long you want to keep the logs. If it’s monitoring traffic to and from a highly sensitive and critical database server, you’d want to keep those for a long time.
Mitchell Dulaney says
I think a helpful system for log storage would be to store logs for a period of time commensurate with the criticality level of the activity being logged. If an IDS believes that an action is more likely to be related to a security threat, then that log would be retained for a longer period of time (potentially years), so that if it becomes clear far down the line that a compromise occurred, the security team is more likely to have further-reaching logs to analyze. On the other hand, if an action is seen as highly unlikely to be related to a threat (a user logs in on the first try on their primary machine), those logs would be cleared first, after only a week or two.
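That tiering could be sketched as a simple retention lookup keyed on the severity the IDS assigned to the activity. The tier lengths below are illustrative assumptions, not a recommendation:

```python
from datetime import datetime, timedelta

# Hypothetical retention tiers (days to keep an entry), keyed by the
# severity the IDS assigned to the logged activity. The numbers are
# illustrative only.
RETENTION_DAYS = {
    "critical": 3 * 365,  # likely threat-related: keep for years
    "warning": 180,
    "info": 14,           # routine activity: cleared first
}

def is_expired(entry_time: datetime, severity: str, now: datetime) -> bool:
    """True if a log entry has outlived its severity's retention window."""
    days = RETENTION_DAYS.get(severity, 30)  # fallback tier for unknown severities
    return now - entry_time > timedelta(days=days)
```

A purge job would then simply delete entries for which `is_expired` returns true.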
Nicholas Fabrizio says
In chapter 10 it was discussed that it is important to rehearse response plans in order to make sure the plan is fully developed and everyone is prepared if an actual incident occurs. How often should these rehearsals take place, and does your company have walkthroughs or live tests of its continuity plans?
Christa Giordano says
Hi Nicholas,
I think the frequency of rehearsals should depend on the criticality of the system or application involved and the impact an incident would have. The disaster recovery and business continuity plans should also be updated on an annual basis (more frequently if there are changes) and be available to and reviewed with all participants/key individuals in the plan, as well as the stakeholders and end users who could be impacted by an outage. I would say rehearsals should happen on at least a semi-annual basis and/or whenever major changes to the system/application have taken place. My organization has what is called business resumption testing, and our critical applications go through this testing on a quarterly basis. These business resumption tests are live tests of what would happen if our critical applications were down. This is a coordinated effort among the application owner, the impacted business, and our National IT team. On a semi-annual basis there are also surprise tabletop exercises that serve as walkthroughs of the continuity plans.
Jonathan Mettus says
It’s all risk based, so your plans regarding highly critical systems should be tested more often. You should test your responses to the most impactful and most likely incidents more often. Tabletop exercises will be more frequent than full live tests because they’re easier to pull off. A common approach I’ve seen is to do an incident response tabletop once every 6 months, simulating a different type of incident each time. My company assists other companies in testing their business continuity and disaster recovery plans.
To-Yin Cheng says
I think the rehearsal frequency should depend on the level of risk: higher-risk systems should be rehearsed more frequently than less critical ones. A regular test every six months or so would help ensure the plan will work in an emergency, and the company can adjust that frequency based on the criticality of the system.
Quynh Nguyen says
I believe rehearsals should take place more frequently for high-risk events. For example, I used to work for a large company that had over 3,000 people on the company campus during a normal work day. We had shelter-in-place and fire drills once every 3 months. During these drills, we were to treat the situation as if it were actually happening; we were also timed on how long it took to complete the drill, and modifications were made to make the process more efficient or safer.
Christa Giordano says
Event logging and log files are discussed as part of an IDS. What are the different types of log files and how are they used to identify an attack?
Panayiotis Laskaridis says
Log files could be something as mundane as login location and time. For example, if an employee logs on outside of normal business hours or logs in from a location where business is not done, then that could put you on the right path.
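A minimal sketch of that kind of check, assuming a 9-to-5 weekday policy and an invented list of approved countries (both are placeholder values, not real policy):

```python
from datetime import datetime

BUSINESS_HOURS = range(9, 18)      # assumed 09:00-17:59 weekday policy
APPROVED_COUNTRIES = {"US", "CA"}  # invented list of places business is done

def is_suspicious_login(when: datetime, country: str) -> bool:
    """Flag a login outside business hours, on a weekend, or from an
    unapproved location; any one condition is enough to flag."""
    off_hours = when.hour not in BUSINESS_HOURS or when.weekday() >= 5
    odd_location = country not in APPROVED_COUNTRIES
    return off_hours or odd_location
```

A real IDS would correlate this with the user’s own history rather than one global schedule, but the principle is the same.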
Xiduo Liu says
Time, IP addresses, the user name(s), type of connections(s), are all good information as a starting point for identifying attacks.
Jonathan Mettus says
Which phase of the incident response process ((1) detection, analysis and escalation, (2) containment, (3) recovery, (4) apology) do you think is most important?
Jonathan Mettus says
(5) punishment, and (6) postmortem evaluation
Lakshmi Surujnauth says
Step 1 – Detection, analysis and escalation is important; if poorly executed, chances are an attack could easily turn into an APT that the security team, and by extension the organization, is completely unaware of. During this time, the vulnerabilities researched and backdoors created by hackers could result in significant loss or modification of data for both the company and its customers. This would result in lost revenue, a dramatic decrease in customer base, and a tarnished reputation for poor cybersecurity – all of which impact the bottom line.
Megan Hall says
I think Detection is most important because without that, the rest of the steps would not be triggered. However, I think postmortem/lessons learned is a close second. It’s really important to learn from past events to make improvements to processes and systems in place, to avoid getting hit multiple times with incidents that could potentially have been avoided. An effective postmortem can also help mitigate and lessen the impact of future events that do occur.
To-Yin Cheng says
I think the initial analysis is the most important. When an incident occurs, the accuracy of that analysis affects everything that follows. If the incident is underestimated, the system may fail to respond and return to the planned state. If it is overestimated, the company wastes time and money preparing its incident response.
Michael Doherty says
I think steps 1-3 would impact every company. Steps 4-6 may be optional for some companies.
4) A company may not need to apologize because the incident did not impact customers or employees.
5) There may be no employee/vendor/supplier wrongdoing, so there is no reason to punish.
6) A company may not have enough funds to prosecute and will have to focus on recovery. It is also possible that the actor may never be identified for prosecution.
Christa Giordano says
Hi Jonathan,
I think the Detection phase is the most important for the above reasons mentioned. However, I think the recovery phase gives Detection a run for its money in terms of criticality and importance. Each hour a system or application is unavailable costs a company a significant amount of money, as well as a hit to its reputation. The system needs to be better than it was before so the attacker does not come back and other attackers cannot exploit the same vulnerability; all backdoors must be removed and other vulnerabilities addressed (to the extent that they can be). The recovery efforts must be quick, but they must also be accurate.
Quynh Nguyen says
I think detection is the most important step because it opens the opportunity for companies to analyze, escalate, contain, recover, punish, and evaluate. The most harmful stage of any disruption is when it goes undetected, because damage is being done with no containment or analysis until it’s too late. The second most important to me is containment. Companies can detect a threat and escalate it, but whether they contain it correctly matters. For example, if malware is detected but not contained on all computers, the damage will continue undetected for a period of time before it is detected again, furthering the harm.
To-Yin Cheng says
Why should contact information be updated monthly in the business continuity plan?
Megan Hall says
To-Yin, contact information should be updated regularly so that when the BCP needs to be used, the document is current and accurate. If the contact information is out of date, the plan will not be enacted smoothly and the right people may not end up being involved.
Nicholas Fabrizio says
The contact list of a business continuity plan should be updated monthly to ensure that all of the people can be reached in the event an incident occurs. Updating this list monthly is especially important if the company has a higher employee turnover rate. In addition to updating the list, I believe it is important to keep a backup copy of the plan stored offline, in case the severity of an incident results in resources being taken offline, preventing employees from retrieving the appropriate files or communicating. Having an offline backup of the plan could help initiate the BCP in a timely manner.
Wei Liu says
By keeping contact information updated, planners can ensure key contacts are made aware of any business interruption and BCP activation. Effective communication channels are key to disseminating information to employees, assessing and relaying damage, and coordinating recovery strategies.
Quynh Nguyen says
Contact information should be updated monthly because roles are always changing; people get promoted or leave the company. If the contact information isn’t updated, then when disaster strikes (and it doesn’t strike with notice), the proper flow of the business continuity plan will be interrupted and response time delayed. The BCP should be updated monthly to make sure it can be carried out efficiently.
Xiduo Liu says
It would be terrible timing for an organization dealing with an ongoing outage to discover its contact information is outdated, causing an additional information disconnect.
Mitchell Dulaney says
Contact info should be updated monthly because if a disaster occurs and anyone involved in recovery is not able to be contacted as swiftly as the continuity plan calls for, the entire plan might be put on hold while the organization scrambles to establish contact. Maintaining contact information on a monthly basis also should not require much overhead – users involved in the continuity plan should be trained on the information update process, and should be notified if they haven’t confirmed that information in a timely fashion.
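A sketch of the staleness check behind that notification process, with a 30-day window and the data shape assumed for illustration:

```python
from datetime import date, timedelta

def stale_contacts(contacts: dict, today: date, max_age_days: int = 30) -> list:
    """Return the names whose contact info hasn't been confirmed within
    the window, so they can be notified to re-confirm it.

    `contacts` maps a name to the date that person last confirmed."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, confirmed in contacts.items() if confirmed < cutoff)
```

Run monthly, the returned names are exactly the people to nudge, keeping the overhead as low as Mitchell describes.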
Lakshmi Surujnauth says
Which rehearsal method (walkthrough, tabletop exercise or live testing) is most effective for an organization planning ahead for an IT security incident?
Christopher Clayton says
Have clearly defined roles and responsibilities for the incident response team (whether functional, infrastructure, administration, legal, or finance); have a periodic training program in place for operational and tactical purposes; have a checklist for operational maintenance response; and have evidence collection procedures for legal and/or forensic purposes.
To-Yin Cheng says
I think live testing would be the most effective for an organization planning ahead for an IT security incident. It is a way to test how people would respond if something unexpected happened: we can see how fast they would actually respond and find problems. Although it is more costly, it can help the company adjust the plan as needed.
Nicholas Fabrizio says
I believe live tests are the most effective approach to ensure the organization’s plans are completely developed and employees know their roles. While this type of test is more costly than walkthroughs and tabletop exercises, it may reveal flaws in the plan that have been overlooked.
Christopher Clayton says
If an incident or disaster should occur in a company, how is it usually detected?
Panayiotis Laskaridis says
This depends a lot on the type of incident. Sometimes you might never notice there was an incident if the actor was precise enough. Anything from servers not functioning to missing data could trigger detection.
Wei Liu says
There are several ways to detect whether the company is facing a critical security incident. For example, traffic anomalies should be a flag: if a company experiences an unusual increase in its network traffic, it should be on the lookout.
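One crude way to flag such an increase is a z-score against recent history. This toy baseline stands in for the much richer models a real IDS would use; the 3-sigma threshold is an arbitrary assumption:

```python
from statistics import mean, stdev

def traffic_spike(history: list, current: float, threshold: float = 3.0) -> bool:
    """Flag the current traffic sample if it sits more than `threshold`
    standard deviations above the mean of recent history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:          # flat history: any increase at all is unusual
        return current > mu
    return (current - mu) / sigma > threshold
```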
Mitchell Dulaney says
What are the security consequences if an IDS is not tuned properly? Should an organization prefer to have more false positives or more false negatives, and why?
Megan Hall says
Mitchell, if an IDS is not tuned properly, then the tool may not be useful or reliable. Proper tuning is essential to getting the right alerts at the right time, alerts that are truly indicative of potential incidents. Of the two choices, I would rather have false positives than false negatives, because I would not want to miss actual events being identified. However, I do not think a significant volume of false positives is helpful; it can be detrimental. It can cause an overuse of resources to process and analyze the false positives and also create so much noise that other, actual positives are ignored.
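That trade-off can be seen by sweeping an alert threshold over toy detection scores (all numbers invented for illustration): lowering the threshold trades missed detections (false negatives) for extra noise (false positives).

```python
def confusion_counts(scores, labels, threshold):
    """Count (false positives, false negatives) when every score at or
    above the threshold raises an alert. `labels` marks which events
    were genuinely malicious."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    return fp, fn
```

Tuning an IDS is essentially picking the threshold (and the rules behind the scores) so that both counts stay tolerable for the analysts triaging the alerts.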
Elias Harake says
How often do you think a disaster recovery plan should be tested and updated in an organization? Which industries do you think would require more frequent testing and updating the DRP? Why would it require more frequent testing and updating?
Christa Giordano says
Hi Elias,
Disaster recovery plans should be updated whenever there is a change to people, systems or processes as the organization wants to ensure the accuracy of the information in the event of a disaster. The last position an organization wants to be in is to discover incorrect information during a disaster (such as wrong contact information for a critical person). At a bare minimum DRPs should be updated annually, but as noted should really be done more frequently.
Two industries I think of as higher risk, and therefore warranting more frequent updating and testing of disaster recovery plans, are financial institutions and healthcare organizations, as these industries could greatly impact the public in the event of a disaster. Financial institutions affect personal finances as well as other businesses and organizations, and could cripple the economy if not prepared for a disaster. Healthcare institutions need to be able to access medical records and information, which could be a life-and-death situation, and they need their systems to operate in order to perform procedures, read films, review test results, etc.
Mitchell Dulaney says
I think the disaster recovery plan needs to be holistically reviewed at minimum on an annual basis, but that it should also be updated continually as policies change and staff turns over. The risk management team who are responsible for the disaster recovery plan should have relationships with management of other departments, and a workflow should be in place by which risk management is notified of policy updates and business developments. This facilitates communication between groups and enables risk management to confirm that the recovery plan is being maintained.
Quynh Nguyen says
What is the difference between detection and analysis?
Christopher Clayton says
Detection is first of three priorities where you learn that an incident has occurred. Analysis is the second priority where you try and understand what the incident is before you take any kind of action to resolve it.
Wei Liu says
Malware analysis is an important part of preventing and detecting future cyber-attacks. Cyber security experts can analyze attacks that happened in the past to understand the nature of the threat, using procedures to extract as much detail as possible from the malware. Malware detection, by contrast, is the process of scanning the computer and its files to detect malware.
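A very simple form of the detection side is signature-based scanning: hash every file and compare against a set of known-bad digests. The sketch below assumes a hypothetical signature set (the one entry is the widely published SHA-256 of the EICAR test file); real scanners also use heuristics and behavioral analysis, which this does not show.

```python
import hashlib
from pathlib import Path

# Hypothetical signature database: SHA-256 digests of known-bad files.
KNOWN_BAD_HASHES = {
    # Widely published digest of the EICAR anti-virus test file.
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def scan_directory(root):
    """Hash every file under `root` and return paths matching the signature set."""
    matches = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_HASHES:
                matches.append(path)
    return matches
```

Signature scanning only catches malware that has been seen and catalogued before, which is why the analysis work Wei describes (studying past attacks to characterize new threats) feeds directly into keeping the signature set useful.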
Michael Doherty says
Have you personally participated in a live disaster plan or event? Was the plan that was created efficient and Effective?
Ashleigh Williams says
The only plan I can think of that I've participated in was a fire drill. As this is a common process that companies have had plans for over decades, I'd say it was pretty effective.
Xiduo Liu says
Yes, at regular intervals. The organization I worked at before had ongoing disaster/outage testing, not only for internal resources and line-of-business applications but, more importantly, a monthly recovery test on all clients' servers. A copy of the client's production servers would be pulled from backup and a test restore conducted; the goal was to ensure they had the ability to fully restore a production server onto a VM or physical host at a moment's notice.
Wei Liu says
Do you think law enforcement agencies run honeypots to track criminal behavior?
Elias Harake says
I think there are not enough honeypot strategies currently being conducted by law enforcement agencies. If there were more honeypots, I think more cyber criminals could be found and detected before they commit another cyber breach. One reason not enough honeypots are deployed is probably that they take a large human capital investment that many governmental agencies cannot currently afford.
Charlie Corrao says
I think honeypots can be extremely valuable to law enforcement. They help catch criminals that may be slightly less experienced and are more risk tolerant. I think they do run them, but not in a large scale due to the capital that is involved. For a honeypot to actually attract criminals, it needs to be believable. This means the application needs to look extremely realistic. This takes a lot of time and money that government agencies do not always have.
Panayiotis Laskaridis says
How did your company handle the pandemic in terms of Disaster Recovery? Were they immediately prepared? Did you enjoy a few weeks off before transitioning to remote?
Ashleigh Williams says
As an IT auditor for a public accounting firm, our business process was already structured to be remote, as we spend so much time at client sites. There were a few transition challenges for office staff, but for the most part the transition was smooth.
Charlie Corrao says
The company I worked for was very prepared for the pandemic. We already had much of the WFH infrastructure ready; it just had to be scaled up to accommodate every employee working remotely. We didn't have a few weeks off during the transition, but my team's workload slowed way down for a month or two once we transitioned to WFH. I work with the project managers, and most major projects were put on hold at that point.
Xiduo Liu says
Do you get involved in any of the contingency plan testing at your organization? If so, how often does your organization conduct such testing, and do you think the regular testing of the plan reduces your organization's productivity?
Charlie Corrao says
What is an example of a company that handled a cyber security breach well? On the other side, what's an example of a company that handled one poorly?
Ashleigh Williams says
The first example of a company that handled a security breach poorly is the Target data breach. The breach ultimately occurred due to mismanagement on the part of the security team and management. The organization also waited months after the breach occurred to notify affected stakeholders.
Panayiotis Laskaridis says
Yahoo comes to mind when I think of organizations that handled breaches poorly. They hired a highly qualified CIO but didn't give him the proper resources to fix their poor cybersecurity. The CIO left, and then another data breach occurred.
Ashleigh Williams says
What is the difference between the continuity of operations plan and the information system contingency plan?
Nicholas Fabrizio says
The continuity of operations plan provides procedures for restoring an organization's mission-essential functions at an alternate site for up to 30 days, and may also activate other plans as needed. An information system contingency plan provides procedures for recovering an individual information system and may be activated independently of other plans, depending on the situation.