-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
1. Are the terms Business Continuity Plan (BCP) and Disaster Recovery Plan (DRP) synonyms or are they different? If they are different, what are the differences?
2. Is it practical to conduct a thorough test […]
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Is it practical to conduct a thorough test of a Business Continuity Plan (BCP)? Why might it not be practical? If it is not practical, what alternative ways can you recommend for testing a BCP?
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What is the U.S. Federal Government’s Recovery Time Objective (RTO) for IT capabilities needed to support continuity of communications? [Hint: see Homeland Security (2012) Federal Continuity Directive 1 – […]
-
So, I was a little confused in answering this question, because I’m not sure if you’re looking for an “X amount of hours” answer or a more general “this is what RTO is” answer. In either case, the FEMA document you mentioned as a hint has the following in Annex H, Subsection 7:
“Organizations must ensure that the communications capabilities required by this Directive are maintained, are operational as soon as possible following a continuity activation, and in all cases within 12 hours of continuity activation, and are readily available for a period of sustained usage for up to 30 days or until normal operations can be reestablished. Organizations must plan accordingly for essential functions that require uninterrupted communications and IT support, if applicable.”
If I’m reading it correctly, my understanding is that if you work for the Federal Government, you have no more than 12 hours to restore the communications capabilities required by the continuity plan.
Another portion of the Directive discusses teleworking options to support continuity, and that made me think of my own experience. As a contractor for the Federal Government, if there is inclement weather that prevents me from working on-site, I do have permission to work from home for the period of time required. The readings made me wonder what the plan would be if my organization had to move buildings. We’re small enough to likely be able to work temporarily at another on-site building, and our enterprise IT systems are designed to work across all the government buildings at the Navy Yard.
-
Andres,
Good post. I came to the same conclusion. Communication cannot be interrupted for more than 12 hours, and the back-up communication system needs to support up to 30 days of operations.
From my experience in the Army, they told us our communication system required 100% up-time, at all times, or lives are lost. We had several systems on stand-by in case of an outage. We also had equipment on call in case a bare-metal restore was needed for servers and workstations.
-
Nice post Andres.
RTO – ‘Recovery Time Objective is the targeted duration of time and a service level within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences associated with a break in business continuity.’
As per the ‘Homeland Security (2012) Federal Continuity Directive 1 – available from FEMA.gov’, the U.S. Federal Government requires that PMEFs (Primary Mission Essential Functions) be operational within 12 hours (the RTO) after an event has occurred, under all threat conditions. The capabilities include operability of the essential functions, access to and usage of essential records/information, physical security, and protection against all threats identified at the facility.
Reference: https://en.wikipedia.org/wiki/Recovery_time_objective
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
-
J&J warns diabetic patients: Insulin pump vulnerable to hacking
According to an article from Reuters, Johnson & Johnson, the medical device and pharmaceutical company, has recently announced that its OneTouch insulin pump products are vulnerable to hacking. While one may think of hacking as only taking place on computers, it turns out that medical devices have recently become a target. The article states that J&J recently discovered a vulnerability in this product which, if exploited, can cause an insulin overdose in the user. According to the article, approximately 114,000 individuals, including doctors and users of the device, have been notified about the vulnerability. This Johnson & Johnson insulin pump is attached to the person underneath a layer of clothing but allows an individual to control a dose using a remote control. This is where the vulnerability lies: because the communication between the remote control and the insulin pump is not encrypted, a hacker can spoof it and potentially inject a lethal dosage. With that being said, Johnson & Johnson has stated that it believes the risk is low since exploitation requires highly technical knowledge and the attacker needs to be within 25 feet of the pump. While the risk is low, the company has told customers who are worried about the vulnerability to disconnect the remote control functionality from the pump. This article goes to show that hacking and vulnerabilities are not just relevant to businesses or databases, but can apply to much more.
Source:
http://www.reuters.com/article/us-johnson-johnson-cyber-insulin-pumps-e-idUSKCN12411L
-
https://www.sec.gov/news/pressrelease/2016-133.html
“SEC Proposes Rule Requiring Investment Advisers to Adopt Business Continuity and Transition Plans”
Registered investment advisers would be required to have and execute written business continuity plans.
This could be a great thing for clients and investors who are concerned about what happens with their money in the event they want to take action during a disruption to an adviser’s services.
The BCP would need to take into consideration the following components.
– Maintenance of systems and protection of data
– Pre-arranged alternative physical locations
– Communication plans
– Review of third-party service providers
– Plan of transition in the event the adviser is winding down or is unable to continue providing advisory services.
-
Searching for Best Encryption Tools? Hackers are Spreading Malware Through Fake Software
The article I read is about how people trying to protect themselves from viruses, malware, etc. actually run the risk of using fake security tools. Hackers now distribute fake versions of encryption tools in order to infect as many victims as possible. The article specifically focuses on a group of hackers called “StrongPity”. They target users of software designed for encrypting data and communications. How? By setting up fake distribution sites that closely mimic legitimate download sites, which trick users into downloading malicious versions of these encryption apps, allowing attackers to spy on the data before it is encrypted. The top five countries affected by the group are Italy, Turkey, Belgium, Algeria and France.
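As a practical aside (my own addition, not something the article covers), one habit that helps catch trojanized installers from look-alike download pages is comparing the downloaded file's hash with the value published on the vendor's official site. A minimal Python sketch, where both the file name and the published hash are placeholders:

import hashlib

# Placeholder value; in practice this comes from the vendor's official site.
PUBLISHED_SHA256 = "0" * 64
INSTALLER = "encryption-tool-setup.exe"   # hypothetical downloaded file

def sha256_of(path):
    # Hash the file in chunks so large installers don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of(INSTALLER) != PUBLISHED_SHA256:
    raise SystemExit("Hash mismatch: do not run this installer")
print("Hash matches the published value")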
-
Synopsis on “Critics Blast New York’s Proposed Cybersecurity Regulation”
The financial industry has always been a target for hackers. Back in January, New York’s governor, Andrew Cuomo proposed some new cybersecurity requirements on banks. The main components of this proposal required banks to:
– Hire a “qualified” CISO to be responsible/accountable for mitigating cyber risks.
– The bank must notify the state within 72 hours of any cybersecurity event that could impact business or consumer privacy.
– Require two-factor authentication for employees, contractors, and other third parties who have privileged access to the organization’s internal systems.
– Encryption of all non-public information.
Critics have clashed with the state, claiming that its approach is too “prescriptive” and that smaller banks do not have the resources to be compliant.
Personally, I think that this is a good thing and it should be a federal requirement. Financial institutions handle a great deal of personal information from their customers, as well as their money. A successful attack could leave customers vulnerable to bankruptcy and identity theft, among other harms. I am surprised that encryption of all non-public information and some of the other requirements are not already enforced at banks. Even if smaller banks cannot meet the requirements at this time, I think a strategic goal for them should be to ensure consumer privacy through adherence to these requirements. What are your thoughts?
Source: http://www.databreachtoday.com/critics-blast-new-yorks-proposed-cybersecurity-regulation-a-9453
-
Is Your Access Control System a Gateway for Hackers?
With access control systems being prime entry points for hacking IT and OT systems, security professionals need to stress protecting the security systems themselves. To get into IT and critical-infrastructure operational technology systems, hackers look for the easiest path in, leveraging many different physical assets. They typically start with hardware that will give them access to specific computers. Unfortunately, many organizations don’t secure their own security equipment. For example, IP wireless cameras and card readers in the access control system are favorite targets of hackers.
How to protect the card system from hacking: first, provide a higher-security handshake, or code, between the card or tag and the reader to help ensure that readers will only accept information from specially coded credentials; second, use the validation and anti-tamper features available with contactless smartcard readers, cards and tags. In recent years security awareness has been improving rapidly, but people often pay a lot of attention to protecting themselves on the internet and forget the most basic security issue: physical security. The hacking will never stop, and neither should protecting your systems.
link: http://www.securitymagazine.com/articles/87505-is-your-access-control-system-a-gateway-for-hackers
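To make the “higher-security handshake” idea above concrete, here is a minimal sketch in Python of an HMAC-based challenge-response between a reader and a credential. This is my own illustration, not any real smartcard protocol; key handling and message formats are simplified.

import hashlib
import hmac
import os
import secrets

# Shared secret provisioned into the credential at issuance; the reader
# (or its backend) holds the same key. In real systems this lives in
# tamper-resistant hardware, not a variable.
CARD_KEY = os.urandom(32)
READER_KEY = CARD_KEY

def card_respond(card_id, challenge):
    # The credential proves knowledge of the key by signing the reader's
    # fresh challenge; a cloned UID alone cannot produce this value.
    return hmac.new(CARD_KEY, card_id + challenge, hashlib.sha256).digest()

def reader_verify(card_id):
    challenge = secrets.token_bytes(16)              # fresh nonce defeats replay
    response = card_respond(card_id, challenge)      # sent over the contactless link
    expected = hmac.new(READER_KEY, card_id + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

print(reader_verify(b"BADGE-0042"))   # True only for a credential holding the key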
-
How is NSA breaking so much crypto?
The Snowden documents show that the NSA has built extensive infrastructure to intercept and decrypt VPN traffic, and they suggest that the agency can decrypt at least some HTTPS and SSH connections on demand.
However, the documents do not explain how these breakthroughs work. If a client and server are speaking Diffie-Hellman, they first need to agree on a large prime number with a particular form. There seemed to be no reason why everyone couldn’t just use the same prime, and, in fact, many applications tend to use standardized or hard-coded primes.
The NSA has prioritized “investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.” The documents show that the agency’s budget is on the order of $10 billion a year, with over $1 billion dedicated to computer network exploitation, and several subprograms in the hundreds of millions a year.
http://thehackernews.com/2015/10/nsa-crack-encryption.html
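To make the shared-prime point concrete, here is a toy Python sketch of Diffie-Hellman key agreement (my own illustration, not taken from the article or the NSA documents). The prime below is deliberately tiny; real deployments use large standardized primes, and the fact that so many servers reuse the same ones is what makes large-scale precomputation against a single prime attractive.

import secrets

# Public group parameters. This is a small toy prime (2**127 - 1);
# real deployments use 1024- or 2048-bit primes, often the same
# standardized values everywhere.
P = 2**127 - 1
G = 5

def keypair():
    # Private exponent x and public value g^x mod p
    x = secrets.randbelow(P - 2) + 2
    return x, pow(G, x, P)

a_priv, a_pub = keypair()   # e.g. the client
b_priv, b_pub = keypair()   # e.g. the server

# Each side combines its own private key with the other's public value;
# both arrive at the same shared secret without ever transmitting it.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b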
-
http://www.technewsworld.com/story/84000.html
The article, “What Should Be on the Next President’s Cyberagenda?” starts off by stating that cybersecurity is usually at the end of the agenda for most people, including the President of the U.S. TechNewsWorld asked experts what should be on the President’s cyber agenda.
-“The president has to set the tone early on cybersecurity – within the first 100 days”. Sam Curry, chief product officer at Cybereason further explains, “New cabinet secretaries have to understand that their mission can’t be done without secure systems. Far too often, cybersecurity is not even on the list of priorities for initiatives and agencies and staffing”.
-That Obama should be focusing on protecting the private sector from cybercrime and threats if he wants a stronger economy.
-Critical infrastructure organizations need more legislation passed so they are able to protect their sensitive data through strict access controls and data encryption.
– Create a national cyber-recovery plan along the lines of the civil defense plans created for response to a nuclear attack.
The article concludes by stating that making improvements and policies doesn’t require a whole new outlook on security; as a country we need to work with what we have, given the risk involved with change. Also, experts conclude that, “The momentum needs to continue and grow. The handoff between administrations should not be a fumble.”
-
Hacking Beyond Software and Applications!
Security researchers have been figuring out hacking techniques that are not restricted to the operating system or applications, but break through to the actual machine. They are trying techniques that exploit hardware behavior by targeting the actual electrical signals that comprise bits of data in computer memory. At a recent conference, researchers presented similar attacks and how they could be implemented in real life.
Google researchers demonstrated “Rowhammer,” a hacking trick that repeatedly overwrites, or hammers, a certain row of transistors in DRAM memory until a rare glitch occurs: electrical charge leaks from the hammered row of transistors into an adjacent row. The leaked charge then causes a certain bit in that adjacent row of the computer’s memory to flip from one to zero or vice versa, and that bit flip can give an attacker access to a privileged level of the computer’s operating system.
Such attacks will require defenses that go beyond purely digital models.
-
The CIA is preparing for a possible cyber attack on Russia http://townhall.com/tipsheet/mattvespa/2016/10/15/report-cia-could-be-planning-a-cyber-strike-against-russia-n2232760
I thought this was interesting because it shows that one method of war will be cyber war. It was also interesting that the Vice President said he hopes the American people won’t know that this happened. I’m curious if this will go down as a “Manhattan Project” type moment where we created and demonstrated a new weapon.
I’m curious if this will lead to further attacks from Russia or if this will worry them and other countries about what is yet to come and they might scale back their cyber operations against us.
-
Aviation Officials Step Up Cybersecurity Checks of Flight Communication Systems
U.S. and European aviation authorities are focused on cybersecurity threats that could affect ACARS (Aircraft Communications Addressing and Reporting System), which is a basic data-transmission system primarily used for air traffic control purposes. The ACARS is a decades-old system used since the 1980s and because of its age, the system lacks more secure safeguards embedded onboard newer messaging networks.
Up until now, the information sent by ACARS from planes to the ground wasn’t considered safety critical, nor does the system handle any data that could immediately jeopardize safe operation of flights. While no specific hacking attempts or intrusions have been detected, the activity has gained importance in light of increasing instances of cyberthreats to commercial aviation in general. Those threats have prompted governments and industries to sit up and take action to develop future standards to ensure that any successful hacks will be detected and neutralized.
Besides ACARS, the FAA’s technical advisory group has also decided to pay more attention to cybersecurity threats across the full range of onboard equipment and internet connections.
-
Nuclear power plant was disrupted by cyber attack
The International Atomic Energy Agency director Yukiya Amano announced that a nuclear power plant had some disruptions due to a cyber attack. For security reasons, he did not clarify which power plant or what was disrupted. He was able to say that the plant stayed open but took precautionary measures. There is a difference between disruptive and destructive cyber attacks but disruptive can be very dangerous if they target critical infrastructure. Terrorists have considered nuclear plants as potential attack targets, even if it is predicted that they cannot blow up a reactor. The U.N. is helping nuclear facilities prepare for cyber security attacks with training and constructing an information database.
http://www.reuters.com/article/us-nuclear-cyber-idUSKCN12A1OC
-
I read the article “Survey Says Most Small Businesses Unprepared for Cyberattacks”. According to this article, over 78% of small-business owners still don’t have a cyberattack response plan, even though more than 54% fell victim to at least one type of cyberattack, including:
1. Computer virus – 37%
2. Phishing – 20%
3. Trojan horse – 15%
4. Hacking – 11%
5. Unauthorized access to customer information – 7%
6. Unauthorized access to company information – 7%
7. Issues due to unpatched software – 6%
8. Data breach – 6%
9. Ransomware – 4%
Not only public companies need to protect their information assets; small businesses and start-up companies also have a responsibility to protect customers’ personal information. Therefore, small-business owners should recognize the importance of protecting their information assets by implementing controls against cyberattacks.
-
Healthcare and Cyber Security
http://www.darkreading.com/threat-intelligence/healthcare-suffers-estimated-$62-billion-in-data-breaches/d/d-id/1325482
This article talks about the healthcare industry being susceptible to cyber attacks. One issue for the industry is budget, which doesn’t necessarily grow but either stays the same or decreases by a certain percentage. Each facility is different, but the bigger problem is that they just don’t have the talent; according to the article, there are about 20,000 open cybersecurity jobs in the healthcare field. Most of the attacks are aimed at getting medical records, some at insurance and billing. Attacks can happen maybe 1-5 times a year due to malware, ransomware, and insider threats, and sometimes they are not reported. It’s scary to think when entering a healthcare facility that our personal healthcare records are put at risk because the protection of the systems is out of date or non-existent. It also makes for a perfect opportunity to attract some talent and help secure these systems.
-
The United States publicly blamed Russia for the hacks of the Democratic National Committee and attempts to interfere with the US election. The article I read is about speculation that the US may be planning a revenge attack on Russia. Apparently, the CIA has given the White House a number of options that are all based on harassing and embarrassing Russia. Joe Biden said that the US will be sending a message to Russia with the revenge attack and that he hopes it will have the greatest impact possible. However, it ultimately will be President Obama’s decision.
I think it is hypocritical that we pretty much admitted that government attacks are common and acceptable, and it’s ridiculous that even though the hack embarrassed our country, we are now beating our chest. Talk is cheap, and you would think the US would want this to be somewhat of a surprise attack, but we are basically telling Russia that it is coming. It seems like the US may be playing games with this issue. Regardless, I think our country needs to invest more in cybersecurity and the protection of our systems.
Source: http://www.digitaltrends.com/computing/us-cyber-strike-russia/
-
Darin, I read a very similar article. Isn’t it odd that our country is being so public about this? You would think they would want a cyber attack to be somewhat secretive, yet we are basically telling Russia that this attack is coming. I also think it is odd that the US is “beating its chest” about being so powerful in cyber, yet we allowed Russia to hack into our systems. Biden said that the country is going to embarrass Russia, yet they definitely embarrassed our country a bit with that hack.
-
This is interesting. I wonder if the cyber attack was executed to steal information to make money or if they wanted to cause harm. I also wonder to what extent they could’ve damaged the plant with this cyber attack. I wonder if they could have caused a meltdown.
-
Paul,
This is another level of criminality. I watched a movie where someone was killed with a remote medical device, and I thought that it was just fiction. But this article shows that it is something that can really happen. It also shows that hackers are not only after money and information; now they want to hurt people physically. If J&J doesn’t come up with a solution, this situation could get worse.
-
A recent survey highlights that awareness and urgency are two major issues surrounding cybersecurity and having a response plan in place, specifically in the EU. It was estimated that over the past 4 years a staggering share of businesses, over 90%, suffered a breach. Equally concerning are executives’ lack of concern regarding future breaches and how to respond effectively when a breach is identified. The dialogue also suggests that CEOs are broadening the level of cyber-breach risk they consider acceptable and focusing more of their resources on the incident response team, but still not nearly enough in my view. Underlining the executives’ lack of care was the finding that only 42% of respondents were worried about losing future business due to a security breach. I honestly did not expect this type of response when, over the past few years, significant retailers and businesses have been impacted by data breaches that have had a clear negative impact on their bottom line and brand image in the public eye. The EU is trying to force businesses’ hands by implementing a regulation called the General Data Protection Regulation, or GDPR; however, based on the survey results, it appears this is still a very low priority on their list. It sounds as if the EU is much further behind in realizing the real threat of cybersecurity and the negative impact it can have on all businesses and their overall bottom line.
http://www.infosecurity-magazine.com/news/over-90-of-euro-firms-hit-by-data/
-
Leftover Factory debugger doubles as Android backdoor
A leftover factory debugger in Android firmware made by Taiwanese electronics manufacturer Foxconn can be flipped into a backdoor by an attacker with physical access to a device.
This can help the law enforcement or a forensics outfit wishing to gain root access to a targeted device.
It will allow complete code execution on the device, even if it’s encrypted or locked down. It’s exactly what a forensics company or law enforcement officials need.
An attacker with access to the device can connect to it via USB, run commands and gain a root shell with SELinux disabled and without the need for authentication to the device.
This not only allows extraction of the data stored on a password-protected or encrypted device, but also enables brute-force attacks against encryption keys or unlocking of the bootloader without resetting user data.
Fastboot is a utility and protocol used to communicate with the bootloader and to flash firmware. It comes with the Android SDK, and devices can be booted into this mode over USB in order to re-flash partitions or file system images on a device. A custom client would support a reboot command that puts the device into a factory test mode. In the test mode, the Android Debug Bridge (ADB) runs as root and SELinux is disabled, allowing an outsider to compromise the device, bypassing authentication controls.
-
“Android Banking Trojan Tricks Victims into Submitting Selfie Holding their ID Card”
According to Kaspersky Lab’s Anti-Malware Research team, Acecard is one of the most dangerous Android banking Trojans out today.
You can read more about the evolution of Acecard malware here [https://securelist.com/blog/research/73777/the-evolution-of-acecard/]
Payment card companies like MasterCard have switched to selfies as an option instead of punching in the pin/password during the ID verification for payments that are made online. And, hackers have started to exploit vulnerabilities in this new security verification method.
Acecard, the android banking trojan, masks itself as a video plugin (like adobe flash player, video codec, etc.) Once the trojan is installed successfully, it will ask the target for different device permissions to execute the malicious code and then patiently waits for the target to open mobile applications. These applications are generally the ones that require user’s payment card information.
If a user opens up an app that requires payment transactions (Amazon shopping app, etc.), the trojan overlays itself on top of the legitimate app, and starts requesting user for card details.
“It displays its own window over the legitimate app, asking for your credit card details. After validating the card number, it goes on to ask for additional information such as the 4-digit number on the back.” – explains McAfee researcher Bruce Snell.
The trojan also prompts users to hold their ID card in their hand, underneath their face and take a selfie. A victim may be duped to think that these are the requests coming from the legitimate app they are using. Once customer data is obtained, hackers can make illegal transfers and control the target’s online accounts. This social engineering trick isn’t new but is still a big threat for less tech-savvy users. If one knows that there are family members or friends who are not tech-savvy, one can make sure that their phones aren’t downloading apps from un-trusted sources (in android phones you can change this setting).
Source: http://thehackernews.com/2016/10/android-banking-trojan.html
-
Mac malware can easily spy on your Skype calls
Patrick Wardle, an ex-NSA hacker has proposed a new way snoops might spy on people via their webcams.
Because Macs make the camera shareable by multiple apps at the same time for perfectly legitimate reasons, it’s possible to create a malicious app that asks to use the webcam. The app wouldn’t just start using the camera, as the LED light would turn on and alert the user; instead, it would wait until another app, like Skype, ran, so the spyware could piggyback on that process and start recording the victim.
With that, Wardle has created a basic tool, OverSight, to alert Mac owners whenever a program is asking for permission to access the camera. The user can then reject or allow access.
-
The article I shared is ” 6 Ways Hackers Can Monetize Your Life.”
Cybercrime is a multi-billion dollar economy with sophisticated actors and a division of labor that includes malware authors, toolkit developers, hacking crews, forum operators, support services and “mules.” There are countless sites in the dark web that offer ways for hackers to buy or sell stolen accounts, hacking tools and other criminal services.
Stolen credit card numbers aren’t the only way hackers take your money. The cybercrime industry is innovative and imaginative. It always comes to finding ways to turn our personal information into cash.
Even if you haven’t seen a consequence from a corporate data breach or a reported software vulnerability, it doesn’t mean your information isn’t being traded online.
There are six ways hackers monetize your life online:
Medical Identity:
Social Security numbers, health insurance accounts, and Medicare account numbers aren’t as easy to replace as credit card numbers. This type of information is a gold mine for identity theft and insurance fraud; the black market values these credentials at well over 10 times the price of stolen card numbers.
Email and social media:
A cybercriminal can use a hacked email or social media account to distribute spam, run scams against the person’s contacts and connections, and try to leverage the stolen account to break into other online accounts used by the same person.
Uber
By hijacking your Uber account, most likely through a phishing email, they can set up fake drivers and bill you for “ghost rides.”
Airline Miles
All hackers have to do is get access to your frequent flyer account, and they can steal your airline miles, sell them to other criminals or put the whole account up for sale.
Webcam
They infect your computer by using a remote administration tool (RAT), and they will be able to remotely control and access your webcam. Known as “ratters,” there are a lot of communities and forums on the dark web where these individuals share information, videos and photos of their webcam “slaves,” sell or trade them to other hackers, and rent access. One BBC report claimed hackers get $1 per hacked webcam for female victims, and $0.01 for men.
source:http://www.huffingtonpost.com/jason-glassberg/6-ways-hackers-can-moneti_b_9078224.html
-
I read an article about how the Islamic State is seeking the ability to launch cyberattacks against U.S. government and civilian targets, a potentially dangerous expansion of the terror group’s Internet campaign. Flight communication systems could be targets as well, so it is necessary to make sure those systems are safe.
Source: http://www.politico.com/story/2015/12/isil-terrorism-cyber-attacks-217179#ixzz4NQNSiHdO
-
Customer trust is often damaged after a data breach. Following Yahoo’s recent disclosure of a data breach that affected more than 500 million accounts, Verizon may demand to renegotiate its $4.8 billion deal for Yahoo Inc. It’s Yahoo’s responsibility to prove the full impact, and the findings could allow Verizon to change the terms of the takeover.
The breach occurred two years ago but was discovered after the merger deal was signed in July. Verizon doesn’t want to call off the deal; however, it wants to make changes to the terms. Looking to renegotiate the deal could bring risks for Verizon as well. It’s not unusual for data breaches to affect acquisition deals: Verizon can ask for a discount or pull out of the deal entirely because it doesn’t want to inherit Yahoo’s problems.
http://www.latimes.com/business/technology/la-fi-tn-verizon-yahoo-deal-20161013-snap-story.html
-
How to recover from a disaster:
This article talks about the importance of a recovery plan. Disaster recovery (DR) is part of the business continuity plan and can mean the difference between success and failure for an organization. As per the 2014 Disaster Recovery Preparedness Benchmarking survey, 60% of companies didn’t have a documented DR strategy, and 40% felt that their DRP didn’t help at the time of a crisis.
It goes on to explain how DR cloud solutions, or disaster recovery as a service (DRaaS), are a cost-effective and agile way to handle disasters, since there is no hardware to maintain. DRaaS offers faster recovery, better flexibility, off-site data backup, real-time replication of data, excellent scalability, and use of secure infrastructure.
Having a DR strategy and continually testing it is not enough; the strategy should also be updated regularly and adapted in line with changes in the business environment and market shifts. According to the same survey, 6.7% of organizations tested weekly, 19.2% tested annually and 23.3% never test at all.
Implementation has challenges such as budget issues, buy-in from the CIO and choosing the type of solution.
The three steps to have a successful DRP is
1. Identify and define your needs
2. Creating the DR plan
3. Test, assess, test, assess.
Having an effective DR strategy in place will help an organization mitigate risks and recover quickly in the event of a disaster without lasting negative impact.
http://www.cloudcomputing-news.net/news/2016/oct/17/recovering-disaster-develop-test-and-assess/
-
Euro Bank Robbers Blow up 492 ATMs by Phil Muncaster-UK/EMEA News Reporter, Infosecurity Magazine
492 ATMs across Europe were blown up by thieves in the first half of 2016. Criminals are increasingly using diverse tactics, and blending physical and online methods, to steal from banks. The physical attacks cost over 16,000 euros per attack, not including damage to equipment and buildings; the 1,604 incidents in the first six months of the year brought total losses to 27m euros. Thieves also use transaction messages to siphon off cash funds.
In my opinion, banks should make their BCPs cover this kind of theft before it happens. Physical, message-based and online thefts have been happening for many years, but banks still do not seem to care very much. I would say they should find other ways to solve these kinds of problems, even if it means less convenience for customers.
http://www.infosecurity-magazine.com/news/euro-bank-robbers-blow-up-492-atms/
-
Popular Android App Vulnerable to Microsoft Exchange User Credential Leak
A popular Android app, Nine, used to access corporate email, calendar and contacts via Microsoft Exchange servers is vulnerable to leaking user credentials to attackers. The application could allow an attacker to launch a man-in-the-middle attack, allowing them to steal victims’ corporate usernames and passwords. The Nine app lacked certificate validation when connecting to a Microsoft Exchange server, regardless of SSL/TLS trust settings. Attackers can pluck names and passwords out of the traffic or snag confidential emails as they pass by. An attacker could use a rogue Wi-Fi wireless access point (WAP) configured to capture Nine application traffic to Microsoft Exchange servers. Then, when an unsuspecting Nine user connected to that malicious access point, the attacker could intercept traffic and obtain the target’s Active Directory login credentials.
Popular Android App Leaks Microsoft Exchange User Credentials
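As a rough illustration of the missing control (a generic Python sketch, not the Nine app's actual code; the host name is a placeholder), the issue comes down to whether the TLS client verifies the server's certificate and host name before trusting the connection:

import socket
import ssl

HOST = "mail.example.com"   # placeholder Exchange endpoint

# What a client should do: verify the certificate chain and the host name.
safe_ctx = ssl.create_default_context()

# The flaw described above, reproduced deliberately: accept any certificate,
# including one presented by a rogue Wi-Fi access point. Never do this.
unsafe_ctx = ssl.create_default_context()
unsafe_ctx.check_hostname = False
unsafe_ctx.verify_mode = ssl.CERT_NONE

def connect(ctx):
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            return tls.version()

# With safe_ctx, a man-in-the-middle certificate makes the handshake fail;
# with unsafe_ctx, the connection silently succeeds and credentials sent
# over it can be captured.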
-
Article: Back to School Security for iPads in the Classroom
As a provider of Apple-centric security solutions, SecureMac has outlined five of the top challenges faced when deploying iPads in a school setting, along with solutions that leverage the benefits of this powerful new technology in a safe and secure manner.
Problem faced by schools: Securely deploying and managing devices
Schools need a centralized system to efficiently handle tasks including app installation, software updates and locating missing devices. Additionally, steps must be taken to ensure that proper access control and security configurations are in place on any network that will be used by student devices.
Problem faced by schools: Maintaining student privacy in a digital environment
Student privacy needs to be a top priority for any educational institution looking to harness the power of technology in the classroom. Apple does not collect information or track students, but data collection might be present in third-party apps used as part of the education curriculum. Not only do schools need to maintain the privacy of student addresses, birthdates and other personal information, they also need to ensure the compartmentalization of student-generated data when it comes to things like school assignments, essays and projects.
Problem faced by students: Cyberbullying and online harassment
No longer relegated to the schoolyard, cyberbullying and anonymous online harassment can take place 24 hours a day, seven days a week, and as such can be much harder to identify and address. It is important to provide guidance and student outreach on the risks associated with these new forms of bullying, as well as to educate students on the danger of sharing personal and private information over the internet.
Problem faced by teachers: Limiting student access to inappropriate content
Problem faced by schools: App security and malware concerns
Resource: http://www.securitymagazine.com/articles/87469-back-to-school-security-for-ipads-in-the-classroom
-
Firms urged to automate security certificate backup after Globalsign blackout
The article I read this week is about how online firms are being urged to reduce their dependency on the GlobalSign security certificate authority (CA) after an error made customer sites inaccessible. An unknown number of sites became inaccessible after a cross-certificate (one that allows a certificate to chain to an alternate root) was revoked in error during a planned maintenance exercise to clean up some of GlobalSign’s root certificate links.
Education software developer Edsby said its website was affected, along with other sites such as the Financial Times, Guardian, Wikipedia, Logmein and Dropbox.
Globalsign responded by removing the affected cross-certificate and clearing its caches, but the CA’s customers still had to replace their SSL certificates to restore access to their sites.
What we should learn from this news is that businesses must have an automated backup plan. Firms need to be able to take control and mitigate the risks immediately.
-
Yahoo Confirms 500 Million Accounts Were Hacked by State sponsored Users
Yahoo has finally acknowledged that it was hacked two years ago, and it responded slowly to a serious breach affecting 500 million Yahoo Mail users. Over a month ago, a hacker was found to be selling login information related to 200 million Yahoo accounts on the Dark Web, and Yahoo has now acknowledged that the breach was much worse than initially expected.
Yahoo is investigating the breach with law enforcement agencies. It claims that only users’ names, email addresses, dates of birth, phone numbers, passwords and, in some cases, encrypted and unencrypted security questions and answers were stolen from millions of Yahoo users; it does not believe the hackers stole credit card information. Yahoo needed to take immediate action to inform users after it confirmed the hack.
Similar cases happen every day, and companies do not know how to respond to a hack because they lack experience and do not have business continuity and disaster recovery planning in place.
-
Nice post Andres,
I liked how you provided the components of a BCP. All companies should develop their BCPs based on those components. I think the plan of transition is extremely important because it can help the company know where it stands and how it can react and transition during an event.
-
“Cashing Out: ATMs Try to Stop Wave of Cyberattacks”
The article discusses the sharp rise in ATM fraud in 2015 and the slow implementation of EMV debit cards. Most financial institutions focused on credit cards and are only now starting to upgrade existing debit cards. Traditional debit cards are vulnerable to an attack known as skimming at ATM machines and gas stations: criminals attach a device to capture the magnetic information from a card and then make counterfeit cards or transactions. Unlike credit cards, debit cards are tied directly to bank accounts, offer less security, and involve a more cumbersome process to recoup losses. By next year, ATM locations without chip-enabled machines will be liable for fraud; however, there is currently a backlog of upgrade orders, with many rushing to complete the transition in time.
http://www.wsj.com/articles/cashing-out-atms-try-to-stop-wave-of-cyberattacks-1476529201
-
They are definitely being public, which can be good or bad depending on what their goal is. First, it’s possible that officials are split on the decision and the articles are a reflection of that. Or it may be a form of psychological operations: they might be trying to warn the Russian government without actually conducting an attack.
-
Reminds me of the episode in the second season of Homeland where one of the characters is assassinated by hacking into his pacemaker. These types of examples seem closer and closer to reality every day.
-
“Three Steps for Disaster Planning Toward a Smooth Recovery”
According to the Federal Emergency Management Agency (FEMA), 40% of companies that experience a disaster never re-open. The primary goal in disaster recovery is to limit business disruption and restore critical services as soon after a disaster as possible.
When creating or reviewing a recovery plan an organization should consider the following:
Have a written document that includes step-by-step instructions, emergency phone numbers, and back-up protocols.
Include communication procedures so employees, vendors, clients, and renters know how and when to reach management.
Consider establishing an alternative method for phone service, such as forwarding incoming calls to a cell phone or remote number/call center.
Seek out reputable disaster recovery companies, and set up prearranged agreements that outline the priority of service and assessment of emergency equipment needed.
Then, the organization should review its vulnerable areas, document all office processes, and develop a contingency plan for each.
Plan to communicate with employees, customers, and vendors — who, what, when, and how.
Develop the appropriate protocols to ensure your data is safe and can be accessed.
Keep copies of insurance policies and other critical documents in a safe and accessible location (e.g., fireproof safe or backed-up computer system)
Develop a training program for your staff on what needs to happen before, during, and after a disaster.
Address protocols for different types of disasters and prioritize based on the likelihood of these events.
Next, the organization should understand and address the three elements of disaster recovery planning: prevention, detection, and correction.
Finally, the organization should test the disaster recovery plan. It should test the plan at least once per year to ensure the disaster plan as written still reflects the current operations.
-
The effectiveness of skimmers should only last as long as they remain relatively unknown. The advantage against skimmers is that the attacker’s equipment can be confiscated whenever they attempt this; with over-the-internet attacks, you would need law enforcement’s help to do anything about their physical machines. Education is still the best defense here: if consumers know these devices exist, they are less likely to fall victim to them. Tokenization of transactions will also help prevent man-in-the-middle attacks like skimmers, or anything else attackers can figure out.
-
Ecuador admits it has ‘temporarily restricted’ Assange’s Internet access
The article I selected this week is about how the country of Ecuador decided to cut internet access to the leader of the WikiLeaks website, Julian Assange.
There were reports that Secretary of State John Kerry asked Ecuador’s foreign ministry to stop Julian Assange from releasing information that may jeopardize the election. The reports were denied, but internet access has been cut for Mr. Assange.
Ecuador has harbored Mr. Assange to prevent prosecution for illegally penetrating U.S. and other private- and government-sector organizations and releasing the hacked information to the public.
It makes us question why they are cutting Mr. Assange’s internet access while at the same time keeping him safe during his attack efforts over the internet.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Exam 1: with Answers
Presentation: PDF format
Presentation: PowerPoint format
Quiz: Quiz
Quiz w/solutions: Quiz w/solutions
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What physical security risks are created by an organization’s implementation of an integrated PHYSBITS solution? What mitigations are needed to lessen the risks?
-
PHYSBITS stands for Physical Security Bridge to Information Security. It is a framework for collaboration between physical security and information security, in which linked information systems are used to control physical access to facilities, information infrastructure and resources. PHYSBITS focuses on the human aspect of physical security by integrating information security to provide authorized access to facilities and activity monitoring of personnel.
An organization implementing a PHYSBITS solution may experience physical security risks associated with loss of credentials/badges or other identifying information. Depending on what types of authentication the organization uses, a badge that provides access to restricted areas, if stolen or otherwise compromised, can give attackers access to the area to steal, vandalize, or destroy information system hardware or facilities. Another threat is harm to personnel within the facility, as in the Fort Knox shooting.
One way to mitigate the risk of unauthorized access is to add security controls such as biometrics at restricted sites, or keypads that require the person to provide a PIN along with the badge. To prevent intentional or unintentional harm to personnel, mitigations could include strict weapons policies or a screening process like the TSA’s.
-
The main motive of PHYSBITS is to enable collaboration between physical and IT security to support overall enterprise risk management needs. Converging these security environments addresses security gaps that fall between these two different security disciplines and helps protect organizations against multifaceted security threats.
PHYSBITS includes some common solutions:
1) Employee Provisioning and Access Management — setting up new hires with system and facility access, self-serve password management, and access card management, plus employee de-provisioning
2) Card Management — issuance and life-cycle management of access cards
3) Directory Management — infrastructure that enables distributed, scalable access to user information and attributes.
There are some risks associated with PHYSBITS:
1) If an access card or PIN is stolen, it can be misused by a miscreant to gain unauthorized access.
2) Verification of card or PIN access occurs through the directory, which holds the list of credible users; the directory can be hacked to insert the desired credentials.
3) The failure of the IT infrastructure providing access, such as swipe machines, could disrupt any form of authorized access.
In order to mitigate such risks:
– Integrating biometric access along with card or PIN access will reduce unauthorized access in case a card or credentials are lost or stolen.
– The central directory or employee database server should have a proper firewall and IDS in place to prevent hacks.
– Back up the IT infrastructure in case any failure occurs, and have an alternate method, which may involve a human approach, to check physical security during the failure window.
https://www.oasis-open.org/committees/download.php/7778/OSE_white_paper.pdf
-
To add:
1. Tailgating can pose a risk.
Control: It can be mitigated by having a security person in place to watch who comes in and who goes out.
2. Sometimes vendors or contractors are allowed inside the building without a badge, accompanied by an employee.
Control: Keep a physical registry to track the people coming inside the building.
-
Physical Security Bridge to IT Security is a standard approach for enabling integration of physical and IT security. PHYSBITS provides an architecture for managing and monitoring physical and IT security systems by bridging the two. There could be risks in using PHYSBITS:
1. Implementation will be complex and time-consuming. It would be difficult to establish the complex structure again at merged or acquired companies.
2. It will be difficult to maintain a single authentication credential for a person who needs different physical access rights and moves between locations.
3. If one system fails the other will be impacted. Suppose there is a restriction that only a person who is physically inside the company premises is allowed to log in to the database; if the access card reader is not working, then even if the person has entered the facility with a visitor, he will not be able to log in to the database server.
4. Attackers will be able to target one and attack both the systems. When IT and physical security are integrated, a person who manages to disable access by attacking the database can get physical access to the organization.
5. Dependency of systems – Let’s say the HR software module needs to be updated and will be down for maintenance for one day; it would be impossible to issue access cards that day. In this case access management would be done manually with a paper registry, but the system will not be able to record any transactions. This is a major risk, considering there would be no monitoring of how access is granted.
6. Complexities in the system will rise for tasks like patch management and installation of new software.
7. In certain cases it could be challenging to prevent a local authentication that global policies would otherwise authorize, e.g., is it possible to stop a user from logging in at a workstation in Chicago when we know that he “badged” into the Los Angeles office? (See the sketch after this list.)
8. IT security and physical security have different reporting systems. It will be difficult to maintain the same level of reporting to track the performance and tasks of both a security guard and a software developer, e.g., security guards may come in shifts and will be responsible for securing the same area.
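As a small sketch of the check raised in point 7 (my own illustration; the data feeds, field names and 12-hour window are assumptions, not part of the PHYSBITS specification), a logon attempt could be compared in Python against the user’s most recent badge-in location:

from datetime import datetime, timedelta

# Latest physical access events, e.g. exported from the badge system.
badge_events = {
    "jdoe": {"site": "Los Angeles", "time": datetime(2016, 10, 17, 8, 55)},
}

def logon_allowed(user, workstation_site, when):
    badge = badge_events.get(user)
    if badge is None:
        return False                           # no badge-in on record: deny or escalate
    recent = when - badge["time"] < timedelta(hours=12)
    return recent and badge["site"] == workstation_site

# jdoe badged into Los Angeles, so a Chicago workstation logon is rejected.
print(logon_allowed("jdoe", "Chicago", datetime(2016, 10, 17, 9, 30)))   # False
-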
Adding mitigation for risks in PHYSBITS:
-Dependencies between maintenance activities of the two systems must be kept to a minimum.
-As far as possible, the reporting structures of physical and IT security must be considered so that avoidable dependencies are resolved.
-Segregate authorization levels by location; assigned rights must also take location criteria into account, and there should be a 1:M relation between a person and locations.
-Ensure vulnerability assessments are done for applications handling both physical and IT security. A vulnerability in an interface of the physical security system must also be verified in the IT security system.
-A good visitor management system and escort system must be maintained.
In case of system failure, hard copies can be maintained; proper authorization and monitoring must still be done in such a case, and the paper entries must be added to the system once it is up and running.
-
The main purpose of PHYSBITS is to allow collaboration between physical and IT security to support enterprise risk management. Conjoining these two security environments bridges the security gaps between the two realms, establishing and strengthening the organization’s defenses against security threats.
However, some risks may arise around authentication, accessibility and hacking. The basis of these risks is the intermingling of the systems:
-Systems failure (natural disaster, power outage, etc.)
– The access between the physical and IT components may not be integrated correctly, which could give rise to risks and weaknesses.
– If the system is hacked, attackers are able to get physical access to the company as well as to the databases.
I’d mitigate these risks by enabling biometrics, security escorts throughout the facility, encryption of data and firewalls.
-
The Physical Security Bridge to IT Security (PHYSBITS) focuses on integration of physical and IT security technologies. It is a vendor-neutral approach for enabling collaboration between physical and IT security to support overall enterprise risk management needs. The technical portion of the document presents a data model for exchanging information between physical security and IT security systems.
Risks associated with implementing PHYSBITS are:
– Very complicated to implement.
– Creating a central database that ensures integrity means including all identification records, which are usually stored in the HR database. This is a big risk because the IT active directory should be kept separate from the HR database.
– Because of the integration, another risk is that if one system fails, the other system could fail as well.
This system has 3 main goals: data auditing, strong authentication and user rights management.
In order to mitigate these risks:
– Consider an additional security level such as biometric authentication.
– Consider a business continuity plan.
– Consider good monitoring services, including cameras or a physical person to check visitors.
-
Great post Vaibhav. I strongly agree with the 3rd risk you mentioned. In case of failure of IT systems, what can be done? Do you think a temporary manual system can work? You mentioned access being blocked because card readers are not functioning. In this case, should the company be prepared to open the doors without the access readers? Is having a backup plan like this another risk? I think companies might have to proactively think about infrastructure failure to support mainly two things: one, in case of IT failure, how will the process work, and two, in case of a natural disaster, a BCP-like situation, how will the physical access systems work?
-
What physical security risks are created by an organization’s implementation of a PHYSBITS solution? What mitigations would you recommend to lessen them?
The two biggest risks I see in implementing a PHYSBITS solution are:
1. An ex-employee taking a current employee’s badge
This could cause several physical security risks. The ex-employee may be able to access the physical equipment and compromise the integrity of the system. One example is accessing an electrical grid site. Imagine if an ex-employee had access to the electrical grid, secretly accessed an electrical storage area, and shut down the electricity for the city of Philadelphia. Yikes!
The mitigations you could put into place would be to update badges on a quarterly basis for employees with “admin”-type access to the sites. You could also assign a PIN to enter after you swipe your card. This provides multi-factor authentication.
2. An authorized person allowing a non-authorized person into the restricted area
As you mentioned in class, some people will think they are being nice by holding the door open for someone else. Unfortunately, that “someone else” may not have access to the room on the other side of the door. I have seen this happen at my children’s daycare. I did bring it up to the director.
The mitigation she put in place was an email specific to security measures. The facility also includes real-time cameras throughout the entire building. Parents can access the cameras through the website at any time.
For a larger organization, you could put in man-trap doors. This means one door closes before the other door opens. This allows a more secure look to the environment and may make the authorized person think twice about being nice.
-
Binu,
Great point about sub-contractors allowed inside buildings. One of my clients, a pharmaceutical company in the surrounding suburbs, has a high level of security measures in place. All vendors and contractors must attend a security class covering the physical grounds and authorized areas. We are only allowed to use the entrance and exit assigned to vendors and only have access to certain areas, but the biggest thing I noticed is that security is second nature there. Tailgating didn’t happen because the employees knew not to do it. We would simply wait for the other person to swipe their card. It was frustrating, but it was the culture. Security is a top priority, and the entire company practices security like a daily habit.
The best way to get a secure environment is to have the employees participate in the policies and understand why they are in place.
-
Fred,
As you mentioned, PHYSBITS focuses on the human aspect of physical security by integrating information security. I agree with the ex-employee risk, and the control you considered for it, adding an additional level of access control, is smart.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
For an organization choosing among Denver Colorado, Miami Florida, Redlands California and Tulsa Oklahoma, from a physical security perspective – where would be the best place to locate their data center? Why is […]
-
From a physical security perspective, I’d say Tulsa, OK or Denver, CO.
Why Tulsa?
First, the access to the city is excellent. It is conveniently located in the direct path of the central fiber corridor that connects Houston to Chicago. The risk of seismic activity is also low. Second, for data centers that rely on water for chilling, the state has many aquifers, including the Great Plains Aquifer, and numerous rivers that crosscut the state.
Why Denver?
It is located in a state with low-risk climate and geographical features, and Colorado offers an optimal area for disaster recovery. Like Tulsa, it is also in a low seismic zone.
Why not:
Miami? Too close to the water and vulnerable to hurricanes. A hurricane can quickly destroy a data center in a matter of seconds and put companies in trouble.
Redlands? High seismic risk and prone to earthquakes.
-
When considering the natural disasters in physical security, the organization should choose Denver, Co.
According to the Disaster Hot Zones of the World map (http://io9.gizmodo.com/5698758/a-map-of-the-world-that-shows-natural-disaster-hot-zones), the other choices seem riskier than Denver.
Miami, FL: Located in a hurricane path, which can bring flooding, destruction of property, and the residual effects of debris left by the hurricane. Hurricane season can disrupt business through closures for inclement weather and extended recovery times. It is also in a chemical accident zone; oil spills, for example, can affect the health of employees. Utilities and power may be disrupted and roadways may be inaccessible in the event of a hurricane.
Redlands, CA: Prone to seismic activity that can have long-lasting effects on a data center. Earthquakes can damage computer hardware and equipment and, at worst, compromise the facility's structural integrity. They can also knock out utilities and road networks, preventing the movement of critical supplies, like fuel for generators, if utility networks (pipes and power lines) are also down.
Tulsa, OK: Located in Tornado Alley (https://www.ncdc.noaa.gov/file/1536), where tornadoes can cause structural damage to facilities and loss of outside equipment. It is also located near the New Madrid fault line, which according to the USGS is as likely as California to experience seismic activity (http://www.dailymail.co.uk/news/article-1366603/Earthquake-map-America-make-think-again.html).
Denver, CO: The risks from natural disasters affecting physical security are lower in Denver than in the other choices. It has low seismic activity and sits away from major storm paths.
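To make the comparison concrete, here is a small, illustrative weighted risk-scoring sketch for the four candidate sites. The hazard scores (1 = low risk, 5 = high risk) and the weights are made-up placeholders, not real actuarial or hazard-map data; in practice they would be derived from FEMA, NOAA, and USGS sources like the ones cited above.

```python
# Illustrative weighted risk-scoring matrix for the four candidate sites.
# Scores and weights are placeholders for demonstration only.

weights = {"seismic": 0.3, "hurricane": 0.3, "tornado": 0.2, "flood": 0.2}

sites = {
    "Denver, CO":   {"seismic": 1, "hurricane": 1, "tornado": 2, "flood": 2},
    "Miami, FL":    {"seismic": 1, "hurricane": 5, "tornado": 2, "flood": 5},
    "Redlands, CA": {"seismic": 5, "hurricane": 1, "tornado": 1, "flood": 2},
    "Tulsa, OK":    {"seismic": 2, "hurricane": 1, "tornado": 5, "flood": 3},
}

def weighted_risk(hazards: dict) -> float:
    # Lower total = less overall exposure for the data center.
    return sum(weights[h] * score for h, score in hazards.items())

# Rank the sites from least to most risky.
for name, hazards in sorted(sites.items(), key=lambda kv: weighted_risk(kv[1])):
    print(f"{name:14s} {weighted_risk(hazards):.2f}")
```

Running the sketch simply ranks the sites by weighted score, with the lowest total indicating the least overall exposure, which mirrors the qualitative conclusion above that Denver comes out ahead.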
-
For an organization choosing among Denver Colorado, Miami Florida, Redlands California and Tulsa Oklahoma, from a physical security perspective – where would be the best place to locate their data center? Why is this place better and the other places worse?
From a physical security perspective, it would be best for an organization to locate its data center in Denver, Colorado. Among the four proposed cities, Denver is the least risky.
According to Bankrate, Florida, Oklahoma, and California are among the 10 states most at risk for major disasters:
“Florida has been roughed up by dozens of tropical storm systems since the 1950s, none worse than Hurricane Andrew in 1992. The Category 5 hurricane with gusts over 200 mph held the title as the most expensive natural disaster in U.S. history until Hurricane Katrina in 2005. Severe freezes have been disastrous for Florida farmers on multiple occasions.”
“The monster tornado that blasted through the Oklahoma City suburbs in May 2013 is only the latest devastating storm to hit a state that has recorded an average of more than 55 twisters per year since 1950. The worst in recent history struck near Oklahoma City in May 1999 with winds over 300 mph and killed 36 people. Other disaster declarations have involved severe winter storms, wildfires, floods…”
“California has weathered wildfires, landslides, flooding, winter storms, severe freezes and tsunami waves. But earthquakes are the disaster perhaps most closely associated with the nation’s most populous state. The worst quakes in recent years have included a magnitude 6.9 quake near San Francisco in 1989 that killed 63 and a magnitude 6.7 quake in Southern California in 1994 that killed 61”
The worst place to locate a data center would be Redlands, California.
-
The spots not ideal for data centers would be Miami, FL; Tulsa, OK; and Redlands, CA.
Miami, FL – Hurricane season can wreak havoc in the area and damage many parts of the city. The heat can also be a factor because it interferes with network signals and causes lag. I remember during my time doing tech support at Comcast, Florida would have outage or weak-signal issues because heat or rain caused signal interference, and when the lines were wet, businesses experienced slow internet connections or outages.
Tulsa, OK – Tulsa has problems with tornadoes, which are its major climate concern. Tornadoes have their seasons, with the prime season between March and August (source: http://okc.about.com/od/forthehome/ht/oktornadotips.htm), but one can hit at any time. A data center there wouldn't be a wise choice, seeing as it could get knocked offline.
Redlands, CA – Redlands is a prime spot for earthquakes, making it possible for data centers, utility facilities, and other sites to get severely damaged. The downtime can make it hard for businesses to run effectively: if important data is unavailable, certain functions can't do their jobs, slowing the overall process. Another threat is wildfire, as California is periodically affected by droughts and very high temperatures. Wildfires can last for days, and if a data center gets hit, recovery time is very long while the organization plans how to get back on its feet.
The best spot for the data center would be Denver, CO. Denver's climate makes it an ideal choice because the seasons are well spread out and it gets plenty of sunshine. The issue with Denver is snow, with March being the month of heaviest snowfall, but it does improve (source: http://www.denver.com/weather). Denver isn't near any major fault lines, so it won't get hit by earthquakes the way California does, nor will it encounter tornadoes or hurricanes. The heavy snowfall is a concern, but since Denver is used to it, the city is well equipped to handle it. This allows organizations to get back to work quickly, sometimes without skipping a beat.
-
Hi Brou,
I agree that Denver can be a good choice. As for Tulsa, the reason I consider it risky is that it falls within Tornado Alley, which means more than 15 tornadoes can be expected there in an average year.
This should help: https://en.wikipedia.org/wiki/Tornado_Alley#/media/File:Tornado_Alley.gif
-
I agree with you Alexandra that Miami should not be considered as an option.
Miami is the second most humid city in the US. Servers need the relative humidity in the air to stay around 45-55%. If humidity levels rise, water condensation occurs, which results in hardware corrosion and component failure. In Miami, additional cost would be required to control humidity and maintain it at the expected levels.
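As a rough illustration of how such a humidity requirement might be monitored, here is a minimal sketch that classifies sensor readings against the 45-55% band mentioned above. The threshold constants and the standalone print loop are assumptions for illustration; a real facility would feed this from its building-management or DCIM sensors and raise alerts through those systems.

```python
# Minimal sketch: classify relative-humidity readings against a 45-55% band.
# Thresholds and sample values are illustrative assumptions.

RH_LOW, RH_HIGH = 45.0, 55.0

def check_humidity(reading_percent: float) -> str:
    """Classify a relative-humidity reading from a server-room sensor."""
    if reading_percent < RH_LOW:
        return "ALERT: too dry - raised risk of electrostatic discharge"
    if reading_percent > RH_HIGH:
        return "ALERT: too humid - raised risk of condensation and corrosion"
    return "OK: humidity within the 45-55% band"

if __name__ == "__main__":
    for sample in (40.2, 50.1, 61.7):
        print(sample, "->", check_humidity(sample))
```
-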
Well explained, Niel. I agree with you that Denver could be the best choice. To add to your points, Denver also has a good temperature balance for hosting a data center. Experts say that cooling is the most difficult and costly aspect to manage, and average yearly temperatures in Denver range from a high of 64°F to a low of 36°F.
-
For an organization choosing among Denver Colorado, Miami Florida, Redlands California and Tulsa Oklahoma, from a physical security perspective – where would be the best place to locate their data center? Why is this place better and the other places worse?
From a geographical perspective, both Miami, Florida and Redlands, California are very close to the coastline, while Denver, Colorado and Tulsa, Oklahoma are located inland. The location chosen for the data center should take physical security into account and be as little exposed as possible to natural disasters like floods, hurricanes, or earthquakes.
Both Denver and Tulsa are reasonable places to locate the data center because they are in low geographical risk areas. The worst place to locate the data center, I think, is Miami, Florida, because the city is near the ocean, which raises the risk that the core servers will be physically damaged by natural disasters like hurricanes and tsunamis.
-
Good post, Loi. I totally agree with you that Miami is not a good choice for locating the data center. To mitigate the risk that natural disasters may damage the data center, transferring the risk to a third party, for example by purchasing insurance, might also work.
-
I totally agree with you, Brou. Miami is near the coastline, and the risk of natural disasters is higher there than in the other three cities. I also agree that Tulsa and Denver may be the better choices for locating the data center, but I was wondering which of the two is the best choice.
-
For an organization choosing among Denver Colorado, Miami Florida, Redlands California and Tulsa Oklahoma, from a physical security perspective – where would be the best place to locate their data center? Why is this place better and the other places worse?
As others have said, Denver, CO would be the best location for the data center to be located.
The other locations listed in this question all have significant physical/environmental security risks. For example, with rising seas, Miami is at real risk of losing significant amounts of land; Redlands' proximity to major faults means that strong earthquakes are a risk; and Tulsa is located in Tornado Alley.
Denver, on the other hand, is exposed to considerably fewer environmental risks and has the added benefit of a cooler climate, which should reduce cooling costs.
-
For an organization having to choose between Denver, Colorado; Miami, Florida; Redlands, California; and Tulsa, Oklahoma from a physical security perspective, in my view the best place to set up the data center would be Denver, Colorado. The pros and cons for each of the places are as follows:
Miami, Florida – is located in the hurricane zone, increasing the probability of disruption and destruction during the annual tropical storm season. Almost every year, the area has witnessed severe storms, rain, and resulting flooding. This makes it a poor choice for setting up a data centre.
Redlands, California – is close to the Pacific Ring of Fire and in an area of high seismic activity, so it too would be a high-risk location. Apart from the seismic activity, California is also in its sixth year of drought and is susceptible to wildfires. Areas close to these forest fires, which often last for days or weeks, are in danger of buildings and systems being destroyed by fire. Because of this, Redlands would not be a good choice for a data centre.
Tulsa, Oklahoma – is situated in an area prone to tornadoes and flooding. In 1999, a total of 74 tornadoes swept across Oklahoma and nearby states in less than 21 hours. In 2015, many areas of Oklahoma suffered their worst floods, and in 2013 a 1-2 mile wide tornado stayed on the ground for 39 minutes and caused damages estimated at over 2 billion dollars. Because of this, Tulsa would not be a good fit for a data centre either.
Denver, Colorado – the city and its surrounding areas have a very low probability of natural disasters. The much cooler climate is also favorable, since organizations can take advantage of free or cheaper cooling in their data centres. In fact, companies like IBM have data centres located in Boulder, Colorado, which is close to Denver.
-
For an organization choosing among Denver Colorado, Miami Florida, Redlands California and Tulsa Oklahoma, from a physical security perspective – where would be the best place to locate their data center? Why is this place better and the other places worse?
Out of the options provided, I think Denver, Colorado will be the best choice to locate their data center.
Denver, CO: According to the ForTrust data center company, Colorado is a low-risk location for natural disasters due to its low incidence of earthquakes, floods, hurricanes, and tornadoes. ForTrust also notes that Colorado is located in seismic zone 1, the lowest-risk zone for earthquakes.
As far as flooding is concerned, FEMA notes that flooding can occur anywhere in the United States. In Colorado, flooding occurs due to spring snowmelt, when rivers swell as they flow from Colorado's mountain ranges, and the state has sophisticated infrastructure to keep snowmelt and rain from flooding populated areas.
For tornadoes, the National Oceanic and Atmospheric Administration ranks Colorado 9th in the nation in the number of tornadoes per year, and Colorado falls outside "Tornado Alley."
Colorado also escapes the major effects of hurricanes, which are mostly experienced by areas close to the coast; according to NOAA, the worst hurricanes have historically struck the Gulf Coast and the East Coast.
Snowfall is commonly associated with Colorado, but the heaviest accumulation is typically to the west of Denver, which has a semi-arid climate. Denver also sits at the foot of the Rocky Mountains, so its climate is relatively mild.
As far as wildfires are concerned, there have not been wildfires of major magnitude that have impacted Denver. Of the 60 devastating wildfires listed by the National Interagency Fire Center, only three occurred in Colorado, so the risk is very low.
As for other resources, Denver sits between the western and eastern halves of the country and has access to major communication networks.
Why not Miami?
According to NOAA, the southeastern United States is significantly susceptible to yearly hurricane activity.
Why not Tulsa?
It falls within Tornado Alley, with more than 15 tornadoes a year on average.
Why not Redlands?
It falls within Seismic Zone 3, where the probability of earthquakes is higher.
-
When deciding on a location in the country, it is important to consider the environmental factors that could affect the data center. The danger of each hazard, including earthquakes, floods, hurricanes, tornadoes, and even volcanoes, fluctuates across the country. To determine the safest location, we can consider whether any of the candidates is at high risk of a disaster.
Miami, Florida is a poor location since there is a risk of hurricanes and the floods they can cause. Miami is also listed as one of the most vulnerable cities to hurricanes.
Redlands, California is at risk of earthquakes. It is very difficult to protect a datacenter from an earthquake.
Tulsa, Oklahoma is prone to tornadoes. There is a vertical strip in the middle of the country referred to as tornado alley because that is where most tornadoes occur in the US.
Denver, Colorado, besides being snowy, is not known for being prone to natural disasters. A blizzard can disrupt service but will leave far less damage than the previously mentioned disasters, and Denver is very capable of dealing with heavy snow conditions. I would choose Denver as the city in which to build the datacenter.
-
I think you can rule out Miami, Florida and Redlands, California due to the risk of natural disasters and forest fires. The backbone of a data center is power and network connectivity, so I would choose a geographic area with access to a reliable power grid. I expect Tulsa and Denver both have that, so it then comes down to your company's current location. If building from scratch weren't an issue, I would look for a location with room for expansion that is accessible by multiple roadways and/or near an airport. Denver is a bigger city than Tulsa, and Tulsa likely has much more room for expansion while still having an airport and multiple roads that keep the location easily accessible. So although being close to customers provides advantages and my customers will probably not be in JUST Tulsa, I think the growth of the virtual world and virtual customers would lead me to pick Tulsa.
Source: https://www.expedient.com/blog/the-where-and-why-of-choosing-data-center-location/
-
Hi, Ian
You had a very good analysis! I like how you take the location into consideration.
However, I think Tulsa, OK has a higher environmental risk than Denver, CO. Homefacts lists Tulsa, OK as a very high risk area for tornadoes. According to the records, the largest tornado in the Tulsa area was an F5 in 1960 that caused 81 injuries and 5 deaths, and the yearly average is three tornadoes. Therefore, I believe Denver, CO is a better option for locating the data center.
Source: http://www.homefacts.com/tornadoes/Oklahoma/Tulsa-County/Tulsa.html
-
Ian,
I don’t think Tulsa is a good idea because due to tornadoes and seismic activities. Truth that you want to have a data center with easy access. however, I do not think that it is wise to locate it in a place where the risk of natural disaster is high.
-
Abhay,
You did a great analysis of why Denver is the best place for a data center. The only thing I would mention to everyone is this:
Two is better than one, three is better than two, and so on.
Redundancy is the key. Denver is the best place, but finding another location that matches Denver might be a good idea too.
-
Abhay – that is good insight. I did not think about Tulsa being in Tornado Alley, but you are definitely right. With that said, I may change my choice from Tulsa to Denver. Denver has the mountains to shield it from tornadoes, and it is not near the coast, so you do not have to worry about hurricanes. Denver also has a major airport and is not an overly huge or crowded city; there is plenty of land right outside the city to build a data center, yet it still has major roads keeping it easy to get to.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What physical security risks are created by an organization's implementation of a PHYSBITS solution? What mitigations would you recommend to lessen them?
For an organization choosing among Denver C […]
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
-
The sources of EMP are typically chemical-based explosions, but most notably large nuclear detonations. An EMP occurs, for example, when a nuclear device is detonated high in the atmosphere.
An EMP is a physical security threat because of its nature. In fact, it is a super-energetic radio wave that can destroy, damage, or cause the malfunction of electronic systems by overloading their circuits. An EMP attack on the U.S. would leave the country with no electricity, no communications, no transportation, no fuel, no food, and no running water. That’s huge.
Also, today's world depends on advanced electronic systems, which makes it even more vulnerable to EMP. In order to defend itself against EMP, an organization needs integrated catastrophic planning, including equipment protection such as installing surge protectors or storing important electronics in a Faraday cage or other electromagnetic shield.
-
An electromagnetic pulse (EMP), also sometimes called a transient electromagnetic disturbance, is a short burst of electromagnetic energy. Such a pulse’s origination may be a natural occurrence or man-made and can occur as a radiated, electric or magnetic field or a conducted electric current, depending on the source. Caused by the high-altitude detonation of a nuclear weapon, electromagnetic pulse can cause widespread damage to electric systems across a wide area.
All electronic equipment and apparatus could be destroyed. Every device that relies on integrated circuits for operation could be immediately disabled or destroyed.
Unlike a cyber-attack where “fingerprints” can often be found for forensic analysis, an IEMI attacker will not leave any information behind.
An EMP shutdown of electronics is so rapid that the log files in computers will not even record the event.
The Heritage Foundation report recommends pursuing intelligence, interdiction, and deterrence to discourage an EMP attack: the highest priority is to prevent an attack, shape the global environment to reduce incentives to create EMP weapons, and make such an attempt difficult and dangerous.
What's more, it recommends protecting critical components of key infrastructures, especially "long lead" replacement components.
Resource: http://www.heritage.org/research/reports/2010/11/emp-attacks-what-the-us-must-do-now
-
EMP stands for electromagnetic pulse, which is essentially a short burst of electromagnetic energy. If strong enough, this pulse can damage or destroy electronic computers, posing a significant threat to businesses. The article I found in the New York Times on EMPs (see below) revolves around how an EMP can be used as a weapon of mass destruction and is a threat against the United States. However, not all EMPs are that drastic. Thunderstorms are a common source of EMP: if a storm is strong enough and lightning strikes are close, the lightning can fry your electronic devices. The sun can also create solar flares that emit an EMP which can reach the earth and cause damage. Therefore, from a risk analysis standpoint, it is more likely that a thunderstorm or solar flare takes out one's equipment than a full-fledged EMP attack on the United States.
Since EMPs can damage or destroy computer electronics, businesses would consider them a physical threat, much like a natural disaster. To protect itself, a business can follow what the government has done, which the Wall Street Journal identifies as using surge arrestors, Faraday cages, micro-grids, and underground data centers. A business that wants to address the risk of an EMP will likely need to set up a data center that specifically implements these protections. Likewise, businesses need to include in their Business Continuity Plans the ability to resume operations in the event of a loss of technology and electricity. While a large-scale EMP, such as a weapon of mass destruction or a major solar flare, is a threat that could destroy a business, businesses need to decide whether it is a risk worth addressing.
Articles: http://www.wsj.com/articles/james-woolsey-and-peter-vincent-pry-the-growing-threat-from-an-emp-attack-1407885281
http://www.computerworld.com/article/2606378/new-data-center-protects-against-solar-storms-and-nuclear-emps.html
-
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
An EMP (electromagnetic pulse) is a short burst of electromagnetic energy. It can result from natural occurrences like lightning or be man-made.
EMP radiation can be caused by the detonation of a nuclear bomb, a solar flare, a device intended to cause an EMP, a close lightning strike, or a massive power-line short circuit. An EMP source is a device that intentionally produces an electromagnetic pulse; it can be a small device used by police to disable a fleeing vehicle, a source used to test equipment for EMP resistance, or a weapon intended to disable enemy equipment. Computer equipment, appliances containing microprocessors, semiconductor electronics, cellular phones, power grids, generators, transmission lines, computer disks, and UPS units are all susceptible to EMP. EMP interference can damage electronic equipment or disrupt its performance by inducing large currents in the conductors connected to the equipment. At higher energy levels, such as a lightning strike, an EMP can damage an entire building.
An organization can protect itself from EMP threats by:
• Enclosing wiring in metal conduits and shielding the wiring connected to sensitive equipment; make sure to ground the shields.
• Making sure that any person entering the organization is not carrying an EMP device that could damage equipment.
• Ensuring that purchased equipment is not faulty and does not itself produce EMP.
• Fusing long wires and cables and using large ferrite beads on power wiring.
• Increasing the current-carrying capacity of the building ground.
• Avoiding or minimizing the use of semiconductors where possible; if they are used, making sure they are rated at maximum voltages and currents at least 10 times the values actually in use.
• Bypassing suitable electronics to ground with capacitors rated for several thousand volts and heavy currents.
• Designing circuitry to be resistant to high voltages and currents.
• Providing battery backup power for essential equipment.
• Providing the above protections to essential equipment, such as emergency communications and traffic signals.
Corrective measures to be taken to restore operations after an EMP has occurred:
• Stock replacement equipment and parts in metal containers or rooms.
• Wrap replacement parts in aluminum foil.
• Keep a stock of batteries. And rotate the stock, so the newest ones are not taken out and used first.
• Stock up on light bulbs that are not CFL or LED.
It is always better to take the necessary steps to prevent the risk and also to have corrective measures in place. An EMP can cause extensive damage to the organization; though the probability is low, the organization needs to be prepared and cannot neglect it.
-
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
An EMP is a high-intensity burst of electromagnetic energy caused by the rapid acceleration of charged particles. The causes can include the detonation of a nuclear bomb, a solar flare, a device intended to cause an EMP, a close lightning strike, or a massive power-line short circuit. Sources of EMP include a small device used by police to disable a fleeing vehicle, a source used to test equipment for resistance to EMP, and a weapon intended to disable enemy equipment. It is a physical security threat because of its power to destroy computers, computer equipment, and all electrical and technological infrastructure, effectively sending the U.S. back to the 19th century. Organizations can defend themselves against EMP by storing equipment inside metal cases with no openings, using old-style electric telephone equipment, shielding wiring connected to sensitive equipment or enclosing wiring in metal conduits, and using large ferrite beads on power wiring.
Sources: http://midimagic.sgc-hosting.com/emp.htm
http://www.heritage.org/issues/missile-defense/electromagnetic-pulse-attack
-
An electromagnetic pulse is an extremely powerful burst of electromagnetic energy capable of causing damage and/or disruption to electrical and electronic equipment.
What can cause an EMP? 1. Detonation of a nuclear bomb 2. A solar flare 3. A device intended to cause an EMP 4. A close lightning strike 5. A massive power-line short circuit.
What is an EMP source? An EMP source is a device that intentionally produces a small EMP. There are three kinds: 1. A small device used by police to disable a fleeing vehicle 2. A source used to test equipment for resistance to EMP (classified strength) 3. A weapon intended to disable enemy equipment (classified strength)
Why is it a physical security threat? For example, a lightning strike close enough to your data center can produce an EMP. If no protections were set up beforehand, there is a high chance that your data center is ruined; all the servers and monitors could be destroyed as well.
How can an organization defend itself against EMP? 1.) Shield wiring connected to sensitive equipment, or enclose wiring in metal conduits, and ground the shields 2.) Fuse long wires and cables 3.) Increase the current-carrying capacity of the building ground 4.) Avoid the use of semiconductors where possible 5.) If semiconductors are used, make sure they are rated at maximum voltages and currents at least 10 times the values actually in use 6.) Use large ferrite beads on power wiring 7.) Bypass suitable electronics to ground with capacitors rated for several thousand volts and heavy currents 8.) Design circuitry to be resistant to high voltages and currents 9.) Provide battery backup power for essential equipment 10.) Provide the above protections to essential equipment, such as emergency communications and traffic signals.
-
Although EMP is not as common as the cyber threats posed to an organization's information infrastructure, it should not be a threat that is taken lightly. A massive EMP, either natural or man-made, can have devastating consequences for any organization. When deciding how a company should protect itself from this risk, I believe it also comes down to the criticality of each system in use. What if the organization can't protect all of its computers and its data center? It needs to decide which systems are critical (backups, critical servers, or generators) for the business to continue and recover after an event.
In this case, a massive EMP would likely take out power grids, utility services, generators, and pretty much any electronic equipment. It is also important to consider protecting the physical assets required to keep the information systems running, like generators, UPS units, water pumps for cooling, etc.
-
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
An electromagnetic pulse (EMP) is a short burst of electromagnetic energy caused by an abrupt, rapid acceleration of charged particles, usually electrons. The sources of EMP can be a natural occurrence or man-made and can occur as a radiated, electric or magnetic field or a conducted electric current, depending on the source.
Natural EMP events can be:
• Lightning
• Electrostatic discharge
Man made EMP events can be:
• Electric motors
• Gasoline engine ignition systems
• Nuclear electromagnetic pulse due to a nuclear explosion
It can be a physical security threat because:
At a high voltage level, an EMP can induce a spark; an electrostatic discharge while fueling a gasoline-engine vehicle, for example, can cause a fuel-air explosion. Such a large and energetic EMP induces high currents and voltages.
A very large EMP event, such as a lightning strike, can damage data centers, IT infrastructure, and electricity grids (the failure of which can cripple a country's economy, including its IT sector), and can destroy electronic networks, buildings, and aircraft directly, either through heating effects or the disruptive effects of the very large magnetic field generated by the current. An indirect effect can be electrical fires caused by heating.
Organizations can defend themselves against EMP in the following ways:
Building their own power sources as part of business continuity planning, such as solar panels or hydroelectric power systems, which can help them keep functioning during an EMP event.
Organizations also need protection from EMP weapons and should have proper controls to prevent any outsider or employee from bringing or using such devices within the premises.
-
What are the sources of Electromagnetic Pulse (EMP)?
EMP is a short burst of electromagnetic energy. The sources of EMP can be man-made, such as directed-energy weapons or nuclear blasts, or natural, such as solar flares.
Why is it a physical security threat?
The role of physical security is to protect the physical assets that support the storage and processing of information. An EMP damages those physical assets directly, the way a solar flare can damage bulk power system assets (e.g., transformers) by exploiting their vulnerabilities.
How can an organization defend itself against EMP?
1. Grounding
Proper grounding and a proper relationship between the neutral and the ground are not only essential to meet National Electric Code (NEC) requirements, but are imperative to achieve optimum performance of microprocessor-based equipment such as computers, programmable logic controllers, communications systems, and telemetry systems.
2. Shielding
Surge events cause a magnetic field to be induced in conductors within a given radius, depending on the magnitude.
3. Filtering
Passive filter networks block induced surge currents and voltages on data and power circuits, hardening electronics against lightning and EMP surge energy.
4. Surge Protection
A voltage induced between conductors can drive a surge current into an electronic circuit, or conversely, a current induced onto a conductor can create a voltage across a series impedance as the current propagates into a circuit.
Source: Vacca, Physical Security Essentials, Chapter 54
-
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
An electromagnetic pulse (EMP) is a high-intensity burst of electromagnetic energy caused by the rapid acceleration of charged particles. A catastrophic EMP would cause the collapse of critical civilian infrastructure such as the power grid, telecommunications, transportation, banking, finance, and food and water systems across the entire continental United States, infrastructure that is vital to the sustenance of our modern society and the survival of its citizens. EMP can be used as a weapon of mass destruction, and Boeing has announced that it successfully tested an electromagnetic pulse weapon.
The sources of EMP are:
1) A deliberate electromagnetic weapon attack
Without causing any harm to humans, the effects from an IEMI weapon could disable regional electronic devices.
2) A nuclear device detonated in space, high above the U.S.
A High-Altitude Electromagnetic Pulse (HEMP) detonated 30 miles or higher above the Earth’s surface would destroy electronic devices within a targeted area without creating blast damage, radiation damage, or injuring anyone.The EMP can cause damage to electronic equipment within an organization or it can affect its performance. An EMP could permanently destroy all electronic equipment including hardware, software, and data.
An organization can protect and defend itself against EMP by:
1. Having a business recovery plan so it can resume operations after a loss
2. Provide battery backup power for essential equipment.
3. Provide the above protections to essential equipment, such as emergency communications and traffic signals.
Sources: http://empactamerica.org/our-work/what-is-electromagnetic-pulse-emp/
-
An electromagnetic pulse (EMP) is a short burst of electromagnetic energy. It may originate from a natural or man-made occurrence and can appear as a radiated electric or magnetic field or a conducted electric current, depending on the source.
Natural occurrence that cause EMP include:
-Lightning
-Electrostatic Discharge (two charged objects coming into close proximity with each other)
-Coronal Mass Ejection (a massive burst of gas and magnetic field arising from the solar corona and released into the solar wind, sometimes referred to as a solar EMP)
Man-made occurrences that cause EMP include:
-Switching action of electrical circuitry, whether isolated or repetitive (as a pulse train).
-Electric motors can create a train of pulses as the internal electrical contacts make and break connections as the armature rotates.
-Gasoline engine ignition systems can create a train of pulses as the spark plugs are energized or fired.
-Continual switching actions of digital electronic circuitry.
-Power line surges. These can be up to several kilovolts, enough to damage electronic equipment that is insufficiently protected.
-Nuclear electromagnetic pulse (NEMP), as a result of a nuclear explosion.
EMP is a physical threat because it is generally disruptive or damaging to electronic equipment, and at higher energy levels a powerful EMP event, such as a lightning strike, can damage physical objects such as buildings.
In order to protect itself against EMP, the organization can utilize:
– Faraday cage – surround important electronic equipment completely with metal, which conducts the electromagnetic radiation around the contents.
– Electrical grid – acts as a huge antenna that captures electromagnetic radiation and conducts it into the earth.
– Surge protectors
In addition, a business should always have a backup and data recovery plan in the event that the above protections fail.
-
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
An electromagnetic pulse is a sudden burst of electromagnetic radiation that is large enough to cause wide-scale disruption (Wikipedia). The sources of EMP include, but are not limited to, the detonation of a nuclear bomb, a solar flare, a device intended to cause an EMP, a close lightning strike, and a massive power-line short circuit.
EMP is capable of causing damage and/or disruption to electrical and electronic equipment, so it is a physical security threat to most organizations.
First, a company can transfer the risk to an insurance company. Then, in order to defend against EMP, an organization can do the following:
Shielding: First, the equipment or rooms that require protection (e.g. communications console, utility room, electrical service room, entertainment room, or even the entire shelter), are covered with an overall shield. This is the first line of defense and provides excellent, although not perfect, protection. The shield must be very carefully designed and constructed; e.g., improper materials selection may not achieve enough shielding, incompatible materials may result in corrosion, and incorrect seams or bonding may greatly reduce or even destroy the shield’s effectiveness.
Alternative power source: Even if the devices could survive from EMP, they will need to have a usable and sustainable source of power. This can be done by setting up alternative energy sources in advance.
-
Loved the detailed explanation, Binu. In addition to the causes you mentioned, EMP could also be caused by geo-magnetic storms. To elaborate further on why Electro Magnetic Pulses are a physical threat,
I'd like to explain the Compton effect, in which an intense release of electromagnetic energy causes photons to knock loose electrons in the atmosphere. The electrons, guided by the Earth's magnetic field, essentially become a giant and powerful circuit. The current flowing in this circuit generates intense electromagnetic fields that propagate to the surface of the earth. When these fields cross conductive materials they release energy into the material. As we know, electronic devices are full of conductive material, so given sufficient density, the energy absorbed can fry the device.
Source: Robert Frost, NASA
-
Great explanation, Paul. As you mentioned, companies should consider the business continuity aspect, especially when protection against EMP can be achieved for only a little additional cost. The concept of underground data centers and of shields or cabinets made of EMP-resistant materials is great.
Experts also mention that the risk is not high, as the expected frequency of such an event is currently low. However, with the rising threat of terrorism, protection against an EMP from a terrorist-sponsored nuclear blast has become a topic of much discussion among data center professionals. Even if we consider the likelihood low right now, the impact is high.
Iron Mountain's National Data Center in Western Pennsylvania has been built to be EMP resistant. It is located 220 feet below ground in a more than 450-acre underground facility, which naturally absorbs 90 percent of EMP pulses. This greatly reduces the cost and impact of any minimal residual shielding required in a customer's individual space to ensure that electrical, mechanical, and power infrastructure and subsystems are thoroughly shielded and tested to be EMP resistant.
-
What would the cost be for an underground data center? It seems like it would be more expensive than building one above ground. Would the risk/impact justify the expense? Setting cost aside, it would be an effective strategy.
-
I definitely agree with all of your risk strategies, although I was thinking that if an organization is affected by an EMP, then most likely others will be as well. So even if the data center is protected and survives the EMP, how will the surrounding damage affect its ongoing operation? If infrastructure and nearby businesses are severely affected, there could be a long-term effect. For example, how long can the backup generator last? How does it recharge, with fuel or electricity?
-
I first learned about the EMP from the movie Ocean's Eleven, in which the crew used the device to take out all the electronic devices in Las Vegas to break into a casino. While this is fiction, since the device they used is probably too small to take out an area that large, I don't doubt that an EMP could be shrunk enough to damage key technology inside a company. There is a video online of someone who claims to have built a small EMP device that can damage a cellphone.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
-
The article I read was called "Remote switch-on enlists Mac webcams as spies," which is very concerning, taking voyeurism to a whole new level via technology. The article explains the use of new malware that has enabled attacks via webcam. Some of these attacks have led to the theft of personal information as well as the use of surveillance as a means of blackmail.
Graham Cluley, a security researcher, points to "recent malware detections that showed Eleanor and Mokes arrive ready to record video and audio content from infected computers."
This article sheds light on the ever-increasing threat of technology. It definitely makes me wonder if my webcam is on at any given moment. I will make it a point to close my computer when it's not in use, that is for sure.
http://www.scmagazine.com/remote-switch-on-enlists-mac-webcams-as-spies/article/530381/
-
‘Security Fatigue’ Can Cause Computer Users to Feel Hopeless and Act Recklessly, New Study Suggests
NIST conducted a study on the weariness users express when they are forced to adhere to certain types of security policies. Our program makes it clear that the largest vulnerability in an organization is its people. However, I think it's important that we, as security professionals, continue to place value on the usability of our policies. We know that security and ease of use often sit at opposite ends of the same scale, but a control that is overly cumbersome is likely to be tossed aside by end users. This ultimately weakens the organization's security stance, even though "on paper" we may think we're doing the right things.
The three “takeaways” from the article on not fatiguing your end-users:
1. Limit the number of security decisions users need to make;
2. Make it simple for users to choose the right security action; and
3. Design for consistent decision making whenever possible. -
Police Bust Multi-Million Dollar Indian Vishing Ring
Mumbai police have smashed an international vishing operation that could have netted its ringleaders as much as $7.5 million from US victims who believed they were being called by the IRS. Police detained over 700 staff at several call centers in Thane and seized hundreds of servers, hard disks, laptops, and other equipment. Staff at the call centers pretended to be calling from the IRS and claimed the victim had outstanding taxes or fines to pay, which victims were instructed to pay with online pre-paid cash cards. The callers used VoIP via proxy servers to anonymize their location. Staff said they had been heavily coached to speak with an American accent and were handed a six-page script to use. The operation may also extend to the UK and Australia.
Vishing is the act of using the telephone in an attempt to scam users into surrendering private information that will be used for fraud. It was rated the most popular type of cyber fraud tactic according to Get Safe Online. The organization behind this vishing ring is still out there, so people need to be more careful about their personal information and this kind of fraud.
Link: http://www.infosecurity-magazine.com/news/police-bust-multi-million-dollar/
-
The article I read is about Yahoo using a secret tool to scan users' email content for a US spy agency.
Yahoo recently suffered a major data breach and is now also sharing users' personal data, much like Apple with iMessage (referring to my article from last week).
Yahoo built custom software that scans emails without users' knowledge, looking for specific information requested by agencies like the FBI.
The funny thing is that Yahoo's own security team apparently was not even aware of it. That's how secretive this software is.
What happened is that the US intelligence agency approached the company last year with a court order, which, I assume, gave the company no choice but to comply with the directive. However, I do not understand why Yahoo (the CEO and the general counsel) decided to go behind the security team's back and ask the company's engineers to build the secret software program. This is an example of a lack of communication in a company, and it led to the resignation of the chief information security officer, who disapproved of being left out of a decision that hurt users' security.
-
The article I read is named "High Cybersecurity Staff Turnover is an Existential Threat." According to the article, nearly 65% of cybersecurity professionals struggle to define their career paths, leading to a high turnover rate that opens up big security holes within organizations. Of course, most people want a better job with a higher salary or more opportunities for promotion, but this also brings the potential risk that a departing cybersecurity staffer may impact his or her former company's information assets, since he or she understands the company's IT systems well. Even worse, if he or she was one of the initial members who built the company's IT governance framework and was involved in its core decision making, then these former senior staff know the security loopholes of the former company. In some cases, they may go to work for a competitor, and with a good understanding of the former company's existing weaknesses and loopholes, they could use those loopholes against it.
On the other hand, many cybersecurity professionals say they are happy in their roles, and many believe they will keep their former employers' loopholes secret out of professional ethics.
Source: http://www.infosecurity-magazine.com/news/high-cybersecurity-staff-turnover/
-
PwC: Security is No Longer an IT Cost Center
Many organizations no longer view cybersecurity as a barrier to change, nor as an IT cost. That's the word from the Global State of Information Security Survey 2017 from PwC US, which found a distinct shift in how organizations view cybersecurity, with forward-thinking organizations understanding that an investment in cybersecurity and privacy solutions can facilitate business growth and foster innovation. According to the survey, 59% of respondents said they have increased cybersecurity spending as a result of the digitization of their business ecosystem. The survey also found that as trust in cloud models deepens, organizations are running more sensitive business functions in the cloud; approximately one-third of organizations entrust finance and operations to cloud providers, reflecting this growing trust.
resource: http://www.infosecurity-magazine.com/news/pwc-security-is-no-longer-an-it/
-
Hi Andres,
I agree. I like to hope that users will see "Secret" conversations and ask, "How come this isn't the standard?" However, I think the majority of users won't even use this function, while the rest will likely just think of it as a way to delete a message after a certain time without understanding the real premise. I may be pessimistic, but I really hope this is a step in the right direction.
-
Synopsis of "2016 Emerging Cyber Threats Report" from the Georgia Tech Institute for Information Security and Privacy.
This report came out of the security summit in 2015. It speaks of cyber threats in broader terms and addresses these four areas:
Consumers continue to lose their privacy as companies seek to collect more data:
As consumers become more mobile and dependent on technology in their everyday lives, companies are taking advantage of big data collection to improve operations and lead generation, posing a significant risk to privacy. Few technologies avoid collecting data, and unfortunately, consumers are giving up a lot of their privacy for convenience.
Growth of internet-connected devices is creating a larger attack surface:
As more devices get connected to the internet, hackers are looking for vulnerabilities to exploit. Devices, sensors, cars, industrial control systems, and products from just about every industry are being added to the Internet of Things, which also adds more entry points for attacks. The challenge, and a still-growing concern, is that these devices do not have security built in, and there is no single solution for securing all devices in the IoT.
Growth of the digital economy and the lack of security professionals:
The influx of technology creates a high demand for security professionals to help protect organizations from attacks. According to research conducted by Frost & Sullivan and the International Information Systems Security Certification Consortium (ISC)2, the worldwide shortfall of security professionals will reach 1.5 million workers by 2020.
Information theft and espionage show no signs of abating:
Cyber-criminals who are not just financially motivated have become commonplace. Attacks are becoming more sophisticated, and nations along with private organizations are at risk from cyber attacks.
To read the report: http://www.iisp.gatech.edu/sites/default/files/documents/2016_georgiatech_cyberthreatsreport_onlinescroll.pdf
-
The article I read is about how many of the recent major breaches have something in common: in each of them, the path of attack has been the common password, because hackers know the password is the weakest link in cybersecurity today. There are a number of reasons passwords are failing, including the reuse of passwords across accounts (e.g., Facebook and work email). The article argues we need to make the password problem a national priority and come up with something better. We need to leverage and develop the next generation of authentication technologies to authenticate identities in a way that is stronger than passwords but not too inconvenient for users.
“This innovation is being spurred by the near-ubiquity of mobile devices that contain biometric sensors and embedded security hardware, creating new ways to deliver strong authentication – in many ways, with models that are both more secure and easier for the end-user, relative to “first generation” authentication technologies.”
-
Tech support scams put UK Users at Risk
A warning has been issued about tech support scams aimed at UK users. The security company ESET revealed data claiming that the UK's share of HTML/FakeAlert malware rose to over 10% over the past month.
HTML/FakeAlert refers to the malware typically used in tech support scams. It flashes up fake alert messages about a supposed malware infection or other technical issue with the victim's machine. The victim is then typically urged to contact a fake tech support phone line, which may be a premium-rate number, or to download and install a fake security tool which is actually additional malware.
Users are advised to mitigate the risk of support scams like this by keeping machines patched, up to date, and protected with reputable security software. They should remain vigilant and should not trust unsolicited calls purporting to come from major IT companies like Microsoft; instead, they should contact tech support via official channels such as a phone number or email contact on the vendor's website, the firm added.
Microsoft claimed last year that such scams had cost more than three million victims over $1.5 billion, and says it has received more than 175,000 complaints about these scams over an 18-month period.
http://www.infosecurity-magazine.com/news/tech-support-scams-put-uk-users-at/
-
Turkey blocks Google, Microsoft and Dropbox to control the data leaks.
Following the release of 17GB worth of leaked government emails, Turkey blocked access to Google, Microsoft, and Dropbox services to suppress mass email leaks. The nationwide censorship attempt was launched on 8 October.
Analysis revealed that Google Drive and Dropbox services were issuing SSL errors, indicating that traffic was being intercepted at a national or ISP level. Around 57,623 emails from the Turkish government, dating as far back as 2000, were leaked. The hackers had threatened to leak the stolen data if the Turkish government failed to set free a number of leftist dissidents. Instead of complying with these demands, the government chose to ban news outlets and forced Twitter to suspend accounts circulating the leak.
Blocking sites this way has been a common approach of the Turkish government. Since this is not the first such incident, I think the government should start working on preventive controls to avoid these circumstances.
-
Insurer Warns of Drone Hacking Threat
The increasing number of drones, or unmanned aircraft systems (UAS), being used by the military and businesses could present a major physical cybersecurity threat, potentially even resulting in loss of life.
There are attendant risks, notably the prospect of hackers taking remote control of a drone, "causing a crash in the air or on the ground resulting in material damage and loss of life." The hacking term "spoofing" refers to taking over a UAS by hacking the radio signal and sending commands to the aircraft from another control station. There is also a risk of data loss from the UAS if a hacker manages to intercept the signal or hack the company gathering the data. Even though drone companies claim that the owners of a drone can be identified online, the threat remains.
Source: http://www.infosecurity-magazine.com/news/insurer-warns-of-drone-hacking/
-
iOS 10’s Safari Doesn’t Keep Private Browsing Private
The Safari browser in iOS 10 no longer offers the same level of privacy as before. Previously, the Suspend State was stored in a manner that would prevent information recovery, but iOS 10 changes that. In iOS 10, the Suspend State is designed to create a list within the web browser to allow easy switching back and forward between recently accessed pages in the currently open tabs. It is stored in a database, which allows the recovery of deleted records, and some experts have already demonstrated this experimentally.
This change makes web browsing much faster when the user goes backwards or forwards to recently accessed pages, but it seems that Apple chose user experience over user privacy.
Source: http://www.securityweek.com/ios-10s-safari-doesn%E2%80%99t-keep-private-browsing-private
-
The article I found this week involves the possibility of someone hacking a diabetic patient's insulin injector.
Ethical hackers have found that J&J's Animas OneTouch Ping insulin pump, which allows patients to push a button to inject the proper dose of insulin, can be hacked because the communication from the remote to the device isn't encrypted. The flaw would allow a hacker to inject insulin into the patient multiple times. Scary.
J&J has warned customers and offered a fix for the problem. The company also said the likelihood of the system being hacked is extremely low, but this is a vulnerability that must be fixed. The science behind it is great, especially for the elderly who may have problems with a syringe, but you would think simple encryption would be a no-brainer. It's crazy that J&J didn't think about this during development.
http://www.reuters.com/article/us-johnson-johnson-cyber-insulin-pumps-e-idUSKCN12411L
-
Card Data Stolen from eCommerce Sites Using Web Malware.
RiskIQ, a cloud-based security solutions provider, has been monitoring a campaign in which cybercriminals compromise ecommerce websites in an effort to steal payment card and other sensitive information provided by their customers. The method of attack, called "Magecart," involves threat actors injecting keyloggers and URLs directly into a website. RiskIQ identified more than 100 online shops from around the world hacked as part of the Magecart campaign.
JavaScript code injected by the hackers into these websites captures information entered by users into purchase forms by acting as a man-in-the-middle (MitM) between the victim and the checkout page. In some cases, the malware adds bogus form fields to the page in an effort to trick victims into handing over even more information. The harvested data is exfiltrated over HTTPS to a server controlled by the attacker.
By loading the keylogger from an external source instead of injecting it directly into the compromised website, attackers can easily update the malware without needing to re-infect the site.
http://www.securityweek.com/card-data-stolen-ecommerce-sites-using-web-malware
-
UK BANS APPLE WATCHES IN CABINET MEETINGS
The news I read talks about how Apple Watches have been banned from government cabinet meetings in the UK over concerns that Russian spies could use them as listening tools.
Russia has turned to hacking to gather intelligence and influence government activity. Prime Minister Theresa May imposed the new rules following several high-profile hacks that have been blamed on Russia. The fact that several cabinet ministers previously wore the Apple Watch raised concerns because "the Russians are trying to hack everything." Mobile phones have already been banned due to similar concerns.
I believe this is a good preventive control to mitigate the risk. The reason I think it makes sense to ban Apple Watches during cabinet meetings is that these meetings are confidential, and ministers wouldn't want to leak any sensitive information. An Apple Watch is like a mini computer: once it is hacked, it can be programmed to do whatever the attacker wants. It could record all the audio offline, and the next time the device connects to the internet, the audio could be uploaded to the attacker's server.
Source: http://www.infosecurity-magazine.com/news/uk-bans-apple-watches-in-cabinet/
-
Paul,
Thanks for sharing, I didn't know about the "Secret Conversation" feature. However, I don't think social media is a safe platform for sharing important information.
-
Attacks on iCloud accounts, especially celebrity accounts, have been on the rise. Hackers admit it is an easy hack that starts with finding the email address behind the iCloud account. Hackers pick a target and then search for likely email addresses, using Apple's account-creation page to test their guesses: when a new entry uses an address that is already registered, the page confirms that it is unavailable. Once they get a message saying the email is already in use, they are one step away from hacking. After that they attempt to crack the password or guess the user's details to answer security questions. The first step is entering the victim's birth date, which is commonly available on social networking sites. Answering security questions like "What is my pet's name?", "Where were you on Jan 1st 2010?", or "Who was your favorite teacher?" is a matter of social engineering.
To counter this, Apple must modify the sign-up process and the forgot-password mechanism to detect attackers while they are attempting to guess iCloud accounts.
http://www.businessinsider.com/how-hackers-get-into-your-apple-iCloud-account-2014-9
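As a rough illustration of the kind of sign-up fix being suggested, the sketch below (entirely hypothetical, not Apple's actual flow) returns the same message whether or not an address exists and throttles repeated guesses from one source, which blunts this kind of account enumeration.

```python
# Sketch of anti-enumeration behavior: uniform responses plus a simple rate limit.
# The registered set, rate limit, and messages are all invented for illustration.
import time
from collections import defaultdict

registered = {"alice@example.com"}          # hypothetical existing accounts
attempts = defaultdict(list)                # source IP -> request timestamps
RATE_LIMIT = 5                              # guesses allowed per window
WINDOW = 60                                 # seconds

def signup_check(email, source_ip):
    now = time.time()
    attempts[source_ip] = [t for t in attempts[source_ip] if now - t < WINDOW]
    if len(attempts[source_ip]) >= RATE_LIMIT:
        return "Too many requests, try again later."
    attempts[source_ip].append(now)
    if email not in registered:
        pass  # would actually create the account and send a verification email
    # Caller sees the same message either way; confirmation happens out of band.
    return "If this address can be used, a verification email has been sent."

print(signup_check("alice@example.com", "203.0.113.7"))
print(signup_check("bob@example.com", "203.0.113.7"))
```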
-
Bad news for Mac users!
Malware that targets webcams and microphones is now targeting Mac laptops. The malware taps into the live feeds from the Mac's built-in webcam and microphone to record you locally without detection.
Attackers use a malicious app that monitors the system for any outgoing feed from an existing webcam session, such as a Skype or FaceTime call.
The malware then piggybacks on the victim's webcam or microphone to secretly record both the audio and video of the session, without detection.
You should physically cover your webcam! -
“UK Bans Apple Watches in Cabinet Meetings” by Tara Seals, Infosecurity Magazine
The news I read talked about how, in the UK, Apple Watches have been banned from government cabinet meetings because of concerns that they could be used as listening tools by Russian spies. Many sources claimed that smart watches have become a major hacking concern, and one said "the Russians are trying to hack everything." Some said the intelligence community had more conviction in the presence of "Weapons of Mass Destruction" in pre-invasion Iraq than they have in clearly attributing who is really behind these cyber-attacks.
I think Apple has to respond to this, because it could influence not only the UK market but the entire global market, and Apple should harden the Apple Watch's software. However, iPhones are also portable devices and could also be turned into listening tools by hackers. Maybe Apple should build the same security protections into both iPhones and Apple Watches.
http://www.infosecurity-magazine.com/news/uk-bans-apple-watches-in-cabinet/
-
What Makes a Good Security Awareness Officer?
Sharing an article I found interesting about how communication skills are just as important as technical skills.
Communication is one of the most important soft skills a security awareness officer needs. Time and time again, the people with the strongest communication skills develop the outstanding awareness programs. The best awareness officers often have little to no security background; instead they worked in communications, marketing, public relations, or sales.
In contrast, the 2016 Security Awareness Report identified that over 80 percent of people involved in security awareness have technical backgrounds.
http://er.educause.edu/blogs/2016/10/what-makes-a-good-security-awareness-officer
-
Military Cyber Command of South Korea Suffers Embarrassing Hack
South Korea's military cyber command was hacked last month when officials discovered malicious code in its systems. Officials are not clear how the malicious code entered the system, but believe the target was a "vaccine routing server" used by the country's cyber command.
Kim Jin-pyo, a member of the parliament's national defense committee, stated that the probability of sensitive data being leaked or stolen is low because the targeted server was not connected to the military intranet.
North Korea is suspected of carrying out the attack, but investigators are still establishing the facts and will not officially blame anyone until the investigation is complete.
Fortunately, the attackers did not steal any data from the server, which had been secluded from the rest of the network, and the military's internet network did not experience any downtime due to the breach.
The server's task is to secure the computers the military uses for internet connectivity. Approximately 20,000 military computers are believed to be connected to the server. Officials are trying to find out how the malicious code entered the system.
-
“Government lawyers don’t understand the Internet. That’s a problem”
The article discusses the dearth of lawyers with a science or technical background and the effect this is having on prosecutions and the legal profession. It first chronicles a physics professor who was arrested for espionage and accused of working for China. Eventually the charges were dropped after it was revealed that prosecutors did not understand the actual contents of the material in question: he was simply collaborating with a colleague in China, and the Justice Department assumed it concerned sensitive research when it did not. Very few lawyers have an understanding of cybersecurity or any science, which makes prosecuting cases more difficult and leads to mistakes. More and more prosecutions, as well as civil lawsuits, involve technical information central to the issues of the case. As technology and science progress at faster rates, lawyers will have more trouble properly litigating and prosecuting cases.
-
White House Vows ‘Proportional’ Response for Russian DNC Hack
The precursor to this story is that emails from the Democratic National Committee, as well as other organizations, were hacked and leaked by unknown sources. The files were posted by WikiLeaks, DCLeaks.com, and Guccifer 2.0, who may also have been one of the hackers. The U.S. intelligence community stated it was highly confident the hacks were orchestrated by high-level Russian officials. White House press secretary Josh Earnest told the press that Obama will take a proportional response to the hacking. Proportional isn't very well defined in this case (the DNC doesn't have a Russian wing to hack back at). Obama still has several options at his disposal. More economic sanctions could be imposed, but they may hurt other countries that trade with Russia. There could be a diplomatic approach, but that jeopardizes the situation in Syria, where the two sides already aren't on the same page. Obama could try to prosecute the hackers themselves, but as seen with Snowden, we cannot extradite suspects from Russia to try them. The response could also be to send our own hackers after Russian officials or elections. As with anything proportional, any move could cause a continuing escalation, since two sides rarely see attacks as equal.
http://www.wsj.com/articles/white-house-vows-proportional-response-for-russian-dnc-hack-1476220192
-
Physically covering the webcam doesn't stop the microphone from recording, which often captures juicier details. Even if you have a Mac, you need to run antivirus and frequent scans. The article also mentions a third-party tool that monitors which programs try to access the webcam or mic. If you suspect you have an issue, don't start FaceTime or any other VoIP calls; because the malware piggybacks on existing sessions, it can't access the camera or mic unless you're actively using those features.
-
Laly – this is very concerning. I work in a "closed area" and am able to bring my laptop into the area (most of the time). With that said, my work computer has a webcam. You will see many employees put a sticky note or some kind of cover over their webcam. In fact, I have done that as well. I would guess that most people are doing this because of the recent news that you reported. Pretty crazy, but definitely not a surprise.
-
Brou – I bet in a situation like this, the US agency came in and took control of the monitoring. The security team has its own role and needs to continue to improve its work; another high-volume task like this would not help with the roles already assigned to the team. Considering the team suffered a terrible breach in recent history, I think it is probably wise to let the agency that is forcing this monitor the emails itself. Also, if the security team were monitoring the emails, many employees would need a government clearance that they probably do not already have.
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
-
David Lanter wrote a new post on the site ITACS 5206 8 years, 1 month ago
-
The article I read was about how the FBI has discovered that hackers are trying to break into voter registration sites in 12 or more states. FBI investigators believe that Russia is behind the attacks. The FBI director, James Comey, said that there has been a lot of scanning activity, which is a step that leads to potential intrusion and indicates "bad actors." Although most of the attacks have not been successful (only two succeeded, in Illinois and Arizona), the FBI director is telling states to make sure they are on top of their voter registration systems. In no case was any information changed, and apparently no voting systems were at risk. Eighteen states have requested cyber assistance for their voting systems from the Department of Homeland Security.
-
http://www.technewsworld.com/story/83845.html
This article explains Apple's latest patch, which addressed the iOS zero-day exploit chain known as Trident. The spyware implanted through it appears to be NSO's Pegasus product, a highly advanced tool that makes use of zero-day flaws along with obfuscation, encryption, and kernel-level exploitation. Pegasus can access the iPhone's camera and microphone to overhear activity. It can record the user's calls over WhatsApp and Viber, observe messages sent in mobile chat apps, and track the user's movements.
According to Yair Amit, CTO of Skycure, “Pegasus clearly shows the dangers of mobile devices [that] can be transformed into ideal tracking devices”.
Subsequently, the author suggests that although Pegasus is a very sophisticated tool used to target specific people, operating the spyware requires only a minimal technical background to penetrate iOS and Android.
-
The article I found is about Apple sharing its customers’ personal data with the police when required.
In fact, one may think that conversations with friends over iMessage are safe due to end-to-end encryption (a system of communication where only the communicating users can read the messages). However, that is not the whole story. Apple keeps a log of which phone numbers you typed into your iPhone to start a message conversation, along with the date and time when you entered those numbers and your IP address, which could be used to identify your location. Surprisingly, this goes against Apple's 2013 statement that the company "do[es] not store data related to customers' location."
iCloud backup is even worse because it saves copies of all your messages, photographs, and every other important piece of data stored on your device. You may think that you are in control, but the truth is that this information is encrypted on iCloud using a key controlled by Apple, not by you.
Do you think it is OK for Apple to not only save our personal data, but also share that information with the police?
-
Yahoo Mobile Mail Wide Open Even After Password Reset
Yahoo announced that data for at least 500 million Yahoo accounts was stolen from the company in 2014. Trend Micro Zero Day Initiative (ZDI) researchers are warning that a password reset still leaves mobile mail wide open to criminals. ZDI's Simon Zuckerbraun said that he received a notification that his account was included in the breach. Like many, he logged into his account and changed his password. He then opened his iPhone mail application, since he had configured the app to use his Yahoo account. He expected to be prompted for his new password and was more than a little surprised to find it was not necessary. Even though he had changed the password associated with his Yahoo account, the phone was still connected.
Many users canceled their Yahoo accounts after this breach. Personally, I don't use my Yahoo account very often, but I still changed my password. The article was saying that people who connected their Yahoo account on a mobile phone still have a chance of being attacked further. There have been many data breach crises in history; Yahoo should react immediately and take actions that can save the organization's reputation.
Link: http://www.infosecurity-magazine.com/news/yahoo-mobile-mail-wide-open/
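The underlying gap ZDI describes is that changing the password did not invalidate the credentials the phone was already holding. Here is a minimal sketch of the expected behavior (the names are invented for illustration, not Yahoo's real architecture): every issued token is tied to a per-account credential version, and a password change bumps the version so older tokens stop working.

```python
# Sketch of session/token revocation on password change. Class and field names
# are assumptions made for illustration only.
import secrets

class AccountSessions:
    def __init__(self, password):
        self.password = password
        self.credential_version = 1
        self.tokens = {}  # token -> credential version at issue time

    def issue_token(self):
        token = secrets.token_hex(16)
        self.tokens[token] = self.credential_version
        return token

    def change_password(self, new_password):
        self.password = new_password
        self.credential_version += 1  # invalidates everything issued before

    def is_valid(self, token):
        return self.tokens.get(token) == self.credential_version

acct = AccountSessions("old-password")
phone_token = acct.issue_token()
print(acct.is_valid(phone_token))   # True -- phone mail app still works
acct.change_password("new-password")
print(acct.is_valid(phone_token))   # False -- the phone must re-authenticate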
-
Article: Facebook Ordered to Stop Collecting Data on WhatsApp Users in Germany
According to the New York Times article, a German watchdog has ordered Facebook to stop collecting data on WhatsApp users. WhatsApp is an instant messaging application for phones that works cross-platform and uses the internet to send and receive messages. The application has around 1 billion users and the company is now owned by Facebook. According to the article, the company at first told users that it would not share data with its new parent company; however, that has changed, and it recently announced that it will now share information from the app, which caused many to believe that their "digital privacy could be at risk." German regulators are looking to protect their citizens by requiring WhatsApp to stop collecting and storing data on users in Germany and to delete any information already stored. In response, Facebook has stated that it has not violated any of Europe's privacy rules and will aid regulators in addressing their concerns. While no substantial actions have been taken yet, this conversation between the German regulators and Facebook could turn into something much bigger as watchdogs aim to protect the privacy of users.
Source:
http://arstechnica.com/tech-policy/2016/09/facebook-germany-whatsapp-data-delete-order/ -
A Syrian national sympathetic to Syrian President Bashar Al-Assad's government has pleaded guilty to federal charges for his role in an extortion scheme that targeted US media outlets, the US government, and foreign governments.
In 2011, the group targeted multiple entities including The Associated Press, Reuters, Microsoft, Harvard University, CNN, National Public Radio, and Human Rights Watch, among others. The group also reportedly targeted the computer systems and employees of the Executive Office of the President, but was unsuccessful.
In April 2013, the hacker group sent a fake tweet from The Associated Press' official Twitter account claiming that a bomb had exploded at the White House and injured President Barack Obama. Within minutes, the message caused the Dow Jones Industrial Average to plunge over 100 points before it was confirmed to be a hoax.
http://www.ibtimes.co.uk/pro-assad-syrian-electronic-army-hacker-pleads-guilty-us-court-1583919
-
Johnson concerned about Russia meddling in election
Republican Sen. Ron Johnson chairs the Senate Homeland Security & Governmental Affairs Committee. He believes the Russians are capable of "meddling" with the presidential election process. The Russians are said to be responsible for hacking into state voter registration databases. Each state has a different system for the election process. Sen. Johnson believes the goal "… is to de-legitimize the election."
The presidency is the highest office in our land. Do we want to continue down this path?
We need to improve and implement a better EA system for each state. Controls should be implemented and monitored to reduce the chances of rigged elections. These are our leaders, creating and deciding on our laws. It is important to maintain a fair electoral process to achieve democracy.
-
https://www.entrepreneur.com/article/282908
This article talks about the various ways cybercrime can occur and how scammers are stealing money from people. Some of the methods discussed are phishing, pretending to offer a great deal, pretending to be a friend on Facebook, etc. Once the scammers are in the computer or network, they can spread and cause havoc. Prevention comes down to using common sense: install anti-virus software, don't open emails or accept friend requests from people you don't know, and alert the banks and authorities if you fall victim. It's not a complete list, but the article does include some new methods, which is alarming.
-
That is bad news. Thanks for sharing. I believe hackers can attack iOS and launch zero-day attacks because they are able to jailbreak the phones. Once hackers get root access, they can easily bypass the security in the OS. With this they are able to run shellcode, find the kernel's base address, and execute code in the kernel to launch the attack.
-
Great article, Paul. This points out that privacy protections change depending on government rules. I think a WhatsApp user in Germany or anywhere else in the world should be able to restrict the parent company from having access to sensitive data.
I know that WhatsApp only started encrypting messages in April this year; until then it was only clear text. So the earlier data, up to April 2016, if stored, must also be protected. -
This article relates how hackers are impersonating the IRS and sending scam emails to victims asking them to pay balances related to the Affordable Care Act health coverage requirements. Some people have even received letters in their mailboxes, and others have received phone calls from the scammers.
"Since October 2013, the Treasury Inspector General for Tax Administration said it has received more than 1.7 million complaints from people saying they have received phone calls from fraudsters impersonating IRS agents, and more than 8,800 individuals have paid more than $47 million to these scammers".
I found it interesting because I have already received a similar email asking me to pay the remainder of my bill. In general, agencies do not operate like that: if you owe them money, they usually send you a bill in your mailbox. Since scammers can do the same thing, you have to make sure it is a real bill from the agency. -
Bad Security Habits Persist Despite Rising Awareness
The article mentions that organizations undermine their own efforts by failing to enforce well-known security best practices around potential vulnerabilities associated with privileged accounts, third-party vendor access, and data stored in the cloud. While the huge number of cybersecurity incidents is helping to raise awareness of security best practice, many organizations persist with bad habits that leave them exposed to hackers and data breaches.
The percentage of organizations taking each action is as follows:
Deployment of malware protection – 25%
Endpoint security – 24%
Security analytics – 16%
Even though cyber attacks keep happening, many organizations still have weak control over their password management: 40% of organizations admitted they store privileged and admin passwords in a Word document or spreadsheet, and nearly half did a poor job of securing third-party remote access to their systems.
http://www.infosecurity-magazine.com/news/bad-security-habits-persist-despite/
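For the 40% keeping admin passwords in spreadsheets, even a small step up helps. Below is a rough sketch of keeping a privileged credential encrypted at rest rather than in plain text; it assumes the third-party "cryptography" package, uses placeholder file names, and is not a substitute for a proper vault or privileged access management tool.

```python
# Sketch only: encrypt a privileged password at rest with Fernet.
# In practice the key itself must live somewhere better than the same disk
# (an HSM, OS keystore, or dedicated secrets vault).
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in practice: loaded from a protected store
cipher = Fernet(key)

admin_password = b"correct horse battery staple"   # illustrative secret
token = cipher.encrypt(admin_password)

with open("admin_password.enc", "wb") as f:        # placeholder file name
    f.write(token)

with open("admin_password.enc", "rb") as f:
    recovered = cipher.decrypt(f.read())
print(recovered == admin_password)     # True
```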
-
Newsweek recently published a breaking story on Donald Trump's money trail, which went around America's embargo on spending money in Cuba. Soon after the story went national, the site was hit by a massive DDoS. Initially it could have been unexpected heavy traffic, but the pattern soon made it clear it was a DDoS. Newsweek's IT chief found that the main IP addresses were Russian-based, although he doesn't believe that proves anything about the nature of the attack.
Suspected state-sponsored Russian threats have been a leading story in the 2016 election. The DNC has stated that it believes Russia was behind the DNC attack. This current attack may be from the same group, or it may simply be using Russia as cover: depending on where you compromise computers, you can launch a DDoS attack from any country.
http://talkingpointsmemo.com/livewire/dos-hack-newsweek-trump-cuba-embargo-story
-
Hi Paul,
Interesting article. It's not just Germany: invasion of privacy by Facebook or Google happens everywhere nowadays because they are so deeply involved in our daily lives. We see ads on Facebook or Google based on our search history or WhatsApp conversations, and they can earn big money from those ads if they control the data on users' behavior. I have used WhatsApp for many years, and I think WhatsApp/Facebook should let users choose whether or not they want to share their information.
-
Thanks for sharing the news, Magaly. The jailbreak is the key here: when users choose to jailbreak their phones, they have already accepted the risk. Cydia apps are not authorized by Apple and their safety is always in doubt. A jailbroken iPhone is like a house without a front door; the security controls built into the device can easily be bypassed.
-
Brazilian Hackers are using RDP to spread Xpan Ransomware:
Brazilian cybercriminals are using ransomware as a new means to attack local companies and hospitals. Xpan is ransomware developed by an organized gang that uses targeted attacks via Remote Desktop Protocol (RDP) to infect systems.
The ransomware checks the system's default language, sets a registry key, obtains the computer name from the registry, and deletes any proxy settings defined in the system. During execution, Xpan logs all actions to the console and clears it when the process is completed. When the user then clicks on any file, it informs the user that the system has been encrypted with RSA-2048 encryption; it encrypts all files in the system except .exe and .dll files. It also disables database services, disables anti-virus products, and begins installing additional malware.
Kaspersky managed to break the malware's encryption and was able to successfully help a hospital in Brazil recover from an Xpan attack.
Source: http://www.securityweek.com/brazilian-hackers-using-rdp-spread-xpan-ransomware
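Since RDP is the entry point here, a quick self-audit is simply checking which hosts have TCP 3389 reachable at all; exposed RDP with weak credentials is what these attackers brute-force. A minimal sketch follows (the host list is a placeholder).

```python
# Sketch only: test whether RDP's default port (TCP 3389) is reachable on a host.
import socket

def rdp_reachable(host, port=3389, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["192.0.2.10", "192.0.2.11"]:   # hypothetical internal hosts
    status = "EXPOSED" if rdp_reachable(host) else "closed/filtered"
    print(f"{host}:3389 -> {status}")
```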
-
Project Shield Has Krebs on Security’s Back
Last month a distributed denial of service (DDoS) attack hit Brian Krebs's website (krebsonsecurity.com); the botnet behind it was dubbed Mirai, and it was one of the largest DDoS attacks in history. It delivered over 620 gigabits per second of "junk" traffic, making the site unresponsive; the site had to be taken down until Krebs was able to move it under Google's Project Shield infrastructure.
The attack used an army of about 300,000 bots built from Internet of Things (IoT) devices with default usernames and passwords. Krebs stated that it was unlikely to be a state-sponsored attack, and posts on Hackforums implied that his site was taken down because of the increased security scrutiny he brought to IoT devices. The hacker, nicknamed Anna-senpai, was getting out of IoT-based botnets because ISPs were tightening down the hatches.
Read More:
http://krebsonsecurity.com/
http://www.technewsworld.com/story/83932.html -
A medical office in Texas was hit by burglars who stole five laptops. One of the laptops contained confidential patient data that was not encrypted. Data such as medical record numbers, diagnoses, admission and discharge details, dates of birth, addresses, SSNs, and Medicare and Medicaid numbers is at stake. The StartCare Health System has now taken steps to improve security in the office as well as encrypting its computer systems. There has been no report of misuse of the medical data so far.
A similar lapse in handling sensitive data led to another potential breach in Texas. An employee at Premier Physicians Group left patient records at his previous home, which was later taken over by a bank that reported PHI records for 1,326 patients had been left unattended. Although the data has not been misused, security policies and records checks have been reviewed to ensure safety.
Source: http://healthitsecurity.com/news/stolen-patient-records-in-oh-lead-to-potential-phi-breach
-
I agree with Said and Yu Ming.
Yes, I have read about Apple refusing to unlock some iPhones. The question we should look at is what would happen to the data once it is shared. Attacks are always organized and well planned, so the security program should be equally well organized. If an attack is detected before launch, or while it is in progress, a lot of damage can be avoided. -
I found this subject interesting because recently I saw my friends posting their boarding passes on Instagram. The article explains that posting a photo of your boarding pass on social media can put you at risk.
A lot of information, including your full name, flight number, booking details, and frequent flyer number, can be extracted from the barcodes on this document.
This information can be used to learn about your future flights, so your seat could be changed, your flight could be canceled, or all your future frequent flyer flights could be canceled. Don't post pictures of it online!
http://www.businessinsider.com.au/barcodes-on-boarding-passes-2015-10
http://www.businessinsider.com/uploading-boarding-pass-photos-bad-2016-9 -
Yu,
That YouTube video is amazing.
-
– Just 26% said notifying the CEO is among their top priorities, ahead of the rest of the staff (25%) and customers (18%).
That’s crazy. The article reeks of “this is not really a priority for the business leaders”.
-
Security Design: Stop Trying to Fix the User
https://www.schneier.com/blog/archives/2016/10/security_design.html
I think that Mr. Schneier wouldn't necessarily absolve end-users of all responsibility, but his point on security professionals laying too much of a burden on users is well taken.
The internet has given us tools that make life easier. Ease of use is at the core of how we’ve designed the technology that has revolutionized our world. For us, as practicing or aspiring security professionals, there is a need to understand where the line is when it comes to asking too much of users.
-
DressCode Malware Infects 400 Apps in Google Play
DressCode malware infected a total of 40 apps in Google Play and a total of 400 apps via third-party app stores, though the actual number may be much higher. Over 3,000 apps distributed through Android mobile markets have been infected with this Trojan.
Once the infected app is installed on a victim's device, the malware connects to the command and control (C&C) server, which in newer versions is a domain (it was a hardcoded IP address before). The device is then turned into a proxy that can relay traffic between the attacker and the internal servers the device is connected to.
“A background service creates a Transmission Control Protocol (TCP) socket that connects the compromised device with the C&C server and sends a ‘HELLO’ string. Once the C&C server replies, a ‘CREATE, , ‘ command prompts the device to establish a TCP connection between it and the attacker. This allows the device to receive commands from the attacker via the SOCKS protocol.
As soon as the SOCKS proxy has been set up, the device can forward commands from the attacker to other servers on the same LAN, allowing the attacker to connect to internal servers located behind the router. Through this, the attacker can either bypass the NAT device to attack the internal server or download sensitive data using the infected device as a springboard. With the growth of Bring Your Own Device (BYOD) programs, more enterprises are exposing themselves to this kind of risk.
Because of the installed SOCKS proxy, the device can also be abused as a bot if the attacker decides to ensnare it in a botnet, and it can be used in various types of attacks, including distributed denial-of-service (DDoS) attacks or spam email campaigns. The attacker could generate revenue in other ways as well, such as creating fake traffic or disguising ad clicks.
The proxy can also be used to reach connected cameras and other devices on the same network; because attackers can discover the IP addresses of these devices by exploiting weak router credentials or other vulnerabilities, it opens the door for other types of attacks as well.
http://www.securityweek.com/dresscode-malware-infects-400-apps-google-play
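To make the proxy idea concrete, here is a generic TCP relay sketch, not DressCode's actual code: anything that can reach the relay effectively reaches the internal host it forwards to, which is exactly why a proxied BYOD phone behind the corporate NAT is so dangerous. The addresses and port are placeholders.

```python
# Generic illustration of a TCP relay (the building block behind proxy-style
# pivoting). INTERNAL_TARGET and the listening port are invented examples.
import socket
import threading

INTERNAL_TARGET = ("10.0.0.5", 80)   # hypothetical server behind the NAT/router

def pipe(src, dst):
    # Copy bytes one way until either side closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def serve(listen_port=9050):
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", listen_port))
    listener.listen(5)
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(INTERNAL_TARGET)
        # Whoever reaches this relay now talks to the internal host directly.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# serve()  # illustration only -- do not expose something like this
```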
-
Absolutely, Andres. While writing the news post, I was thinking the exact same thing. In this week alone, we have a number of articles that point to Russia's advanced offensive cyber capabilities. And these are just the instances that have come to light – can you imagine the number of incidents that have gone undetected or unreported?
-
World’s largest 1 Tbps DDoS Attack launched from 152,000 hacked Smart Devices
If you own a smart device like a smart TV or thermostat, there is a good possibility that it was part of a botnet used to launch the biggest DDoS attack yet, with peaks of over 1 Tbps of traffic.
The victim was OVH, a hosting provider in France. The Internet of Things (IoT) is the next big thing and is growing at a great pace, but it also gives attackers a lot of entry points to affect consumers in various ways. According to OVH's founder, the DDoS attack was carried out through a network of over 152,000 IoT devices, which included CCTV cameras and personal video recorders. Poorly configured IoT devices are low-hanging fruit for hackers to carry out attacks of this unprecedented size.
The problem is that manufacturers keep reusing the same set of hard-coded SSH cryptographic keys, which leaves millions of devices open to hijacking. To make things worse, many of these vulnerable IoT devices have no security updates coming.
The URL below links to a different news article that contains the source code for the IoT botnet; be cautious.
http://thehackernews.com/2016/10/mirai-source-code-iot-botnet.html
Source:
http://thehackernews.com/2016/09/ddos-attack-iot.html -
Article: “Hack of Half a Billion Records Takes Shine Off Yahoo’s Data Trove”
Yahoo on Thursday disclosed that a data breach in late 2014 resulted in the theft of information from at least 500 million customer accounts. It appears that state-sponsored hackers carried out the attack. Account information compromised includes names, email addresses, telephone numbers, dates of birth, hashed passwords, and encrypted or unencrypted security questions and answers.
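The detail that the stolen passwords were hashed matters: with salted, slow hashing, each password has to be brute-forced individually rather than read off in plaintext. A minimal sketch of what that looks like (the parameters and function names here are illustrative, not Yahoo's actual scheme):

```python
# Sketch of salted, iterated password hashing and constant-time verification.
import hashlib
import hmac
import os

def hash_password(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("hunter2")
print(verify("hunter2", salt, digest), verify("wrong", salt, digest))
```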
Yahoo encouraged its users to take precautions, such as changing passwords and security questions, to protect themselves from malicious activity. Yahoo also introduced the Yahoo Account Key last year, which is similar to the two-factor authentication systems used by some online services.
Customers who are affected by data breaches suffer a significant loss of trust, and this is particularly true of men. -
I read the article "Wi-Fi Flaw Exposes Android Devices to Attacks." According to the article, a flaw in the Wi-Fi technology used in the Android OS and many other products allows malicious actors to escalate privileges and cause a denial-of-service (DoS) condition on affected devices. The vulnerability, patched with this month's Android security updates, affects versions 4.4.4, 5.0.2, 6.0, and 6.0.1 of Google's mobile operating system. Android users on these versions could therefore let attackers into the system through malware from the Google Play store and allow them to monitor the data flowing through the device. If such a malicious app causes the Wi-Fi component to malfunction, the issue can only be addressed by resetting the device to its factory settings.
Source: http://www.securityweek.com/wi-fi-flaw-exposes-android-devices-attacks
-
Thank you for sharing; this is interesting news to read. I had not heard of this spyware before, but it seems similar to PC malware that can access the PC's camera and monitor the data flowing in and out. Now Trident/Pegasus can even access a smart phone's camera and microphone, which significantly impacts users' privacy.
-
Hacking Elections Is Easy, Study Finds
Beyond the leaked emails from Hillary Clinton's campaign, two state election databases have been breached, and voter registration databases from all 50 states are being hawked online. This data could be used for all kinds of mischief; for example, an attacker could sour a candidate's supporters by sending bogus robocalls, supposedly originating from the candidate, at 3 a.m.
Some experts claimed that, while the systems do have vulnerabilities and it might be possible to generate noise intended to undermine the credibility of the election, it is impossible to change the outcome of an election.
But for me, it's no longer a question of whether hackers will influence the 2016 elections in the United States, only of how much they'll be able to sway them.
-
Android Malware Improves Resilience
There have been numerous reports about malware infecting apps in the Google Play store. One possible reason for this is that Android malware has improved at both avoiding detection and maintaining its presence on an infected device even after being discovered. The most common technique is packing; packed Android malware has increased from 10% to 25% of samples in nine months. Another trending technique is MultiDex applications, where a program ships two DEX files to deliver the malware. Android apps typically have a single DEX file, and detection focuses on that single DEX file, allowing MultiDex applications to evade detection.
Malware is also becoming harder to remove. Malware that gains root privileges on the infected device is difficult to remove because it uses a new technique to further lock in the installation: it leverages Android's Linux roots by using the chattr Linux command, which makes files immutable.
http://www.securityweek.com/android-malware-improves-resilience
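On the chattr point, remediation on a rooted device usually means finding and clearing the immutable flag before the file can be deleted. A rough sketch is below (the path is a placeholder; it assumes lsattr/chattr are present on the device and that the filesystem supports the attribute).

```python
# Sketch only: detect and clear the Linux immutable attribute on a suspect file.
import subprocess

def is_immutable(path):
    out = subprocess.run(["lsattr", path], capture_output=True, text=True)
    if out.returncode != 0:
        return False
    flags = out.stdout.split()[0] if out.stdout else ""
    return "i" in flags          # 'i' = immutable flag set via `chattr +i`

def clear_immutable(path):
    subprocess.run(["chattr", "-i", path], check=True)

suspect = "/system/app/suspicious.apk"       # hypothetical malware artifact
if is_immutable(suspect):
    clear_immutable(suspect)
    print(f"Cleared immutable flag on {suspect}; it can now be removed.")
```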
-
While I do understand what the article is trying to explain, I do not agree with it. Yes, IT was created with the purpose of making human life easier. Ease of use is a top priority; however, it comes at the cost of little to no security. This would not be an issue if we lived in a peaceful environment where nobody had any malicious intent. As we know, that is not the case. Malicious users took advantage of the lack of security in IT, and we the innocent users have to react by educating ourselves on how to avoid being attacked. Essentially, security awareness is a response to malicious attackers.
A comparable example to the idea of this article is blaming a rape victim for getting raped.
-
OpenJPEG Flaw Allows Code Execution via Malicious Image Files
For those of you who are not sure what OpenJPEG is, it is an open-source library designed for encoding and decoding JPEG2000 images, a format that is often used to embed image files inside PDF documents. OpenJPEG is used by several popular PDF readers, including PDFium, the default PDF viewer in Google Chrome.
An update released last week for the OpenJPEG library addresses several bugs and important security issues, including a flaw that can be exploited to execute arbitrary code using specially crafted image files. An attacker attaches a malicious file to an email, or uploads it to a file hosting service such as Dropbox or Google Drive and sends the link to the victim. The vulnerability allows the attacker to execute arbitrary code on the targeted user's system once the victim opens a specially crafted JPEG 2000 image or a PDF document containing such a file.
This is something we should take into consideration; the best defense against this kind of infection is not to open email attachments you are not expecting.
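One crude triage idea, not a real parser: JPEG 2000 streams inside a PDF are declared with the /JPXDecode filter, so flagging documents that contain that token at least tells you which attachments deserve extra caution (for example, opening only in a fully patched or sandboxed viewer). Absence of the token proves nothing on its own, and the file name below is hypothetical.

```python
# Sketch only: flag PDFs that declare JPEG 2000 (JPX) content.
def contains_jpeg2000(pdf_path):
    with open(pdf_path, "rb") as f:
        data = f.read()
    return b"/JPXDecode" in data      # PDF filter name for JPEG 2000 streams

# print(contains_jpeg2000("incoming_attachment.pdf"))  # hypothetical file
```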
Source:
http://www.securityweek.com/openjpeg-flaw-allows-code-execution-malicious-image-files -
I do not think it's okay for Apple to share your personal data. Legal and privacy considerations should go hand in hand. Unfortunately, many people fail to realize that what Apple is doing with your data is legal. Below is an excerpt from Apple's user agreement:
“b. Consent to Use of Data: You agree that Application Provider may collect and use technical data and related information, including but not limited to technical information about Your device, system and application software, and peripherals, that is gathered periodically to facilitate the provision of software updates, product support and other services to You (if any) related to the Licensed Application. Application Provider may use this information, as long as it is in a form that does not personally identify You, to improve its products or to provide services or technologies to You.”
The way it's worded, with "related information" and "including but not limited to," arguably allows Apple to collect just about anything from your devices. You essentially gave away your privacy by using their devices.
-
“THINK TANK REPORT WARNS OF CYBERTERRORISM IN SPACE”
The article discusses the emerging cyber security vulnerabilities and threats involving satellites. More and more economic activity and productivity depends on satellites, including functions such as GPS and communication. As more satellites enter earth's orbit, the potential for debris from one satellite to strike another increases. Once a satellite is destroyed in orbit, the debris can continue in orbit and risk impact with other satellites. This threat is known as the Kessler Effect and is not new; however, the growing threat of cyber attacks increases this risk, along with others. Satellites were designed before cyber security became a concern, and as a result many were not built to be secure, with many containing backdoors. Hackers could hypothetically hack a satellite and alter its course to crash into another satellite, starting the Kessler Effect. Another possibility is disrupting communications or any other function provided by satellites. Or hackers could simply hold a satellite for ransom, similar to the ransomware currently used by cyber criminals. However, the real long-term danger is space debris, because it continues to pose risks far into the future.
-
Sounds similar to another IRS scam where victims would receive a phone call from someone claiming to be from the IRS alleging back taxes owed. Victims would be threatened with prison and would then receive a call from another scammer impersonating a police officer. Many people ended up immediately wiring money to an account using MoneyGram. It seems the IRS is an effective lure for scammers targeting vulnerable people.
-
Definitely a serious issue, and a cohesive national policy is needed to secure election infrastructure. However, they don't really need to hack election results to delegitimize them. Russia has had success in Europe promoting far-right parties that further destabilize an already delicate social climate. Or they could continue to release damaging personal emails and documents from our politicians. Either could plant doubts in the public without actually targeting voting infrastructure.
-
Definitely a huge security issue, and I don't think people are paying enough attention to it yet. Anything that can connect to the internet can be hacked. It's a similar problem to the one many routers have: they are not designed and manufactured with security in mind, nor are they consistently updated with security patches. As much as I like the idea of a connected home, I don't feel comfortable at the moment connecting my appliances to the internet.
-
Loi,
We all know that nobody reads those user agreements. And for people who actually read and accept them, I don’t think that they have the right to complain about anything. Plus, I think that using Apple devices is a personal choice. If someone worries about his/her privacy, he/she can always use pay phones.
-
Malware on Android may be able to become admin or root more easily than the user can, it seems. In desktop OSes, who the admin is, is usually clear and defined. Android has been improving this, but I can't recall a dedicated admin-tools area in the settings. Settings like sideloading apps or app permissions are usually per account. The tablets do have some admin settings, but I haven't worked with an updated tablet in a while. Having a procedure to check who controls the device, be it a user account or malware, would help in busting malware.
-
This is interesting. I know that cyber law is fairly new. I have read that police need very specific warrants when looking at hardware, software, logs, etc. For this specific case, I believe they would have to be looking for certain information during a certain time range. For example, they could look through text messages from 10/4/16 in regard to the murder of John Doe. If they found something on the suspect that related him to the sale of illegal drugs, they could not use that or hold it against the suspect until they got another warrant.
Regardless, it's an interesting topic. I think that Apple should make certain information available to the police, because the more evidence the better. I look at it like this: it could help an innocent person be released, and if someone committed a crime, the more evidence against them, the better. I am OK with that, as long as the police have to sign confidentiality agreements and cannot release certain information (like texts) to the press.
-
It is frustrating to me that several parts of our government systems keep getting hacked. We have some of the best infrastructure in the world, yet we can't protect it. It is also crazy to hear politicians and government officials say things like "government hacking is fair game." Some of that information needs to be kept secret for national security. It sounds like we need to adjust some of our government spending and invest in cyber security for both our government and US companies. It is good to hear those same officials say that most of the effort is spent on protecting US companies' cyber security, but the continuing lack of government cyber security makes no sense!
-
Although more and more companies are investing in cyber security, cyber controls, best practices, etc., sometimes companies have to learn their lesson before they invest. I think big hacks like the Sony hack last year show companies that the investment is well worth it compared to the loss they may suffer from an attack. It is unfortunate that companies sometimes have to learn the hard way, but I guess that is just life.
Companies need to bring awareness to their employees, teach them about cyber best practices, and put in place controls to make sure that these best practices are being followed.
-
David Lanter wrote a new post on the site ITACS 5206 8 years, 1 month ago
Presentation: Slides
Video found here: Video
Quiz w/solutions: Quiz w/solutions
-
David Lanter wrote a new post on the site ITACS 5206 8 years, 1 month ago
What are the issues of security that are unique to online banking in India?
-
What are the issues of security that are unique to online banking in India?
I think Neil did a great job explaining the issues pertaining to e-banking in India. Let me turn the focus a bit toward the issues faced specifically by mobile banking in India.
Salvi, the chief information security officer, had a mandate to ensure that HDFC Bank's online banking platform was secure from online risks. Online banking had two parts: net banking and mobile banking. "In India, the number of mobile subscribers had grown steadily from 60.85 million in 2005–2006 to 98.77 million subscribers in 2006–2007 to 165.11 million for the year ending March 2008." Mobile banking was an obvious successor but still a new concept in India, and given the steady growth of mobile subscribers it was certain that users would increasingly carry out bank transactions on mobile phones. Thus, it became imperative for Salvi to make the mobile banking platform an integral part of strengthening online banking at HDFC Bank.
The security issues unique to online banking in India include authentication: PIN authentication is used in mobile banking, which is an old method and carries risks such as identity theft. Users are still uncomfortable using mobile banking because they do not trust that the security mechanisms provided by the banks can prevent attacks. For instance, since a mobile phone is a small device, if it is stolen the attacker may recover the user's password from log files. Users also have a habit of storing their passwords as drafts in their phones' text applications.
Most users see privacy as a critical issue. It is very important for banks to educate their users on this issue and increase customer awareness, and it is imperative that telecommunication providers such as Reliance Communications, Airtel, and Vodafone formulate a joint security policy with banks to provide a sense of assurance to their users.
Further, there are also security issues if devices are jailbroken or rooted. Thus, it becomes important for banks to ensure that their application prevents attackers from accessing the app in such a case.
In the case of internet banking, computer systems are capable of processing complex encryption programs, but end-to-end security is still a concern for mobile banking, since applying a sophisticated cryptographic system requires a mobile phone with high computational capability.
Work Cited:
-
Work Cited
Bose, Indranil. HDFC Bank: Securing Online Banking.
-
At the time of the case, HDFC Bank had just begun to set up online banking for its customers. While this brings with it all the issues attached to securing any online banking, some issues arise only because the bank is based in India.
One security issue unique to India is that the lack of internet connectivity means each connection is actually shared by multiple people. The overall penetration rate was listed as 0.2%, as only 2.5 million people subscribed to the internet out of 1.1 billion. The estimated number of actual internet users at the time was 38.5 million, with an expectation of reaching 100 million in 2008. Shared connections are a security nightmare because they open up many types of attacks on customers: replay attacks, malware already on the computer, or redirects to a pharming site could all occur on a communal computer.
To even operate as a bank in India, you have to follow the guidelines of the RBI (the Reserve Bank of India). The guidelines state that the security policy must be approved by the company's board of directors; access controls such as IDs and biometrics must be used; a firewall is a must; banks must test risks and vulnerabilities every year; the servers used for storing information need physical protections to prevent unwanted tampering; and banks must document all up-to-date security practices so they can be reviewed.
Phishing was a new concept to Indian banks in 2006; HDFC was only the fourth bank in India to realize it was being targeted. Those who launch phishing attacks prefer large targets, so they waited until Indian banks were large enough to attack. It also hurts that India's new customers, who are new to the internet, are not well informed about the dangers of phishing. With 1.28 million online customers, it can look very bad when they are vulnerable to phishing attacks, as customers may still hold the bank at fault.
When outsourcing for security help, the companies with good track records at the time were in America, as they had been dealing with online banking issues for longer. HDFC was considering hiring RSA Security to run a secure server. This creates several security issues, since you have to constantly check with your contacts at the other company to confirm that security is being maintained. Another factor working against outsourcing to the experts was that transcontinental links at the time were not completely stable.
Overall, India proves a tricky environment to navigate when launching an online banking system. These obstacles, with proper policies and infrastructure, should all be manageable in the end.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years, 1 month ago
How should Salvi address the issues before him?
-
David Lanter wrote a new post on the site ITACS 5206 8 years, 1 month ago
What are the challenges faced by Salvi?
-
Salvi, as CISO of an Indian bank, faced the major challenge that a large number of its customers were offline-based, and in order to bring customers to online banking the IS protocols could not be so rigorous as to cause them inconvenience. Although HDFC Bank was not pursuing market share as a business objective in its own right, securing regular annual increases in new customer accounts was crucial to business growth, and ensuring that existing customers stayed with the bank was equally important. Thus Salvi had to balance both security and customer convenience.
Second, he also faced the problem of where to locate the servers. The proposed IS infrastructure at HDFC Bank would include two types of servers: authentication servers (housing the software that would conduct the due diligence) and online servers (facilitating the actual transfer of money from one account to another).
The bank was in talks with RSA Security, but the dilemma was whether the online servers should be located at HDFC's data centers and the authentication servers at RSA's premises. The latter were outside of India, and maintaining servers that far away would present yet another potential point of systemic failure. -
3. What are the challenges faced by Salvi?
Indian customers had long-standing trust in offline banking, and as internet use rose, online banking systems attracted customers with their convenience. But given legacy systems and paper-based processes, it was not easy to move banking online, let alone securely. Maintaining security while giving customers the convenience of online banking, and preserving the trust of the Indian customer, was the biggest problem faced by Salvi.
• He had to establish an IS security framework, which was new to the online banking process.
• He had to protect the online banking platform from online hazards while guaranteeing authentication, authorization, integrity, privacy, and non-repudiation.
• HDFC faced a phishing attack in 2007, affecting 28% of its customers. At the time, online banking was not prepared for it, so the bank was implementing corrective measures rather than preventive ones.
• With the new online model came IS risks around the identification, measurement, and monitoring of credit risk, market risk, and operational risk.
• There was also the problem of setting up servers. Customers should not communicate directly with the bank's server, which pointed to the use of PKI. Should the authentication server and online server be located onsite, with the vendor, or in the cloud? Onsite would have fewer potential points of failure. When outsourcing IS services, vendors must not store confidential data. And while maturing the IS systems, each additional security layer increased the complexity of the process, which worked against the customer's ease of access.
• As it was a new implementation, a major issue was finding the loose ends in online banking and tightening security there.
• With the growing mobile platform, they needed to implement different authentication for mobile and online banking. With mobile systems in use, there were also the challenges of technology integration while maintaining the independence of IT, business integration with each business unit dealing with its own risks, and risk integration.
• Dormant accounts were also vulnerable to fraud. New customers would be given secure access when the account was created, but what about existing customers? A fraudster could steal the ID and password of a dormant online customer, make a false registration, set himself up as a beneficiary, and transfer funds during the unguarded, interim period.
• While validating users, there is a high possibility of false positives. With immature IS technologies, false positives were high, which increased customer inconvenience. There was also the dilemma of whether the protocols should authenticate the customer or the transaction.
What Salvi basically faced was adapting to a new architecture at a time when the risks were unknown. It was more of a corrective strategy, which went against the mission of maintaining customer trust and convenience. -
One of India's leading private banks, HDFC Bank shook up the Public Sector Banks (PSBs) starting in 1994, reducing the slow and time-consuming process of depositing and withdrawing funds by implementing 24/7 self-service technologies. The implementation of customer-convenient technologies posed significant challenges to the availability, security, and integrity of a changing banking industry. The Reserve Bank of India (RBI) provided new provisions for new private banks entering the banking sector in 1994 to create competition in a newly re-organized industry. Expensive upgrades to legacy systems put the traditional brick-and-mortar PSBs at a disadvantage in acquiring the new, younger, technology-savvy depositor. As a premier provider of customer-centric IS solutions, HDFC extended its remote banking services into online and mobile banking to target non-traditional virtual banking customers. By adapting to the changing culture, HDFC grew to 10 million customers, 684 branches, and 1,605 ATMs across India between 1994 and 2007.
In August 2007, HDFC clients were sent a fraudulent email from a phishing attacker asking for sensitive account information. Phishing attacks take the form of website links, phone calls, or email messages. The attack entices the recipient to perform an action that compromises their identity in order to steal money or personal information (Microsoft Safety & Security Center, n.d.).
Vishal Salvi was HDFC Bank's Chief Information Security Officer during the attack. He was confronted with many challenges in providing employees and customers an easy-to-use, safe, and secure information system that met or exceeded the RBI banking regulations. HDFC put a heavy focus on customer convenience by investing in real-time technologies. The innovative technologies eliminated the need to visit a local branch, but posed new security risks for the customer. There had to be a balance between quick and secure. The system would have a multi-layered authentication process to identify the account holder and verify that the transaction was accurate. Identification would require a user name and password, called "first level" authentication. Salvi decided to implement "second level" authentication, requiring another set of security fields to identify the user. The second level of authentication is known in banking circles as "secure access" and requires the setup of elements unique to the user, such as an image, a personal message, or answers to a series of questions generated by the system during the log-in process. (Bose, September 24, 2016)
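To make the "secure access" idea concrete, here is a simplified sketch; the data, prompts, and flow are invented for illustration and are not HDFC's implementation. After the username is entered, the site presents the customer's chosen image and personal message, proving the site's authenticity before the password is typed, and can then add a challenge question as the second level.

```python
# Sketch of an image/message anti-phishing cue plus a challenge question.
# All profile data below is made up; passwords are plaintext only for illustration.
import random

profiles = {
    "cust1001": {
        "password": "s3cret",
        "image": "blue_elephant.png",
        "personal_message": "Tea before coffee",
        "questions": {"First school?": "St. Mary's", "Favourite city?": "Pune"},
    }
}

def login(username, password, answer_fn):
    p = profiles[username]
    # Step 1: anti-phishing cue shown before any password is typed --
    # a spoofed site would not know the image or the message.
    print(f"Your image: {p['image']}   Your message: {p['personal_message']}")
    if password != p["password"]:
        return False
    # Step 2: second-level challenge, picked at random each login.
    question = random.choice(list(p["questions"]))
    return answer_fn(question) == p["questions"][question]

ok = login("cust1001", "s3cret", lambda q: "St. Mary's" if "school" in q else "Pune")
print("login succeeded" if ok else "login failed")
```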
The multi-level authentication process gave Salvi a way to minimize the effects of a phishing attack, but system availability and redundancy were also expected in order to satisfy customer convenience. The HDFC IS infrastructure would consist of two sets of servers: an authentication set and an online set. The challenge faced by Salvi was whether to bring both sets of servers onsite, bring one set onsite and host the other, or have both sets hosted by a third-party provider at an offshore location. Bringing the servers in-house would reduce systemic failure because they would be supported by HDFC employees dedicated to each set of servers, but the cost of equipment, payroll, utilities, and other factors made that solution less cost effective. RSA Security is a third-party security solutions provider offering cloud computing solutions and was involved in countering the phishing attack. RSA offered a scalable monthly-fee solution that satisfied the security, ramp-up time, and budget requirements.
You would think the authentication and outsourcing decisions were a "no brainer," but each step in both procedures reduces what HDFC considers convenience. It may take longer for customers to log in due to inexperience or forgotten security answers, or the system may decline an authorized transaction because it does not fall within the validation metrics of the authentication process. What about continuity? Will RSA provide acceptable recovery time if it goes offline? How about access to the hosted environment: will HDFC employees have access to the co-location facility? Careful customer consideration and transparency would be required to maintain a secure environment while meeting the expectations of new and existing remote banking clients.
Works Cited
Microsoft Safety & Security Center. (n.d.). How to recognize phishing email messages, links, or phone calls. Retrieved from https://www.microsoft.com/en-us/safety/online-privacy/phishing-symptoms.aspx
Bose, I. (September 24, 2016). HDFC Bank – Securing Online Banking. Harvard Business Journal, 8.
-
There is another issue brought up in the case regarding server implementation: putting a secure framework in place within a short time, and the cost required to do so.
The cloud model offered by RSA would take about 9 months, while the onsite model would take about 15 months. With the cloud, HDFC Bank could opt for pay-by-use pricing, whereby the bank would be billed only for actual usage.
I think that in solving this problem or dilemma, the security issue was again overlooked. With security issues already occurring while the system was being moved online, a cloud system was yet another new area to explore, with new security threats of its own. -
With HDFC Bank becoming the target of a phishing attack, Salvi, the CISO, was faced with the challenges below:
1. How to ensure the security of online transactions while keeping customer convenience as a priority?
For online transactions HDFC used adaptive risk modelling, where a risk score was assigned to each transaction based on predetermined parameters such as pattern of use, size of the transaction, and geographical location. The higher the risk score, the greater the intervention by the system. Interventions could include OTPs, calls from the bank to verify the transaction, or security questions to verify authorization. HDFC engaged RSA Security as a service provider to monitor ongoing phishing attacks and authorized it to shut down online banking transactions temporarily until the user went to the bank in person to have them re-enabled. The bank also introduced a “cooling period,” wherein transferring funds to a new person required first adding that person as a beneficiary, and the transfer would be initiated only after 24 hours, giving the bank time to check the transaction and the customer time to report fraud. It also implemented three-factor authentication using the three authentication requirements: what you are, what you have, and what you know. Though these measures were necessary, Salvi was concerned that introducing so many security measures complicated online transactions, and he wanted to keep the focus on customer convenience.
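To make the adaptive risk modelling idea concrete, here is a minimal rule-based sketch in Python. The parameters, weights, thresholds, and intervention tiers are all invented for illustration; the case does not disclose the actual scoring model used by HDFC or RSA:

```python
# Illustrative rule-based scoring only; parameters, weights, and thresholds are
# invented and do not reflect HDFC's or RSA's actual adaptive risk model.

def risk_score(txn, profile):
    """Score one transaction against the customer's usual behaviour."""
    score = 0
    if txn["amount"] > 3 * profile["avg_amount"]:
        score += 40                                  # unusually large transfer
    if txn["country"] != profile["home_country"]:
        score += 30                                  # unfamiliar geography
    if txn["hour"] not in profile["usual_hours"]:
        score += 15                                  # atypical time-of-day pattern
    if txn["payee"] not in profile["known_beneficiaries"]:
        score += 15                                  # first transfer to this payee
    return score

def intervention(score):
    """Higher score -> stronger intervention, as the case describes."""
    if score >= 70:
        return "hold transaction and call the customer"
    if score >= 40:
        return "send an OTP or ask a security question"
    return "allow"

profile = {"avg_amount": 5_000, "home_country": "IN",
           "usual_hours": set(range(8, 22)), "known_beneficiaries": {"alice"}}
txn = {"amount": 40_000, "country": "RU", "hour": 3, "payee": "bob"}
print(intervention(risk_score(txn, profile)))        # -> hold transaction and call the customer
```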
2. Should secure access be mandatory or left discretionary?
Dormant accounts were easy targets for phishing and other attacks, since anyone could gain access to such an account without raising an alert. Salvi was planning to introduce a second level of authentication, secure access, for all online customers, which would automatically disable an account if the customer was inactive for a defined period of time. New customers would be provided Secure Access at online registration itself. This created a dilemma about the dormant users among existing online customers, since the bank was not sure how long it should retain them in the unguarded system before disabling their accounts. Salvi had to provide a timeframe for dormant account holders to gain secure access, while keeping that window small enough that it could not be misused.
3. Should the bank use an onsite model or a cloud model?
The proposed IS infrastructure had two types of servers: authentication servers and online servers. Salvi had to decide where to locate them:
a. HDFC’s own datacenter:
In the onsite model the rate of systemic failure was low, as the servers would be on the same network. In-house servers would be costly, however, as the bank would also have to provision for future requirements.
b. Offsite, hosted by a vendor (RSA Security): The internet was the medium of communication, and it was always exposed to threats. A further question was whether the online servers should stay at HDFC’s datacenter with the authentication servers at RSA’s premises. But as the vendor’s location was outside India, transcontinental links were required, which carried a risk of systemic failure. Setting up this system would take about a year and a half.
c. Cloud computing (RSA Security): Here the resources would be hosted in a virtual environment. The main advantage of this model was that the bank would pay only for the capacity it used and could expand as and when the need arose. The cloud model would take 9 months to go live. It offered multiple options for network connectivity: the internet (no additional cost, but less reliable), dedicated bandwidth (reliable, but requiring high investment), or a proxy server hosted by the vendor (leaving the bank with less control).
As setting up an IS system was not the main objective of the business, which was to be a world-class Indian bank, too much investment in the IT system could be a big concern for the firm. Salvi had to decide which model to go with, aligning with the business goals while still considering profitability and maintaining strong customer relations. -
Question 1: What are the challenges faced by Salvi?
Vishal Salvi, the Chief Information Security Officer at HDFC Bank at the time of the case, had several challenges facing him in his new role. As outlined in the beginning of the case, the three major dilemmas Salvi faced were how to ensure the security of online transactions, whether to make secure access mandatory or discretionary, and whether to use an onsite or cloud model for the bank’s information systems and databases. With the increased demand for online banking in India, a phishing scam that affected 1.28 million customers was the trigger for these challenges. While each dilemma is slightly different, each one is aimed at increasing the security of the company.
The first challenge that HDFC Bank and Salvi had to face was finding the right blend of security and convenience. In general, security at its core usually adds some level of inconvenience. While this is not necessarily a bad thing, a lot of security practices are seen as unnecessary by many consumers. If HDFC creates strict security controls for accessing an online bank account, consumers might not understand the necessity of those controls and favor another bank instead. However, if security controls are not adequate, then HDFC can become the target of data breaches and phishing scams. I think the pattern most banks and businesses follow is that during the early stages of a business, security is not a high priority, mostly since they are not a large target. However, as the bank or business becomes more popular and successful, stronger security controls are put in place. Since HDFC wants to establish authentication and validation controls that involve customer interaction, it needs to be careful about which controls to implement without pushing away potential customers.
Salvi answered the first challenge by establishing multi-factor access to online banking. This multi-factor authentication required that the user establish a list of security questions, set up personal messages, provide their address or telephone number, and use other methods of confirming that the user is the appropriate one. The problem was that HDFC had a number of dormant users who did not use the online functionality but instead used the branch or ATMs. It was easy for Salvi to establish that, once the access control policy was implemented, any new customers going forward would have to use the multi-factor authentication. With the way the IS was established, there was a serious vulnerability for the dormant users. While the case doesn’t identify Salvi’s actions, I would suggest establishing a timeline within which users are required to set up the multi-factor authentication questions before the account is locked online.
The last challenge faced by Salvi was where to establish the location of the servers and HDFC’s IT infrastructure. In my understanding, there are really only two methods of acquiring IT resources: purchasing them or paying a service provider. In the case, Salvi had the option of purchasing a data center to house at HDFC’s headquarters or using the security service provider, RSA, which offered either an offsite database or cloud computing. The difficulty is that each choice has its pros and cons. The major benefit of having the database “in house” is that it sits within HDFC’s headquarters, making it more accessible since it is on the bank’s own network. The con, however, is that this option is the most expensive. The less expensive option of using RSA has issues of its own, namely the need to create a safe means of accessing the data as well as reliance on a third party.
Overall, Salvi had to face some serious challenges to address the security of HDFC Bank. In most cases I have examined, the answer is usually to implement a basic change or shift focus from business efficiency to security. However, in the HDFC Bank case, the challenges didn’t necessarily have a clear-cut answer, making Salvi’s decisions that much more difficult.
-
The ubiquity of the internet and banking reforms in India have made HDFC Bank one of India’s leading private banks, with deposits over $15 billion in 2007. Along with the internet, the demand for online banking steadily increased, and it was considered to be the “banking of the future.” As Chief Information Security Officer for HDFC Bank, Vishal Salvi’s primary objective was to make certain that HDFC’s online banking was secure from cyber threats while maintaining a balance between security and customer convenience. The four challenges faced by Salvi are: addressing phishing attacks on HDFC Bank’s customers; implementing security controls without interfering with customer convenience; whether or not to add the “secure access” model to dormant online accounts; and deciding on a new information systems server location that would optimize the bank’s ability to deliver financial services to its customers.
Phishing is one of the nine most common online threats facing banks and financial institutions. To combat phishing targeting its customers, HDFC contracted RSA Security to provide a 24/7 command center that would monitor for ongoing phishing scams and shut down online banking transactions as necessary. Salvi also introduced a “cooling period,” where transactions to an unknown third party would be held for 24 hours to allow the bank time to verify the transaction with the account holder. The bank also sent out awareness messages to its customers in an effort to educate them on the dangers of phishing. With all of these additional controls, Salvi had to make sure that HDFC did not overdo it and create an inconvenience for the customers.
HDFC also had to ensure that the security controls applied to each online transaction were as invisible to the customer as possible. Some of these controls are user IDs and passwords, tokens, account profiling, and even biometrics. With every additional layer of security control, the complexity of the system grows, making it more difficult for a customer to complete online transactions. Salvi had to decide whether the information security protocols should authenticate the account holder or authenticate the transactions. Identity authentication focuses on the proper identity of the account holder, which may be verified using biometrics or security tokens. Transaction authentication uses instruments such as HDFC’s “adaptive risk modeling” to build a profile that lets the bank flag any abnormal transactions on an account.
Secure access, in banking terms, refers to additional security measures enforced by a system to authenticate the identity of a user. This may require the user to select a pre-chosen image, answer personal security questions, provide an address or phone number, or select a personal message. It may also require account holders to provide a list of known beneficiaries, or third-party accounts, to which the customer makes periodic transfers. Dormant accounts are accounts whose holders registered for online banking but never made transactions over the internet. Dormant accounts were very susceptible to fraud, since attackers could gain access to them without raising any flags. Salvi had to decide whether secure access should be applied only to active online accounts or to dormant online accounts as well. He also had to decide how long the bank should wait before disabling a dormant account’s online privileges, since leaving it without secure access might provide an open window for hackers to gain unauthorized access to the account.
Lastly, Salvi had to decide how he would manage the IS infrastructure for HDFC’s growth. He had to choose whether the bank’s authentication and online servers should be located onsite or offsite. Having the servers onsite, at HDFC’s datacenters in India, would give the bank control of the system’s availability and security. The disadvantages of onsite servers are the upfront costs, the management of idle capacity, and the inability to scale up or down efficiently with demand. Cloud servers give the bank the advantages of scalability, pay-per-usage, and minimal initial investment costs. Cloud servers also require an additional communication medium between the bank and the provider, which needs additional security measures. The main disadvantages of having the servers in the cloud are issues with connection reliability and the lack of control over the third party’s security management processes. With this decision Salvi must also factor in the cost and time of implementing each type of infrastructure.
-
Hi Paul,
This is a very good summary of the case. Thank you for sharing. Aside from being expensive, having onsite servers also requires additional physical security controls. Some other cons, as mentioned in the case, are scalability and idle capacity. For a growing online customer base, HDFC would need to ensure that a new onsite datacenter has enough capacity to serve new customers, but not so much that the maintenance cost of unused capacity drains the bottom line. With offsite servers, not only would the bank have to rely on a third party for security, but it would also have to provide a communication medium that did not affect the availability of critical systems. Overall, like you said, it’s a very difficult decision for Salvi, not only between time and cost, but also between the security and availability of the new IS infrastructure.
-
3. What are the challenges faced by Salvi?
The challenges faced by Salvi are:
• It was Salvi’s principal mandate to make certain that HDFC Bank’s online banking platform was secure from online hazards
• Online banking had two components: net banking and mobile banking. Mobile banking was a new concept in India and people were not yet comfortable with it, but since it was considered the banking medium of the future, it needed to be promoted.
• HDFC Bank’s IS framework, built in light of the changing ecosystem, was just at the beginning of the curve and had three dimensions: technology integration, business integration, and risk integration.
• It was a challenge to meet the following major aspects of all three dimensions:
o For technology integration, IS should be independent of the larger information technology (IT) scenario at the bank.
o For business integration, each business division in the bank should be accountable for the costs and risks associated with IS.
o For risk integration, employees should look at IS risks as part of the bank’s overall risk management rather than as a standalone risk.
• Phishing was one of the nine common online frauds concerning banks, and HDFC was the fourth bank in India to encounter it, but HDFC was quick to take corrective measures.
• Another challenge was to ensure that the IS protocols were not so rigorous as to cause inconvenience to customers. It was important to secure regular annual increases in new customer accounts while ensuring that existing customers stayed with the bank.
• It was a challenge to keep IS transparent to the customer while at the same time making it effective from the bank’s point of view.
• Reducing the false-positive rate was a challenge, since the IS technologies were not yet mature and the IS processes were not yet stabilized, which made the process time-consuming for customers, who perceived it as an inconvenience. Maintaining the bank’s competitive positioning was also a challenge.
• Another challenge was managing both identity authentication of the account holder and transaction authentication, while at the same time keeping the process simple for customers.
• Managing the security of dormant accounts was a challenge. The bank needed to decide whether it should provide secure access to every registered online user or limit secure access only to active users. A timeframe needed to be defined for dormant users to seek secure access before disabling their accounts, while keeping the window small enough to prevent misuse during the interim period.
• Deciding the server location (whether it should be onsite or offsite) was another decision to be made. -
Q: What are the challenges faced by Salvi?
Based on the case, Salvi faced three challenges: how to balance the security of online transactions with customer convenience, whether secure access should be mandatory or discretionary, and whether to choose an onsite model or a cloud model.
The first challenge Vishal Salvi faced was the balance between the security of online transactions and customer convenience. In general, as described in the case, each online financial transaction had two minimum requirements for approval: validation and authentication. Validation required a customer’s user ID and password, which allowed the bank’s security system to recognize the account holder. Authentication required a six-digit number from the customer’s physical device, which confirmed the person’s identity. Furthermore, additional checks covered the size of the transaction and the customer’s location and IP address.
However, with the increased risks of online banking, Salvi wanted to strengthen security; at the same time, he was concerned that implementing a new security system would hurt customer convenience. If Salvi continued with the same level of security, customers would still find online banking convenient, but the low level of security would also leave them exposed to high risk. On the other hand, if he increased the level of security, the system would be more trustworthy, but its complexity might push customers away and lead to a loss of customers. This was Salvi’s first challenge: the balance between security and convenience.
The second challenge Vishal Salvi faced was whether secure access should be mandatory or discretionary. A number of online banking users registered with HDFC would almost never use the internet and instead used physical branches or ATMs. Those dormant accounts were easy targets for fraudsters, who could enter them without raising any alert. So Salvi was planning to implement a second level of authentication, known as “secure access,” for all online customers to ensure their security. The second level of authentication included details specific to the account holder and strengthened the system’s validation process. Furthermore, HDFC would disable access for those who did not use the internet; once they needed it again, they would have to regain Secure Access. For new customers, Salvi planned to provide Secure Access as soon as they registered an online account.
Even though Salvi already had his plan to implement Secure Access, he still could not decide whether to make it mandatory or discretionary. If he made Secure Access mandatory, security would be optimal and he already had plans to implement it, but the inconvenience would hurt customers’ experience of online banking and could lead to a large loss of customers. On the other hand, if he kept it discretionary, it would be convenient for customers, but the higher security risk would be a big concern. This was Salvi’s second challenge: whether Secure Access should be mandatory or discretionary.
The third challenge Vishal Salvi faced was whether to use an onsite model or a cloud model. Based on the case, an onsite model would carry a low rate of systemic failure because its servers would be built within HDFC’s own local area network. The cloud model’s advantages were that it was fluid and elastic, but it would require a separate internet connection between HDFC and its IS vendor, and the vendor’s location outside India created additional concerns about systemic failure and transcontinental links.
Salvi was concerned that the onsite model would require a longer timeframe and more expenditure than the cloud model, which had a shorter timeframe and allowed pay-by-use pricing.
Overall, across these three challenges, Salvi’s goal was to keep online banking secure. However, before he made any decisions, he had to balance several elements, including customer convenience, security risk, timeframe, and expenditure.
-
What are the challenges faced by Salvi?
In August 2007, HDFC Bank, one of India’s leading private banks, was the target of a phishing attack. Customers received e-mails claiming to have originated from the bank and seeking sensitive account information, including passwords and personal identification codes. Phishing is one of the most common online frauds aimed at banks and financial institutions, and with India’s growing prevalence of online banking, banks have to set up countermeasures to prevent such attacks. Vishal Salvi, HDFC Bank’s CISO, would like to improve HDFC Bank’s information security to prevent such attacks from happening again; however, he faces customer convenience, secure access, and server location challenges in pursuing that goal.
Customer Convenience
The first challenge Salvi faced is the impact on customer convenience of attempting to make online banking more secure. One of the primary purposes of offering online banking is to make banking more convenient and available to customers wherever they are, as long as they have internet access. Salvi intends to implement additional layers of protection through systems that authenticate either the identity of the account holder or the transaction. The checkpoints involved in authenticating the account holder require authentication instruments such as biometrics (“what they are”) and tokens (“what they have”). Authentication of transactions, on the other hand, concentrates on the integrity of the transaction process. It relies on internal systems that analyze a customer’s historical transaction amounts and recipients and raise a red flag if any transaction falls outside the customer’s normal activity. Salvi is contemplating which security to implement by weighing the cost-benefit of security versus customer convenience.
Secure Access
The second challenge Salvi faced is establishing secure access for dormant users. Salvi is planning to introduce a second level of authentication for all of HDFC Bank’s online customers. This introduces another authentication instrument, in which individual customers incorporate specific details into their accounts, such as security questions and images (“what they know”), which become part of the validation process for the customer’s online banking. Activating this new level of security is a non-issue with active or new users; the problem, however, lies with dormant users, who represent approximately 20% of HDFC’s customers registered online. These accounts are vulnerable to attackers and fraudsters because the actual users do not monitor them. If a perpetrator is able to gain access to a dormant account, they can set the secure access of that account for themselves and gain complete control of it. Salvi is faced with the decision of whether to activate the secure access feature and disable dormant user accounts, risking the loss of a significant number of registered users.
Server Location
The third challenge Salvi faced is establishing server locations for the authentication servers and online transaction servers. In building the proposed IT infrastructure mentioned above, Salvi will need to decide whether to have the servers built in-house or outsourced to RSA Security. RSA Security had built up competent cloud-based servers that allow data to be stored in the virtual world. The main advantage of outsourcing to RSA Security is the flexibility it provides: HDFC can use data storage as needed without being limited by server capacity. RSA Security has offered a bundled package to host HDFC’s hardware, software, networks, services, and interfaces in the virtual world with pay-by-use pricing. Given what RSA Security is offering, setting up the authentication servers in the cloud seems the wise option. Salvi would then need to decide on network connectivity, whether through the internet (cheapest but unreliable), dedicated bandwidth (costly but reliable), or a proxy server hosted by the vendor, where hardware and software architecture would gradually be installed in the bank’s own infrastructure.
-
3. What are the challenges faced by Salvi?
According to the beginning of the case, Vishal Salvi, the new Chief Information Security Officer of HDFC, was facing three dilemmas in strengthening the bank’s online security following a phishing attack in 2007 that affected 1.28 million online banking customers. Those challenges are outlined below.
First dilemma: How to ensure the security of an online transaction while still keeping customer convenience as a priority?
The first security challenge for Salvi was to find the right balance between convenience and security. These two components conflicted with each other: customers sought simplicity and wanted the system to be trustworthy, whereas HDFC Bank aimed to increase the complexity of online banking security to avoid data breaches and phishing attacks. Customers could be discouraged from online banking if the bank set up overly strict and complicated security controls and policies. I would describe online banking security as an onion, with multiple layers protecting the money and personal information of online banking users.
In response to the challenge:
Multi-factor authentication:
Salvi established multi-factor authentication in response to the security challenge. This multi-factor authentication requires users to select a security image, establish a personal message, provide the correct address or telephone number, and answer security questions. Nowadays, this process has been implemented by most banks to verify the identity of account holders.
RSA Security:
In addition, Salvi signed on with RSA Security, a third-party security provider, to set up a 24/7 command centre to monitor ongoing attacks and, with authorization from HDFC Bank, temporarily shut down the bank’s online transactions.
Cooling period:
Moreover, bank account holders were required to establish a list of “beneficiaries.” Transfer of funds to a new person who was not listed would take at least 24 hours. This time window gave the bank time to check the transaction and confirm the account holder’s authorization.
Educational alert:
After the security disaster happened, HDFC Bank frequently educated account holders about the hazards of phishing by sending awareness messages.
Second dilemma: Whether he should make secure access mandatory or leave it discretionary.
There were still large numbers of registered online customers who would never use the internet; even though they had registered for online banking, they instead used offline media such as ATMs or visited a branch in person. These types of users posed a risk because fraudsters could gain entry through them without raising an alert.
In response to this challenge, HDFC Bank would disable access for dormant customers who did not use the online medium regularly. They would then need to visit a branch in person with an ID to gain secure access once again.
Third dilemma: Whether he should go for the onsite model or the cloud model in terms of time, money, and security.
HDFC Bank had to choose the right location for two types of servers, authentication servers and online servers, because each model had its own pros and cons.
Onsite model: the servers would be located in HDFC’s data centres in India, carrying a low rate of systemic failure. The main advantage was that it would give the bank higher data availability and security. The disadvantage of onsite servers was the higher cost.
Cloud computing: business applications would be stored in the virtual space of the internet on shared infrastructure, and the bank could customize it according to its computing needs. Cost was one of the main advantages because of pay-by-use pricing; another was scalability for the future. The main disadvantages were lower reliability of the network connectivity and concerns about data security for customers.
Overall, these three dilemmas pushed Salvi to reinforce the information security defenses at HDFC. In order to maximize information security and minimize vulnerability following a phishing attack on the bank’s customers, I believe both parties, the bank and its customers, have a responsibility to secure themselves by having the right attitude toward account protection and certain online behaviors.
Source: HDFC Bank: Securing Online Banking, Harvard Case
https://cb.hbsp.harvard.edu/cbmp/content/55253616 -
What are the challenges faced by Salvi?
As Salvi said, there are three major dilemmas: How to ensure the security of online banking while still giving priority to customer convenience? Whether secure access should be made mandatory or discretionary? The onsite model or the cloud model?
The first one: the emergence of phishing attacks and other online frauds, along with an ever-changing external environment, put high demands on HDFC Bank’s information security framework. In order to secure each online transaction from hazards, multiple standard checks were implemented; validation and authentication were the minimum requirements, complemented by additional checkpoints based on, for example, the risk score of each transaction or the profile of the customer.
Each new layer adds to the complexity of the process and may lead to customer inconvenience or, worse, potential customer loss. So achieving a trade-off between the security of the online process and customer convenience matters a lot for retaining old customers and attracting new ones.
The second one: Salvi was planning to introduce a second level of authentication for all online customers to ensure security, because there were many dormant users who were vulnerable to online fraud without any alert being raised. Salvi wondered whether to provide secure access to every registered online user or only to active users. This challenge is quite similar to the first one: balancing security and customer convenience.
The third one: the onsite model versus the cloud model. An onsite model, as an integral part of HDFC’s own local area network, carried a low probability of systemic failure, while the cloud model faced potential systemic failure caused by unreliable internet connectivity or required upfront investment in dedicated bandwidth. Besides, the cloud model could scale relevant computing services up or down depending on users’ needs, while the onsite model sat idle and was not scalable. In addition to the fundamental IS issues, time and cost should be taken into consideration: an onsite model would take longer than the cloud model, and the pay-by-use pricing offered by the cloud model is more sustainable and flexible.
-
Shahla Raei
MIS 5206
HDFC: Securing Online Banking
What are the challenges faced by Salvi? As the CISO of HDFC Bank, Salvi was working on strengthening the bank’s information security framework.
Here are the challenges that Salvi was dealing with:
– Keeping the newly established IS framework secure.
– He was concerned about IS security in five different aspects of keeping online transactions secure: authentication, authorization, privacy, integrity, and non-repudiation.
– Moving customers from offline banking to online banking.
– All banks were required to conduct risk management and security vulnerability assessments at least once a year. HDFC was still at the initial risk management model, and he wanted to make sure that all platforms were secure.
– Phishing was one of the most frequently occurring online frauds concerning Salvi.
– He needed to ensure that the IS protocols were not so rigorous as to inconvenience customers.
– Reducing the false-positive rate.
– Securing access and considering a second level of authentication (to distinguish between returning users and new users).
– By implementing the mobile platform, the bank needed to implement different authentication levels. -
Salvi was faced with a number of unique challenges that he was forced to address as HDFC became an online bank. The first challenge was the dilemma of striking a balance between customer convenience and implementing controls to ensure that customers’ mobile and internet banking was secure and maintained its “trustworthiness.” The first question related to securing and implementing these controls was whether to authenticate at the transaction level or at the account-holder level. They ultimately decided to focus on authenticating the account holder by combining authentication of an electronic persona, through things like biometrics, with RSA tokens representing “what you have.” The only way for an individual to gain access was to have both of these correct, and even today this is a fairly common authentication control.
The next dilemma that Salvi addressed was the issue of secure access (a second layer of authentication for the account holder). Here he implemented a system that required customers to predetermine and set beneficiaries of the account, or authorized users. At this second level of authentication, customers were required to select an image, a personal message, customer information such as an address or phone number, and answers to unique questions that had previously been set by the account holder. This created an issue for dormant online accounts: account holders who had held accounts for some time but were not using any of the online features and had not registered for online use. This left a gaping vulnerability in the system, because anyone intent on committing fraud could fairly easily register those accounts for online use with readily available information and create their own answers to the validation questions.
The final dilemma that Salvi faced was where to house the authentication servers and the online servers (where the actual banking takes place). His options were either to house them onsite at the bank’s own data centers or to leverage a service provider for cloud computing. Both options had their advantages and disadvantages. If he were to house them in the existing data centers, it would be easier to ensure the availability of the servers, because they would be integrated with the bank’s own LAN and there would not be another communication link to keep up for availability purposes or to secure against egress points. Even with the additional network to worry about, though, the cloud option seemed to be the better one. Every business wants to be as scalable or elastic as possible and to have the agility to respond to unforeseen circumstances such as changing customer tendencies and unexpected growth. In addition, with the cloud model they did not have to invest internal resources in the ongoing maintenance of hardware and software, patches, failed hardware, and so on; this responsibility is outsourced to the cloud provider. Also, if Salvi wanted a hybrid solution, where part of the system sat in their own data center while the cloud was leveraged for certain features, the cloud is an a la carte offering, meaning the customer does not have to purchase an entire hosted solution but can pick and choose among storage, database, integration, testing, and infrastructure services. Even though the cloud rollout would still take the better part of a year, the factors mentioned above, along with the tax implications and HDFC Bank’s ability to write off a significant portion of the cloud computing costs as operating expenses (since it was a pay-by-use model), favored the cloud. The internal option, by contrast, would require striking a balance when ordering the necessary hardware: accounting for future growth without overestimating and leaving too many idle resources purchased and sitting in the environment.
-
Wen Ting Lu
MIS 5206
Case 1 HDFC Bank – Securing Online Banking
In this case analysis, I will describe the three major challenges that Vishal Salvi, the new Chief Information Security Officer, was facing, assess the pros and cons of each alternative, and finally follow with recommendations on how to overcome the challenges.
The first challenge Salvi faced was improving transaction security and mitigating security risks while ensuring customer convenience to maintain good customer relationships. Salvi wanted to strengthen online banking security by using a combination of validation and authentication for every transaction: each transaction had to have proper validation in the form of a user ID and password, and it also required proper authentication, which proves “what the customer has.” At the same time, however, Salvi was concerned that implementing this new security system would hurt customers’ convenience and keep them from breezing through online banking because of security access barriers. My recommendation for this challenge is to apply the new secure system, two-factor authentication, only to unrecognized devices. This not only keeps online banking secure, it also remains convenient for customers who consistently use the same device for online banking (a rough sketch of this idea appears below).
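A rough sketch of that recommendation, assuming a device is recognized by a simple fingerprint of its user agent and IP address (real systems use richer signals); the names and the OTP step are placeholders, not HDFC functionality:

```python
import hashlib

def device_fingerprint(user_agent: str, ip_address: str) -> str:
    """Derive a stable identifier for the browser/device combination."""
    return hashlib.sha256(f"{user_agent}|{ip_address}".encode()).hexdigest()

def needs_second_factor(fingerprint: str, trusted_devices: set) -> bool:
    """Challenge with the second factor only when the device has not been seen before."""
    return fingerprint not in trusted_devices

# A first login from a new laptop triggers the extra step; later logins do not.
trusted = set()
fp = device_fingerprint("Mozilla/5.0 (X11; Linux)", "203.0.113.7")
print(needs_second_factor(fp, trusted))   # True  -> send an OTP / ask secure-access questions
trusted.add(fp)                           # remember the device after a successful challenge
print(needs_second_factor(fp, trusted))   # False -> password alone suffices next time
```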
The second challenge Salvi faced was whether to implement secure access for all online users or make it discretionary and limit secure access only to active users. According to the statistics mentioned in the article, about 20% of the registered online customers were dormant users. These users never used the internet and instead preferred offline media such as ATMs or visiting a local branch in person. However, dormant accounts were vulnerable to phishing attacks and provided a great opportunity for hackers to gain entry without any alert. There are two alternative courses of action for this challenge. The first is to prohibit logins from dormant users without warning. This would quickly resolve the dormant-account vulnerability, but it would inconvenience customers because their accounts would have been disabled. The second is to give dormant account users a warning before disabling their accounts and to rewrite the current IT governance with dormant accounts in mind. This is less prone to irritate customers and still addresses the dormant-account vulnerability. My recommended course of action is to rewrite the IT governance policy and warn dormant account users that their accounts are under threat. Rewriting the IT governance policy would not only give HDFC Bank a company-wide IT policy stating how to deal with dormant accounts, it could also be marketed to customers as evidence of the bank’s care for the safety of their accounts. At the same time, an education platform can be created to help customers understand why the changes are being made to secure access and to awaken dormant account users to the importance of online security.
Lastly, Salvi faced the challenge of determining where to locate HDFC Bank’s servers, either onsite or offsite in a cloud model. There are pros and cons to both. The article mentions that an onsite model carried a low rate of systemic failure because the servers would be an integral part of HDFC’s own local area network, while an offsite model required a separate medium of communication between HDFC Bank and the IS vendor: the internet. However, the onsite model leaves capacity idle and is not scalable, since a data center has a fixed capacity and cannot expand or contract with users’ computing needs. Compared with the onsite model, the offsite cloud model can expand and contract depending on users’ needs, making it possible to scale computing services up and down. My recommended course of action is to implement a cloud-based solution: technology grows rapidly and is hard to predict, and the elastic capacity of the cloud model addresses this. In addition, the offsite cloud model has the benefit of pay-by-use pricing. -
Vishal Salvi, Chief Information Security Officer of HDFC Bank, has to make several very tough decisions: how does he ensure the security of an online transaction while still keeping customer convenience a priority, should he make secure access mandatory or leave it discretionary, and should he go for an onsite model or the cloud model?
Salvi looked for ways to ensure the security of an online transaction while keeping convenience high for customers. One way Salvi decided to do this was by confirming that the bank would introduce a 24-hour “cooling period,” during which funds would not transfer to an unlisted account until the time period had elapsed. This would give the bank time to check the transaction, and it would allow the bank user to alert the bank if they noticed something was wrong. Salvi would also have the bank send phishing-awareness messages to educate customers on the hazard. Both measures fit his strategy of security without inconvenience.
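A minimal sketch of how such a cooling-period check might look, assuming beneficiaries are stored with the timestamp at which they were added; the data model is invented for illustration:

```python
from datetime import datetime, timedelta

COOLING_PERIOD = timedelta(hours=24)

def can_transfer(payee, beneficiaries, now=None):
    """Allow a transfer only if the payee was added as a beneficiary over 24 hours ago."""
    now = now or datetime.utcnow()
    added_at = beneficiaries.get(payee)
    if added_at is None:
        return False                      # payee must first be registered as a beneficiary
    return now - added_at >= COOLING_PERIOD

# A payee added two hours ago is still inside the cooling period.
beneficiaries = {"new_payee": datetime.utcnow() - timedelta(hours=2)}
print(can_transfer("new_payee", beneficiaries))   # False -> hold, giving time to spot fraud
```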
Salvi addressed the issue of mandatory secure access with plans to enforce second-level authentication. One way Salvi would do this was by making sure that every large transaction had standard validation and authentication “checks.” However, he had to decide whether he wanted the bank to authenticate the identity of the account or authenticate the actual transaction, which is a convenience issue. Another issue with secure access was that dormant accounts were extra vulnerable to attacks, so Salvi also had to decide whether the bank should provide secure access to every registered online user or limit secure access to only active users. Salvi seemed to be leaning towards the option of having dormant account users lose access to their online accounts and having to come into the bank to re-enable them.
In regard to the location of the servers, Salvi has two options: an onsite model or an offsite/cloud model. The benefits of the onsite model are a low rate of systemic failure, total control of the network and data, better security against hackers, and a better client/customer relationship. The negatives of the onsite model are longer implementation, increased costs, a high upfront investment, the need for specialists to protect against cyber-attacks like phishing, and the requirement for each department to maintain all software and hardware.
An offsite/cloud model is more fluid and flexible: it requires less upfront investment, its cost is not fixed but directly related to the amount of bandwidth used (and can be written off as an operating expenditure), the cloud makes it convenient for customers to bank online, and fewer employees need to be hired with this model. The cons of the offsite/cloud model are that a third party controls the data, the chance of systemic failure increases due to the separate medium of communication, HDFC has no control over the servers, the bank may have to purchase its data back if it ends the partnership with the cloud company, and there are open questions concerning the transcontinental links.
All of these decisions will be tough, because Salvi will have to make them based on the bank’s core activities (providing and facilitating financial services) rather than hardware and software maintenance, upkeep of websites, management of data centers, and provision of links at ATMs. These types of decisions will only get harder as more and more users convert to online accounts. Salvi will also base his decisions on his ability to ensure security without inconveniencing account holders, so that the bank can secure regular annual increases in new customer accounts while ensuring that existing customers stay with the bank. -
Priya & Vaibhav,
The decision to move functions to the cloud, or to outsource, is a difficult one to make. The two main factors I see in this decision are:
1. Control – Do you want the ability to control the environment? Make changes, add and remove controls, etc. I see the difference between a company’s cloud solution and an on-site solution as being about control, not so much functionality.
2. Cost – Do you want to pay for it upfront or forever through a monthly cost? You will have to try to estimate a break-even point based on the number of users/licenses, including variable costs like support as well as fixed costs like hardware (a rough break-even sketch appears below). This is why it is important to have a council made up of the individuals who are using the solutions.
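To illustrate the break-even idea, here is a back-of-the-envelope sketch; every figure is a made-up placeholder rather than a number from the case or from any vendor:

```python
# Every figure below is a made-up placeholder, not a number from the case.
upfront_hardware = 500_000           # one-time cost of an in-house deployment
inhouse_support_per_month = 8_000    # staff, power, maintenance for the in-house option
cloud_fee_per_user_month = 4         # pay-by-use subscription price
users = 10_000

cloud_monthly = cloud_fee_per_user_month * users          # 40,000 per month
extra_cloud_cost_per_month = cloud_monthly - inhouse_support_per_month

# Months until the cumulative cloud bill catches up with the upfront purchase.
breakeven_months = upfront_hardware / extra_cloud_cost_per_month
print(f"Cloud becomes the more expensive option after ~{breakeven_months:.1f} months")
```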
-
Fred – Although I believe control and cost are factors, I do not think they are the main ones.
I think all of these decisions will be based on the bank’s core activities (providing and facilitating financial services) and how the bank can provide these services in the best way possible, allowing for annual increases in new customer accounts while ensuring that existing customers stay with the bank. With that said, I think the two main factors are 1) security (without inconveniencing the account holders) and 2) growth.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years, 1 month ago
Presentation: Slides
Video: Video
Quiz w/solutions: Quiz w/solutions
-
David Lanter wrote a new post on the site ITACS 5206 8 years, 1 month ago
What is meant by the term “acceptable information system security risk”? Who within the organization determines what is the acceptable level of information system risk? How does an organization determine what is […]
-
The term “acceptable information system security risk” means that a given information system security risk is not high enough for the organization to worry about. In practice, accepting a risk occurs when the cost of managing the risk outweighs the cost of handling the loss.
The authorizing official (or designated approving/accrediting authority) is a senior management official or executive with the authority to formally determine the acceptable level of information system risk.
In order to determine what is an acceptable level of risk, the organization must perform a security risk analysis, which is part of a nine-step risk assessment process and should involve the following:
1. Control analysis
2. Likelihood determination
3. Impact analysis: determine the impact to the systems, data, and the organization’s mission. Impact levels are described using the terms high, moderate, and low.
4. Risk determination: the level of risk to the system and the organization can be derived by multiplying the rating assigned for threat likelihood (the probability obtained in step 2) by the rating assigned for threat impact (obtained in step 3). For example, the probability assigned for each threat likelihood level is 1.0 for high, 0.5 for moderate, and 0.1 for low, and the value assigned for each impact level is 100 for high, 50 for moderate, and 10 for low. Then, using a risk scale, the risk is classified as low (1 to 10), moderate (10 to 50), or high (above 50).
If an observation is described as low risk, the system’s authorizing official must determine whether corrective actions are still required or decide to accept the risk.
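Plugging the scales above into a short script shows how the classification falls out; this is only a worked illustration of the arithmetic described in the post, not an official tool:

```python
LIKELIHOOD = {"high": 1.0, "moderate": 0.5, "low": 0.1}
IMPACT = {"high": 100, "moderate": 50, "low": 10}

def risk_level(likelihood: str, impact: str) -> str:
    """Multiply the likelihood probability by the impact value, then bucket the score."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score > 50:
        return f"high ({score})"
    if score > 10:
        return f"moderate ({score})"
    return f"low ({score})"

print(risk_level("moderate", "high"))   # 0.5 * 100 = 50.0  -> moderate
print(risk_level("high", "high"))       # 1.0 * 100 = 100.0 -> high
print(risk_level("low", "moderate"))    # 0.1 * 50  = 5.0   -> low
```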
-
The term “acceptable information system risk” reflects the practical reality that, in spite of building security measures and risk mitigation features within an organization, risk can never be reduced to zero. Since risk cannot be reduced to zero, it is important to determine how much to spend on lessening it to an acceptable level. We can explain this with an example: despite the measures taken by a bank to secure its online banking system, there will always be attempts by hackers to break into the system. This can never be reduced to zero, so it is important to determine how much to spend to bring the system to an acceptable level of risk.
Acceptable risk levels should be set by management and based on the business’s legal and regulatory compliance responsibilities. Information security managers play an important role in deciding the acceptable level of risk, balancing the company’s operational costs against building a robust security mechanism.
Some of the steps for conducting a risk analysis are defined below:
1) Control analysis – analyzing the controls used in the organization to protect the system.
2) Likelihood determination – likelihood ratings are described in the qualitative terms high, moderate, and low, and are used to describe how likely a successful exploitation of a vulnerability by a given threat is.
3) Impact analysis – calculating the impact if the risk occurs in the organization and the level of damage it may cause. Impact levels are likewise rated low, moderate, or high.
4) Risk determination – once the likelihood of the risk and its impact have been determined, the risk is calculated by multiplying the rating assigned for threat likelihood (i.e., probability) by the rating assigned for threat impact. The probability assigned for each threat likelihood level is 1.0 for high, 0.5 for moderate, and 0.1 for low, and the value assigned for each impact level is 100 for high, 50 for moderate, and 10 for low. For example, if the likelihood of the risk is high, it is given a probability of 1.0, and if the impact to the organization is moderate, it is assigned a value of 50. The resulting risk to the organization, should the vulnerability be exploited, is 1.0 * 50 = 50. -
Risk, as defined in ISO 27000 series, is the potential that a given threat will exploit vulnerabilities of an asset or group of assets and thereby cause harm to an organization.
Acceptable information system security risk essentially means the level of harm the organization is willing to accept in the event that a threat succeeds in exploiting a vulnerability. It is impractical for an organization to eliminate information security risk completely. Even after security controls are implemented to lessen the occurrence and/or impact of an information security event, there will still be some residual risk. If the residual risk has not been reduced to an acceptable level, the risk management cycle is repeated until enough controls are implemented to make the residual risk acceptable.
Acceptable information system security risk is dependent on the organization, its resources, and its risk appetite. Each organization has its own acceptable risk levels, which are driven by its legal and regulatory compliance responsibilities, its threats, and its business drivers. Management has the responsibility to set the organization’s acceptable risk levels because they understand the business drivers and are ultimately responsible for meeting business objectives.
There are several constraints that play a role in how an organization determines its acceptable level of risk:
1. Time-frame to implement
2. Financial or technical issues
3. The way the organization operates or its culture
4. The environment in which the organization operates
5. Legal framework and ethics
6. Ease of use of security measures
7. Availability and suitability of personnel
8. Difficulties of integrating new and existing security measures
Due to these constraints, organizations may not be able to implement appropriate security controls, or the cost of implementing controls may outweigh the potential impact of a security event occurring. The organization must conduct an appropriate risk assessment process for each potential risk to the organization. -
The term “acceptable information system security risk” is determined in the risk treatment process, which is the fundamental goal of going through the risk assessment and the other prerequisites to the risk treatment phase of the risk management methodology. The idea is that after going through the context evaluation and risk assessment phases of the methodology, and when analyzing the appropriate course of action to minimize the cost of implementing controls to mitigate the identified risk (the ultimate goal of the overall process), it is determined that the organization will live with the risk and the potential consequences of a security event affecting the asset. This occurs either when the risk is deemed too unlikely to occur or when implementing controls to mitigate the identified risk is too costly and fails the cost-benefit analysis.
The acceptable level of risk should be decided by the steering committee within an organization. The steering committee should have the necessary stakeholders from all sides of the business that are impacted by the identified risk. This would include executive management of the lines of business as well as executive management from the owner of the overall risk management process, i.e., the CISO or CIO. It is important that all aspects of the business be included when creating a security steering committee or oversight committee.
-
What is meant by the term “acceptable information system security risk”? Who within the organization determines what is the acceptable level of information system risk? How does an organization determine what is an acceptable level of risk?
The term “acceptable information system security risk” is the level of risk that a company is able to tolerate. This could mean that the impact of the risk would not adversely affect the company too much if it were to occur, or that the risk is deemed too unlikely to happen.
The acceptable level of risk is determined by the senior management of the organization. They determine the level of financial impact the organization is able to absorb and the probability of risk the organization is willing to accept.
The acceptable level of risk of an organization is determined through conducting a risk analysis.
The steps of the risk analysis are:
1) System characterization – knowing exactly what in the organization is at risk
2) Threat identification – knowing what or who the threats are that could lead to the risk
3) Vulnerability identification – knowing the potential flaws that could be exploited by the threat
4) Control analysis – analyzing the controls that are implemented or could be implemented to reduce or eliminate the probability of the risk
5) Likelihood determination – estimating the probability ratings of risks in defined terms such as low, medium, and high
6) Impact analysis – estimating the level of damage if the risk were to occur, in defined terms such as low, medium, and high
7) Risk determination – the level of risk can be determined using a risk-level matrix, multiplying the likelihood and impact ratings determined beforehand and defined in terms such as low, medium, and high (a small illustration of such a matrix follows this list)
8) Control recommendation – this is where the acceptable level of risk is determined; a cost-benefit analysis is conducted to determine whether a control investment is worth the risk it could mitigate
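The risk-level matrix in step 7 can be pictured as a simple lookup table. The sketch below uses the common qualitative convention for the cell values, which the post itself does not spell out, so treat it as an illustration only:

```python
# Row = likelihood, column = impact; cell values follow the common qualitative convention.
RISK_MATRIX = {
    ("high",   "high"): "high",   ("high",   "medium"): "medium", ("high",   "low"): "low",
    ("medium", "high"): "medium", ("medium", "medium"): "medium", ("medium", "low"): "low",
    ("low",    "high"): "low",    ("low",    "medium"): "low",    ("low",    "low"): "low",
}

def determine_risk(likelihood: str, impact: str) -> str:
    """Look up the qualitative risk level for a likelihood/impact pair."""
    return RISK_MATRIX[(likelihood.lower(), impact.lower())]

print(determine_risk("medium", "high"))   # -> "medium"
```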
-
Question: What is meant by the term “acceptable information system security risk”? Who within the organization determines what is the acceptable level of information system risk? How does an organization determine what is an acceptable level of risk?
Generally, the acceptable information system security risk includes two situations:
1. The information system security risks are initially in an acceptable level. For example, many employees may forget their user name or passwords, and not allowed to access their PCs. In this case, employees forget their passwords is a high frequency low damage risk, and most of information systems existing process can allow employees find back their passwords, so the risk is in an acceptable level.2. The frequency and damage of the risks are mitigated to an acceptable level. For example, the firewall of a core servers is a protective control which can prevent the core servers of an organization from hacking. Moreover, with the assist of corrective controls like backup systems and disaster recovery plans, the frequency and damage of risks are acceptable.
The head of the IT department, or a senior executive such as the CIO, is usually the one who determines what the acceptable level of information system risk is.
To determine what is an acceptable level of risk, I think the decision maker should compare the cost of mitigating the risks with the potential damage the risks may cause. For example, if the company is a start-up, spending millions to build a top-level firewall is too expensive. In this case, the company can spend less money and build a backup system instead. Even if attacks damage the servers, the backup system can ensure the business recovers in a short time. Since start-ups usually do not hold many high-value information assets, corrective controls can keep the risks at an acceptable level.
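One way to make this comparison concrete is an annualized-loss-expectancy-style calculation (my own illustration; the dollar figures and names below are hypothetical and not from the post).
```python
# Hypothetical comparison of annual control cost vs. expected annual loss.

def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """Expected yearly loss = cost of one incident * expected incidents per year."""
    return single_loss * annual_rate

# A start-up weighing a high-end firewall against a cheaper backup/recovery setup.
expected_loss = annualized_loss_expectancy(single_loss=40_000, annual_rate=0.5)  # $20,000/yr
firewall_cost = 150_000   # annualized cost of the "top-level" preventive control
backup_cost   = 15_000    # annualized cost of the corrective control

for name, cost in [("firewall", firewall_cost), ("backup system", backup_cost)]:
    decision = "worth implementing" if cost < expected_loss else "accept or treat differently"
    print(f"{name}: ${cost:,} per year vs. expected loss ${expected_loss:,.0f} -> {decision}")
```
Under these made-up numbers the backup system is justified while the expensive firewall is not, which mirrors the reasoning above.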
-
The acceptable information system security risk is essentially the level of risk that an organization is willing to tolerate. It is impossible to prevent every risk, nor is it feasible to implement every possible control or risk prevention/mitigation measure. Therefore, it is necessary to allocate resources to the risks with the highest probability and/or the highest impact. Some risks may be extremely rare but have a high impact, so a company might decide to accept the risk because the probability is so low that resources are better spent elsewhere. Alternatively, a risk may have a high probability and very low impact, so controls/mitigation may either be less of a priority or not addressed at all.
Credit cards are an excellent example of the latter. Credit cards in Europe utilized the EMV chip for decades because it was more secure, while those in the US did not. Although effective at reducing fraud, many companies decided it would be more expensive to implement the technology than to absorb the fraud losses at the time. However, credit card fraud grew so prolific in recent years that the cost became too onerous, and the chips were eventually implemented. Clearly, the decision was made by weighing the impact against the cost of risk mitigation.
The acceptable level of risk should be determined by management. That determination should involve the CIO, IT security subject matter experts, legal and regulatory considerations, and the financial implications of both the impact and the cost to implement controls.
-
Paul,
This is a good explanation of acceptable risk level. Organizations will sometimes have to decide how many controls are needed to reduce their risk to an acceptable level. Like an example given in class, the chance of a thermonuclear war is very low, but if it happens the impact would be devastating. There is probably nothing an organization could do to prevent the event from happening, but it could reduce the impact by, exaggerating of course, building a facility underground. The cost of such an endeavor may be too extreme for the company to handle, so it might simply choose to accept the risk based on the resources it has available.
-
Brou,
Good way to put it: “when the cost of managing the risk outweighs the cost of handling the loss.” I would just like to add that, in the real world, attaining zero risk is impossible. But after risk avoidance controls are in place, the residual risk should be acceptable. There are different degrees of risk that consequently require different degrees of safety.
-
The term “acceptable information system security risk” means that the remaining information system security risk is low enough that the organization does not need to treat it further. No organization is ever totally without risk, but there are steps that can be taken to establish an acceptable level of risk that can be properly managed.
Acceptable risk should be determined by management based on the business’s regulatory compliance obligations and its business objectives. When determining risk, a business must measure the loss of revenue, unexpected costs, or inability to continue production that would be experienced if a risk actually occurred. Information security professionals need to serve as the bridge between the threats and management.
-Identifying company assets.
-Ranking assets in order of priority
-Recognizing each asset’s potential vulnerabilities
-Calculating the risk for the known asset
Selecting the countermeasures to mitigate the calculated risks, and carrying out a cost-benefit analysis for those countermeasures, is up to senior management; from there they can decide how to treat each risk.
-
The main aim of risk assessment is to help the decision-making process verify whether the risk is at an acceptable level or not, and what measures can be taken to bring it to an acceptable level.
When the cost of the risk is smaller than the mitigation cost, it is reasonable to accept the risk. In this case, however, the organization must be able to provide the rationale behind risk acceptance. In order to assess the level of risk, the organization must estimate the likelihood and impact of occurrence.
The risk assessment process defines how to calculate the likelihood and impact –
1. Identifying Threats – Identify business, environmental and natural threats
2. Identifying Vulnerabilities – Conduct vulnerability scans and penetration testing
3. Relating Threats to Vulnerabilities – Relate each threat to the vulnerabilities it could exploit
4. Defining Likelihood –
It is the probability that a threat caused by a threat source will be exercised against a vulnerability (a small sketch of these rating bands follows this list).
– Low: 0–25% chance of occurrence of the risk
– Moderate: 26–75% chance of occurrence of the risk
– High: 76–100% chance of occurrence of the risk
5. Defining Impact
Impact can be defined in terms of confidentiality, availability and integrity, and quantified in terms of low, moderate and high.
6. Assessing Risk – Draw a likelihood and impact matrix to determine the risks and their levels.
Typically, business managers, not IT security personnel, are the ones authorized to accept risk on behalf of an organization.
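Here is a minimal sketch of the likelihood bands described in step 4. The band edges are the ones quoted above; the function name and the example probability are my own.
```python
# Map a probability of occurrence (0.0-1.0) to the qualitative rating bands above.

def likelihood_rating(probability: float) -> str:
    if probability <= 0.25:
        return "Low"
    if probability <= 0.75:
        return "Moderate"
    return "High"

# e.g., a vulnerability judged to have a 60% chance of being exploited this year
print(likelihood_rating(0.60))   # -> "Moderate"
```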
It depends upon the business what level of risk it can tolerate. In general it can depend upon the following factors:
– Legal/Government rules
– Timeline to implement mitigation action
– Organizational policies, objectives
– Interest of stakeholders -
The term “Acceptable Information System Security Risk” outlines the Information Security Risks and the level of exposure the company is willing to endure.
Management is responsible for identifying the risks and deciding what is an acceptable level because they know the operation of the business and the impact behind each function.
The level of business risk a company holds depends on the organization’s unique variables. Management will build a risk profile to determine what is an acceptable level of risk. This helps assign value to determine which mitigation techniques will be used and how much money will be spent on the risks.
Two companies may perform similar operations, but the management of each company may set different risk levels for the same operation. There is no right or wrong answer; it depends on management’s perceptions.
-
The term “acceptable information system security risk” reminds me of another term – “risk appetite.” Risk appetite is the amount of risk, on a comprehensive level, that an entity is willing to accept in pursuit of value. A risk that falls within the range of the risk appetite could be deemed an “acceptable information system security risk,” that is, a risk for which the cost of implementing appropriate measures to reduce it would outweigh the potential loss if it occurred.
The way to determine the acceptable level of risk is through risk analysis. The steps of risk analysis are:
1 Control Analysis
2 Likelihood Analysis – to consider a threat source’s motivation and capability to exploit a vulnerability, the nature of the vulnerability, the existence of security controls, and the effectiveness of mitigating security controls
3 Impact Analysis – considering the impact to the systems, data, and the organization’s mission, along with the criticality and sensitivity of the system and its data, to determine the level of impact on the system
4 Risk Determination – to obtain the level of risk to the system and the organization based on previous analysis by multiplying the ratings assigned for threat likelihood (e.g., probability) and threat impact. -
Acceptable information system security risk is the level of risk a company is willing to accept when the impact on the company and the cost to fix it are low. It also reflects the idea that the risk does not affect customers too much. The CIO and CEO determine the level of acceptable risk because 1) the CIO is in charge of IT and sets policies and procedures to mitigate risk and respond to system security incidents, and 2) the CEO oversees the company as a whole, making sure the assets of the company are safe. Together they can perform risk analysis to determine what level of risk they are willing to accept through methods such as a cost analysis. If a risk is low and has barely any impact on the company, they will accept it; if it is too high, they will try to find ways to bring it down to an acceptable level or create stronger policies to prevent it from affecting the company too much. They look at scenarios in which the probability of a certain event occurring is low, moderate or high. If a risk has low impact and does not require much to fix, the organization will accept it and not worry too much about it. At moderate and high levels, they tend to look more closely and figure out what ways they can use to mitigate or eliminate it.
-
Paul, you make a great point. It is good to include stakeholders and get their views on what is an acceptable level of risk by utilizing a steering committee. The CIO/CISO are major players here because they can give more informed and closer insight, since they deal with the systems on a daily basis, and the CEO is another major player because the CEO oversees the company and understands how costly some risks could be to the organization.
-
My weekly news post is about a video that relates to the Wells Fargo fraud. As we talked about last week, Wells Fargo was fined $190 million because of 1.5 million fake accounts created by a multitude of employees. Out of the $190 million fine, only $5 million will go to the victims.
The company fired more than 5,000 employees and said it will invest in training and improve its controls. The outrageous thing is that nobody is going to jail. A fraud has been committed and no one is being held responsible for it. This kind of fraud should result in up to 15 years in prison.
Plus, the fine represents only about 3% of Wells Fargo’s revenue ($5.6 billion) in the second quarter of 2016. The government should be stricter, otherwise other banks will do the same knowing the punishment won’t be harsh. https://www.facebook.com/BenSwannRealityCheck/videos/1205025702895711/
-
I agree that two similar companies may have different risk management practices, and there is no single superior strategy. However, there are some risk management practices that provide an excellent framework/guidelines. For example, transferring risk by purchasing insurance is sometimes not advisable unless there are regulatory requirements to consider. A risk that has a high probability and low impact should not be insured, but rather retained by the company; insurance is generally not a good risk management practice for low-impact risks, regardless of frequency. However, a company may decide that it does not want to retain the risk and would rather have predictability. Even in this example insurance may not be a good choice, yet different companies may pursue diverging risk mitigation practices with positive results.
-
I totally agree with your opinion, Deepali. From a decision maker’s perspective, balancing the cost of the risk against the cost of governing the risk is very important. For example, risks with high frequency and high damage should be handled first. The management of an organization should also consider its specific circumstances to decide which is the best way to mitigate the risks.
-
Yes, I also think the organization can mitigate the impact even if it might not prevent the risk from happening. Compared with preventive controls, I think corrective controls like backup systems and disaster recovery plans also play an important role in mitigating risk. If an organization is a start-up, it might not need to invest millions in building a top-level firewall, but an available backup system can fit what it needs.
-
Great post, Ming Hu. You brought up a good point about risk appetite. I read about it in more detail:
An organization should consider risk appetite when aligning its organizational goals.
To determine risk appetite following steps should be taken:
1. Develop risk appetite
2. Communicate risk appetite
3. Monitor and update risk appetite
However, there are two important caveats:
(1) articulating risk appetite can be very difficult, and
(2) communicating risk appetite does not by itself contribute to the growth of the organization.
Moreover, the cost of managing risk sometimes outweighs the main objective of the business. Determining risk appetite is an element of good governance that management and boards owe to stakeholders.
-
Hi Priya,
Thanks for giving a great explanation of how an organization assesses risk and verifies whether the risk is acceptable or not. Actually, to assess risk, an organization can create a sample risk management table including the risk description, impact, likelihood, risk management strategy, cost, and residual risk after implementing the risk management strategy, so that it can determine the level of risk it is able to accept or tolerate (a small sketch of such a table follows).
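Here is a minimal sketch of such a table as a data structure. The column names follow the comment above; the two entries and their figures are hypothetical.
```python
# A tiny, illustrative risk management table (entries are made up).

from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str
    impact: str            # low / moderate / high
    likelihood: str        # low / moderate / high
    strategy: str          # accept / mitigate / transfer / avoid
    control_cost: float    # annual cost of the chosen strategy
    residual_risk: str     # rating after the strategy is applied

register = [
    RiskEntry("Ransomware on file servers", "high", "moderate", "mitigate", 25_000, "low"),
    RiskEntry("Brief email outage", "low", "high", "accept", 0, "low"),
]

for entry in register:
    print(f"{entry.description}: residual risk {entry.residual_risk} "
          f"(strategy: {entry.strategy}, cost ${entry.control_cost:,.0f})")
```
-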
Ming, great answer. I have heard it referred to as risk appetite as well. I think a company’s risk appetite is also affected by company culture. Some companies are by design riskier than others, and vice versa, because many companies survive off taking risks; it is the nature of their business. For example, life insurance companies often take on high risk because it is necessary in that field.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years, 1 month ago
What is an information risk profile? How is it used? Why is it critical to the success of an organization’s risk management strategies and activities?
-
An information risk profile is an evaluation of the types, amounts and priority of information risk that an organization finds acceptable and unacceptable (risk appetite).
Organizations use a risk profile as a way to mitigate potential risks and threats.
An information risk profile is critical to the success of an organization’s information risk management strategy and activities because it provides valuable insights into an organization’s information risk appetite and expectations for information risk management.
-
What is an information risk profile? How is it used? Why is it critical to the success of an organization’s risk management strategies and activities?
An information risk profile documents the types, amounts and priority of information risk that an organization finds acceptable and unacceptable. This profile is developed collaboratively with numerous stakeholders throughout the organization. It is used to understand and manage risks in the organization.
Plus, the risk profile is critical to the success of an organization’s risk management strategies and activities because it is the tool the organization uses to benchmark the different risks it can face. Knowing what it can and cannot accept allows the organization to develop appropriate strategies.
-
What is an information risk profile? How is it used? Why is it critical to the success of an organization’s risk management strategies and activities?
An information risk profile records different kinds of information risks based on their types, amounts and priority, which measures the amount of risk that an organization wants to accept. The elements of this profile include many different kinds of opinions from stakeholders related to the organization.
An information risk profile should include guiding principles because they provide accurate information and help evaluate threats, vulnerabilities and risks to an organization. It helps the organization manage and mitigate risks to reduce the possibilities of all kinds of risks.
It is critical to the business because it helps the organization reduce the likelihood of risks materializing, gives decision makers the information they need to make informed decisions, and allows the organization to analyze the acceptability of risks.
-
An information risk profile documents the types, amounts and priority of information risk that an organization finds acceptable or unacceptable. It is a quantitative analysis of the types of threats to an organization.
The profile should include guiding principles aligned with both the organization’s strategic directives and its supporting activities. It is developed by stakeholders throughout the organization, including business leaders, data and process owners, and enterprise risk management. The information risk profile should also include the organization’s data classification schema and a summary of the control requirements and objectives associated with it (a small sketch follows below).
Risk profiling is an important tool for the investment process. Decision makers in a company can refer to the information risk profile developed and endorsed by the organization’s business leaders. The profile provides important insights and guidelines for information risk identification and management.
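As a rough illustration of tying a data classification schema to control requirements, here is a small sketch. The classification levels and controls are hypothetical, not from the ISACA article.
```python
# Hypothetical data classification schema with associated control requirements.

CLASSIFICATION_SCHEMA = {
    "public":       {"encryption_at_rest": False, "access_review": "annual"},
    "internal":     {"encryption_at_rest": False, "access_review": "semiannual"},
    "confidential": {"encryption_at_rest": True,  "access_review": "quarterly"},
    "restricted":   {"encryption_at_rest": True,  "access_review": "monthly"},
}

def required_controls(classification: str) -> dict:
    """Look up the control requirements attached to a data classification level."""
    return CLASSIFICATION_SCHEMA[classification]

print(required_controls("confidential"))
```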
-
Very well explained Brou. I would like to add an example to this.
If a drug company does not properly test its new treatment through the proper channels, it may harm the public and lead to legal and monetary damages. Failing to minimize risk could also leave the company exposed to a falling stock price, lower revenues, a negative public image and potential bankruptcy.
-
Shahla, since you brought up the topic of investment, I want to point out that organizations should be careful about over-relying on risk-profiling tools. In a banking context, for example, such tools only assess a client’s attitude to risk and capacity for loss. By using only this approach, advisers could fail to take into account clients’ overall investment objectives or other key factors that also need to be considered.
I think organizations overall shouldn’t over-rely on the risk profile, although it is a crucial step in managing risk.
-
What is an information risk profile? How is it used? Why is it critical to the success of an organization’s risk management strategies and activities?
According to the ISACA, An information risk profile documents the types, amounts and priority of information risk that an organization finds acceptable and unacceptable. An organization’s information risk profile should include guiding principles aligned with both its strategic directives and the supporting activities of its IRMS program and capabilities. This information should be listed early in the profile to allow the reader to understand its context and intent. Common guiding principles include the following:
Ensure availability of key business processes including associated data and capabilities.
Provide accurate identification and evaluation of threats, vulnerabilities and their associated risk to allow business leaders and process owners to make informed risk management decisions.
Ensure that appropriate risk-mitigating controls are implemented and functioning properly and align with the organization’s established risk tolerances.
Ensure that funding and resources are allocated efficiently to ensure the highest level of information risk mitigation.
An information risk profile is critical to the success of an organization’s information risk management strategy and activities. It provides valuable insights into an organization’s information risk appetite and expectations for information risk management. Information risk and security professionals and programs that effectively leverage this information in their actions and activities can be confident in their alignment with business requirements and expectations.
-
Definition: An information risk profile is an evaluation of an organization’s willingness (usually rated as high, moderate or low) to take risks, as well as the threats to which the organization is exposed.
How to use: A risk profile is important for determining a proper investment asset allocation for a portfolio. Organizations use a risk profile as a way to mitigate potential risks and threats.
Why it is critical: according to ISACA’s article, an information risk profile is critical to the success of an organization’s information risk management strategy and activities. It provides valuable insights into an organization’s information risk appetite and expectations for information risk management. Information risk and security professionals and programs that effectively leverage this information in their actions and activities can be confident in their alignment with business requirements and expectations
-
What is an information risk profile?
An information risk profile records different categories of risk according to their types, amounts, and priority, and classifies which risks the organization finds acceptable and unacceptable.
How is it used?
The information risk profile provides important insights and guidelines for information risk identification and management. The ERM function can leverage the information provided by the profile as it calculates the overall enterprise risk and develops control objectives and management practices to effectively monitor and manage it.
Why is it critical to the success of an organization’s risk management strategies and activities?
In my opinion, the information risk profile is critical because it reduces the friction between decision makers and IRMS, and helps information risk and security professionals and related programs be confident in their alignment with business requirements and expectations.
Friction exists between decision makers and information risk management and security (IRMS) because of misunderstandings of each other’s activities and motives. The information risk profile can reduce this friction, as it is mutually developed and both IRMS and decision makers can use it to guide their respective activities.
It provides valuable insights into an organization’s information risk appetite and expectations for information risk management, so that information risk and security professionals and related programs can be confident in their alignment with business requirements and expectations.
-
What is an information risk profile? How is it used? Why is it critical to the success of an organization’s risk management strategies and activities?
According to ISACA, an information risk profile is a quantitative analysis that documents types, amount and priority of information risks that an organization finds acceptable and unacceptable.
An organization’s information risk profile should be structured and formatted in a fashion that quickly demonstrates its value and intent to the organization, is easily understood and applicable to the organization as a whole, and is viewed as useful and beneficial to its leaders and stakeholders. The following can be useful in meeting these goals.
How it’s used:
Guiding Principles and Strategic Directives
An organization’s information risk profile should include guiding principles aligned with both its strategic directives and the supporting activities of its IRMS program and capabilities. This information should be listed early in the profile to allow the reader to understand its context and intent.
Common guiding principles include the following:
• Ensure availability of key business processes including associated data and capabilities.
• Provide accurate identification and evaluation of threats, vulnerabilities and their associated risk to allow business leaders and process owners to make informed risk management decisions.
• Ensure that appropriate risk-mitigating controls are implemented and functioning properly and align with the organization’s established risk tolerances.
• Ensure that funding and resources are allocated efficiently to ensure the highest level of information risk mitigation.
Why critical?
An information risk profile is critical to the success of an organization’s information risk management strategy and activities. It provides valuable insights into an organization’s information risk appetite and expectations for information risk management. Information risk and security professionals and programs that effectively leverage this information in their actions and activities can be confident in their alignment with business requirements and expectations.
-
What is an information risk profile?
-An information risk profile is a quantitative analysis that documents the types, amounts and priority of information risk that an organization finds acceptable and unacceptable. This profile is developed collaboratively with numerous stakeholders throughout the organization.
How is it used?
– An organization’s information risk profile should include guiding principles aligned with both its strategic directives and the supporting activities of its IRMS program and capabilities.
– Also, transparency is a key aspect to the success and adoption of an information risk profile.
– The information risk profile should include a current-state analysis of identified information risk factors that have a reasonably high probability of occurrence and would represent a material impact to business operations if realized. The current-state representation should also include the organization’s IRM views, expectations and requirements.
– The information risk profile should include the organization’s data classification schema and a summary of the control requirements and objectives associated with it.
Why is it critical to the success of an organization’s risk management strategies and activities?
– It provides valuable insights into an organization’s information risk appetite and expectations for information risk management. Information risk and security professionals and programs that effectively leverage this information in their actions and activities can be confident in their alignment with business requirements and expectations. -
What is an information risk profile? How is it used? Why is it critical to the success of an organization’s risk management strategies and activities?
The information risk profile of an organization is produced in collaboration with various stakeholders in the organization. The list of stakeholders can include, business leaders, internal and external audit, legal team, enterprise risk management, compliance team, process owners, etc.
An organization may choose to mark a specific risk as acceptable or unacceptable, which is decided using the types, amounts and priority of information risk and is documented in the information risk profile.
It ensures the availability of key business processes. It also identifies and evaluates threats and vulnerabilities, which is crucial for business leaders and process owners to make informed risk management decisions. It is also important that proper risk-mitigating controls are implemented and functioning properly.
-
2. What is an information risk profile? How is it used? Why is it critical to the success of an organization’s risk management strategies and activities?
Business leaders and information risk management and security (IRMS) professionals often disagree about risk because the business believes in taking risks to achieve its objectives, while IRMS professionals try to mitigate risks and ensure that the organization’s information infrastructure and assets are properly protected. The best method to reduce this tension is to mutually develop and maintain an information risk profile that both can use as a guide.
Information risk profile contains both acceptable and unacceptable risks- the type, amount and priority. It should demonstrate its value and intent to the organization, be beneficial to the leaders and stakeholders and should be easily understandable.
The risk profile provides a basis for business leaders to consider these risks and adjust the organization’s risk profile to its business objectives by modifying the requirements. This way, both IRMS and business leaders work together to align with the organization’s information risk management expectations.
Source: http://www.isaca.org/Journal/archives/2013/Volume-4/Documents/13v4-Key-Elements.pdf
-
An Information Risk Profile is a description of the overall IT risk to which the enterprise is exposed (Risk IT Framework p. 101). The Risk Profile will identify how much value / loss is associated with the risks accepted by the organization.
The Risk Profile is an important document because it outlines the valuable assets of an organization, defines the risks that may threaten those assets, determines the risks management is willing to accept, and sets the expectations for mitigating the remaining risks. Accurately outlining the values and risks will enable organizational leaders to manage information risk (a small sketch of this accepted-risk view follows).
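To illustrate the “how much value / loss is associated with the risks accepted” idea, here is a tiny sketch that totals the expected loss across an accepted-risk portfolio. The risk names and figures are made up.
```python
# Hypothetical portfolio view of risks the organization has chosen to accept.

accepted_risks = {
    "laptop theft":             {"expected_loss": 8_000},
    "minor website defacement": {"expected_loss": 3_500},
    "short power outage":       {"expected_loss": 12_000},
}

total = sum(r["expected_loss"] for r in accepted_risks.values())
print(f"Accepted-risk exposure: ${total:,}")   # -> Accepted-risk exposure: $23,500
```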
-
2. As per ISACA’s Risk IT Framework, the risk profile of the enterprise is the overall portfolio of identified risks to which the enterprise is exposed. The risk profile gives a picture of:
• the key business processes, associated data and capabilities and the type of risk the process is exposed to
• accurate identification and evaluation of threats, vulnerabilities and their associated risk
• information on risk-mitigating controls already in place and whether they are functioning in line with the organization’s acceptable risk levels
The Information Risk profile helps business leaders and process owners to make informed risk management decisions. It communicates whether the funds and resources available are utilized effectively to best mitigate risks in a way that the risk posed is within the company’s acceptable risk threshold. It also serves as a brief risk response plan and helps in planning and tracking risk mitigation activities. -
The information risk profile is the portfolio of all the identified IT risk that the enterprise is exposed to.
This is really important because it weighs the impact of the IT investments a company can make. It allows executives to make decisions based on the likelihood of success and the perils of failure. The goal of these decisions is to reduce the overall risk facing the company. Risks can be accepted, mitigated, offset, or removed.
-
What is an information risk profile? How is it used? Why is it critical to the success of an organization’s risk management strategies and activities?
In the article “Key Elements of an Information Risk Profile,” ISACA defines an information risk profile as: “An information risk profile documents the types, amounts and priority of information risk that an organization finds acceptable and unacceptable. This profile is developed collaboratively with numerous stakeholders throughout the organization, including business leaders, data and process owners, enterprise risk management, internal and external audit, legal, compliance, privacy, and IRMS.”
An information risk profile is critical to the success of an organization’s information risk management strategy and activities. A risk profile is often used when it comes to making decisions, developing, and/or creating an asset allocation portfolio. It is used as a guide to minimize risk and achieve business goals. Organizations tend to use the valuable insights that come from analyzing an organization’s risk profile, specifically information risk appetite and expectations for information risk management, to mitigate potential risks and threats. An information risk profile is needed because organizations identify and embrace risk to achieve business goals.
-
Well explained Alexandra!
I would also like to add that the risk profile will help the organization determine the priority of IT requirements.
It also serves as a plan to manage risks, target spending, and prepare for impacts. This is a proactive means of handling risk.
-
Hi Abhay,
I had never thought about which stakeholders should participate in determining the risk profile. This is a great and clear list. Each of them has different responsibilities in determining the types, amounts and priority of information risk. Many companies hire independent auditors to help discover risks so they can be properly addressed before they become external issues.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years, 1 month ago
What is meant by the term “acceptable information system security risk”? Who within the organization determines what is the acceptable level of information system risk? How does an organization determine wha […]
-
For the “health” of the business, it is very practical to do testing of the Business Continuity Plan (BCP). However, testing (by nature) can be disruptive and intrusive. In the “Disaster Recovery and Business Continuity Planning” article by Yusufali Musaji that we read this week, he gives four methods for testing.
They are:
– Hypothetical
– Component
– Module
– Full
Going down the list, they become progressively more burdensome to implement. However, something Mr. Musaji writes about in his “Setting Objectives” section stood out to me. He mentions documentation testing and “third-party” evaluations of offsite locations (a backup data center, for example). Documentation seems to be the bane of most companies’ and employees’ existence, but there is certainly a need for it in BCP. This documentation can serve as the foundation of what to do in the event of outages or changes, as long as it is high quality and people are trained in how to use it. Furthermore, reaching out to a third party to conduct BCP evaluations frees up your own in-house resources, so the company can be more productive while the evaluations are conducted.
A BCP is a plan that allows a business to decide in advance what it needs to do to ensure that its key products and services continue to be delivered in case of a disaster. A business continuity plan enables critical services or products to be continually delivered to clients. Instead of focusing on resuming full strength after a disaster, a business continuity plan endeavors to ensure that critical operations remain available.
It is very difficult and often impractical to conduct a full BCP test in an organization. Testing can be a major challenge for many organizations because it requires:
1) Management support,
2) Time for preparation and execution,
3) Funding
4) Structured process from pre-test through test and post-test evaluation
5) Client cooperation, which can be hard to obtain because post-test results often suggest solutions that are very costly to implement
The full BCP test verifies that each component under each module is workable and that the complete strategy and objectives are satisfied. So in most cases you would not be able to perform the full test, but you would be able to test all the parts of business continuity separately.
Component and module testing can be a good alternative. It helps verify the details and procedures of individual processes, and the emphasis can be placed on the more critical components.
2. Is it practical to conduct a thorough test of a Business Continuity Plan? Why might it not be practical? If it is not practical, what alternative ways can you recommend for testing a BCP?
It isn’t practical to conduct a thorough test because the plan would affect everyone who is utilizing the environment. This would be a huge project that wouldn’t make business or financial sense. An alternative way to test the BCP plan is to conduct it on a pre-defined environment. Conduct a beta-test on “dummy” users. All companies will perform beta-testing before pushing things down to the end users.
A thorough test of a BCP is not practical. There would be great expense and it could cause disruption to employees. Also, the organization may be outsourcing some of its IT and cannot see inside the provider’s operations.
We can do a lot of disaster recovery testing in general, though, instead of a thorough test. There are four main categories of testing: hypothetical, component, module, and full. Hypothetical tests are there to prove that there is a plan in case something breaks. This is the fastest method of testing.
Component testing is a chunk of instructions from the BCP to be performed, usually for one feature. This can be for verifying compatibility for things such as tape storage, recovery, or security packages.
Module testing is testing that multiple components will work together after being recovered.
Full testing is to check that all the components can be up and running in a certain acceptable amount of time.
Source: “Disaster Recovery and Business Continuity Planning”
Fred, thanks for your post. I don’t agree that thorough testing of the BCP/DRP is impractical or that it doesn’t make business or financial sense. Like you said, testing is required before anything is put into production. Some products may even go through hundreds of tests before being approved. So why shouldn’t a BCP/DRP be put through the same rigor? You don’t know if it will work as intended unless you test it. Yes, some aspects may be expensive, like flipping the switch to the cold site, but I think it would be more expensive to find out that the switch doesn’t work when a disastrous event has already occurred.
Nice post, Vaibhav. Sometimes it is not possible to have a full operational BCP test, as it can be expensive and also result in a loss of productive time. To conduct a full operational test, the organization should have tested the plan well on paper and locally before completely shutting down operations.
Other alternate methods are
1. Desk-based evaluation/paper test: A paper walkthrough of the plan in which what happens if a particular service disruption occurs is studied.
2. Preparedness test: A localized version of the full test wherein actual resources are used to simulate a system crash. It is a cost-effective way to find out whether the BC plan is good.
Usually, both a paper test and a preparedness test are done before conducting a full operational test to ensure that operations do not come to a standstill.
Methods to test BCP:
1. Checklist test
A checklist test determines whether the plan is current, whether the backup site has adequate and correct telephone numbers and contact information, and whether emergency forms, copies of the plan, and any supplemental documentation are available (a minimal automated version is sketched after this list).
2. Structured walk-through test
This test is done team-wise or department-wise, wherein a detailed walk-through of the various components of the plan is conducted. The type of disaster and the parts of the plan that need to be tested are decided by the team leader.
3. Emergency evacuation drill.
A facility evacuation drill should be conducted at least once a year with all employees, to be sure that employees understand how the evacuation should proceed, where to go, whom to reach out to, and how to handle personnel with physical limitations in an emergency.
4. Recovery simulation
In this testing, the team uses equipment, facilities and supplies as they would in a disaster situation provided in the plan. It checks if the team is able to carry out critical functions using the recovery and restoration procedures.
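As a small illustration of what an automated checklist test (method 1 above) might look like, here is a sketch. The field names, dates and thresholds are my own assumptions, not from the article.
```python
# Minimal sketch of an automated "checklist test" for a BCP document.

from datetime import date

plan = {
    "last_reviewed": date(2016, 6, 1),
    "backup_site_phone": "215-555-0100",
    "emergency_contacts": ["ops-manager", "facilities", "it-oncall"],
    "offsite_copies": 2,
}

def checklist_test(plan: dict, today: date) -> list:
    """Return a list of findings; an empty list means the checklist passes."""
    findings = []
    if (today - plan["last_reviewed"]).days > 365:
        findings.append("Plan has not been reviewed in the last year")
    if not plan.get("backup_site_phone"):
        findings.append("Backup site contact number is missing")
    if len(plan.get("emergency_contacts", [])) < 3:
        findings.append("Fewer than three emergency contacts listed")
    if plan.get("offsite_copies", 0) < 1:
        findings.append("No offsite copy of the plan")
    return findings

print(checklist_test(plan, date(2016, 9, 15)) or "Checklist passed")
```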
Loi,
I do think the test should be conducted, but not a thorough test. It is impractical and too expensive. I say this because I use the Merriam-Webster definition of thorough: “including every possible part or detail.”
Using this definition, I believe a thorough test shouldn’t be performed.
With that being said, I do believe tests should be conducted, and those processes that require more cumbersome testing should be tested on a smaller scale and in a replicated environment rather than the actual one; once the replicated-environment test is successful, you would then move to the actual environment, still on a much smaller scale.
This conversation was intriguing, and I decided to ask Bob Deliosi, the tour guide from Sungard, this question. Here was his response and some material on Sungard. He is going to send me a link to the mobile truck they use for clients’ BCPs.
Fred, It was my pleasure, all were very interested.
Here is a link to some Sungard AS Youtube stuff. Looking for the Truck video.
Companies typically do not shut down production services for a BCP test.
Typically, they isolate a DR network and that small team works in that arena testing for a number of hours/days.
Here is the link to show how companies responded to Hurricane Sandy, a few short years ago.
Check out the end when they talk about mobile trucks and how companies worked out of the trucks.