-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What is the difference between identity management and access management?
Why is it important to a business to care about the difference between identity management and access management?
What is the one […] -
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What is the one interesting point you learned from the readings this week? Why is it interesting?
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Recorded lecture: Video
Lecture presentation: Slides
Additional material (not covered in recorded lecture) students are responsible for learning: Read this
Quiz w/solutions: Quiz w/solutions
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Question 1: How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Question 2: […]
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
-
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Network capacity planning is an effective way to determine whether an organization’s network capacity is adequate. A key part of network planning is determining how much bandwidth the network actually needs. QoS is a vital feature in network capacity planning. All links have congestion points and periodic spikes in traffic. QoS policies are essential to ensure traffic spikes and congestion points are smoothed out and more bandwidth is allocated to critical network traffic. Without proper QoS policies in place, all traffic has equal priority, and it is impossible to ensure your business-critical applications are getting sufficient bandwidth. For instance, without detailed knowledge of the type of traffic passing through a network, it is not possible to predict whether QoS parameters for services like VoIP are meeting target levels. Network traffic monitoring gives you the visibility you need to properly plan network capacity and ensure QoS; a tool such as Ipswitch Flow Monitor is invaluable for understanding bandwidth requirements and for network capacity planning.
Inadequate capacity planning can lead to the loss of customers and business. Excess capacity can drain the company’s resources and prevent investment in more lucrative ventures. When capacity should be increased, and by how much, are the critical decisions. Failure to make these decisions correctly can be especially damaging to overall performance when time delays are present in the system.
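The prioritization idea behind QoS can be sketched in a few lines: when demand exceeds link capacity, critical traffic classes are served first and best-effort traffic absorbs the shortfall. This is only a toy illustration; the class names, priorities, and rates below are invented, not taken from any particular product.

```python
# Toy illustration of QoS-style prioritization: critical classes are
# served first; best-effort traffic absorbs any congestion.

def allocate_bandwidth(link_capacity_mbps, demands):
    """demands: list of (class_name, priority, demand_mbps); a lower
    priority number means more critical. Returns {class_name: granted}."""
    remaining = link_capacity_mbps
    allocation = {}
    for name, _prio, demand in sorted(demands, key=lambda d: d[1]):
        granted = min(demand, remaining)  # never hand out more than is left
        allocation[name] = granted
        remaining -= granted
    return allocation

demands = [
    ("voip",        1, 10),   # latency-sensitive, must be protected
    ("erp",         2, 40),   # business-critical application traffic
    ("best_effort", 3, 80),   # bulk/web traffic absorbs congestion
]

alloc = allocate_bandwidth(100, demands)
# voip and erp receive their full demand; best_effort is squeezed to 50
```

Without such a policy every class would compete equally, which is exactly the "all traffic has equal priority" problem described above.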
-
Question 1: How would you determine if an organization’s network capacity is adequate or inadequate?
What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Capacity is the set of resources available to the network. To determine whether the capacity of the network is adequate, you would conduct a performance test and compare the results to the network’s capacity. For example: if a machine has 8 GB of RAM installed, the usable RAM capacity of the system is less than 8 GB (actually well under 8 GB, because the system will shut down if it hits a certain threshold, which is set closer to 7 GB). The impact of a system reaching its maximum capacity would be a system shutdown. To make the system functional again, you would have to increase the capacity of the resource. In my example, you would add more RAM, provided the board/system is able to utilize the added resources (the motherboard may not support the additional capacity).
This is what happens during a DDoS (Distributed Denial of Service) attack. -
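The RAM example above can be expressed as a small adequacy check: usable capacity sits below installed capacity because of the shutdown threshold. This is a rough sketch; the 8 GB installed / roughly 7 GB threshold figures follow the post, and the function itself is only illustrative.

```python
# Sketch of the headroom idea from the RAM example: usable capacity is
# below installed capacity because the system degrades (or shuts down)
# past a threshold.

def capacity_adequate(installed, usage_threshold_pct, measured_usage):
    """Return (adequate, usable) where usable is the effective capacity."""
    usable = installed * usage_threshold_pct
    return measured_usage < usable, usable

ok, usable = capacity_adequate(
    installed=8.0,              # GB of RAM installed
    usage_threshold_pct=0.875,  # shutdown threshold near 7 GB
    measured_usage=6.2,         # GB observed during a performance test
)
# ok is True: 6.2 GB measured is under the ~7 GB usable ceiling
```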
Question 2: Suppose an organization is only able to filter and selectively block either: a) network traffic coming into its intranet from the internet (incoming) or b) network traffic going out from the intranet to the internet (outbound). With respect to each of the 3 information system security objectives (i.e. confidentiality, integrity, and availability), if you could only filter and selectively block one network traffic direction, which one would you concentrate on and why?
I would block all incoming traffic because it would be better for the 3 information system security objectives while still allowing the business to operate; it would just operate the way businesses operated in the 1980s: no email, only phone calls and face-to-face meetings; no website visits, only getting in your car and visiting the store (or browsing the website from another location), but no incoming traffic.
This would only allow a security breach from inside the organization. -
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Network capacity is the maximum capacity of a link or network path to convey data from one location in the network to another. Network capacity planning is a method to determine whether the organization’s network capacity is adequate. A key feature of network planning is determining how much bandwidth the network actually needs.
The common approaches to network capacity planning are as follows:
1. Long-range views of average utilization: this shows a long-term trend of utilization, but the long-term view averages out spikes of high utilization, thus hiding the problem.
2. Peak utilization, e.g. showing the busiest minute for each day in a month: this shows which days had a busy minute, but doesn’t give insight into how long a link is congested.
3. Traffic totals: easy to show all links in a single view, showing the links with the most traffic and even periodic trends such as month-by-month usage. However, it gives no indication of congestion except in extreme cases.
Inadequate network capacity can make the organization’s network unstable, unresponsive, or worse, unavailable. Poor network performance results in ineffective service to the business and unsatisfied customers. Low service levels can cost the organization prospective customers, so network connectivity is important to an organization’s effectiveness and efficiency.
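To see why long-range averages (the first approach above) hide problems, compare the mean of a utilization series against its peak. A small sketch with invented per-minute samples:

```python
# The same utilization series can have a harmless-looking mean while
# the link is saturated during its busiest minutes.

utilization_pct = [20, 25, 22, 18, 95, 98, 21, 24, 19, 23]  # per-minute samples

average = sum(utilization_pct) / len(utilization_pct)
peak = max(utilization_pct)
congested_minutes = sum(1 for u in utilization_pct if u >= 90)

# average is 36.5% and looks fine, but the link hit 98% and was
# congested for 2 of the 10 sampled minutes.
```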
-
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Network capacity must be able meet service-level agreement (SLA) targets of delay, jitter, loss, and availability. SLA requirements for a traffic class can be translated into bandwidth requirements. The ability to meet SLAs is dependent on ensuring that core network bandwidth is adequately provisioned, which depends on network capacity planning.
1. Measure the (aggregate) traffic and forecast its growth. The bandwidth must be able to handle the traffic easily.
2. Test whether the bandwidth is always sufficiently over-provisioned to meet committed SLAs.
3. Perform simulation testing to overlay the forecast demands.
4. Simulations should also take failure cases into consideration.
5. Compare forecast usage with the provisioned bandwidth; if the results diverge, capacity is inadequate.
6. Possibility and investigation of congestion: bandwidth must be distributed such that network availability remains good even during high traffic.
7. Costs: it is important to avoid overload, and no part of the network should be under-provisioned, but costs will increase if the network is heavily over-provisioned and under-utilized.
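Steps 1–5 above can be sketched as a single provisioning check: forecast demand, overlay a failure case, and compare against provisioned bandwidth with some headroom. The growth factor and the 25% headroom below are assumptions for illustration, not recommendations.

```python
# Capacity is adequate if provisioned bandwidth covers forecast demand
# plus rerouted traffic from a failure case, with extra headroom.

def provisioning_ok(measured_mbps, growth_factor, failure_overlay_mbps,
                    provisioned_mbps, headroom=1.25):
    forecast = measured_mbps * growth_factor + failure_overlay_mbps
    return provisioned_mbps >= forecast * headroom

# A link carries 400 Mbps today and is expected to grow 20% over the
# planning horizon; a failure elsewhere could reroute 100 Mbps onto it.
adequate = provisioning_ok(measured_mbps=400, growth_factor=1.2,
                           failure_overlay_mbps=100, provisioned_mbps=1000)
# forecast = 580 Mbps; with 25% headroom we need 725 Mbps,
# so a provisioned 1000 Mbps link is adequate
```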
Companies can perform customized, scenario-based testing of future situations to test whether capacity is adequate:
– What will the response time be if traffic doubles?
– How will applications perform after the addition of a new application, and how many users will it have?
– How will service levels be affected if VMs run at their full capacity?
– How will changes to I/O devices, network bandwidth, and the size and number of CPUs affect daily operations?
Companies can also perform automated calculations to determine network capacity adequacy:
– Which of my applications have failed to meet SLAs in the last 6 months?
– Where will the future bottlenecks be?
– How long will it be before my current configurations fail to meet service levels?
– What response times will applications need during the next month?
If capacity is inadequate, many things could go wrong, chiefly unavailability of the network. Network performance will be slow, which will affect daily business tasks, and the unused time would be a loss, especially during peak hours. This leaves systems and resources inefficient and incurs costs.
http://www-07.ibm.com/services/pdf/nametka.pdf
http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cust_contact/contact_center/icm_enterprise/icm_enterprise_10_0_1/Design/design_guide/UCCE_BK_UEA1023D_00_unified-cce-design-guide/UCCE_BK_UEA1023D_00_unified-cce-design-guide_chapter_01101.html -
To know whether an organization’s network capacity is adequate, the organization must engage in capacity planning and performance management. Capacity planning is the process of determining the network resources required to prevent a performance or availability impact on business-critical applications. Performance management is the practice of managing and monitoring network response time, consistency, and quality. Some of the tools within this process are what-if analysis, baselining, trending, exception management, and QoS management.
The first part of capacity planning and performance management is to gather configuration and traffic information. This allows the organization to observe statistics, collect capacity data, and analyze traffic to create a baseline for the organization’s network capacity. The baseline consists of inventorying resources (software, applications, network communication, VoIP, etc.), users, and the bandwidth required to enable the organization to run its day-to-day and critical business applications.
Once a baseline is established, a trend analysis can help the organization identify network and capacity issues and understand future upgrade requirements. For example, a new internal portal allows the organization’s employees to share videos of community service events. As more users become aware of the new portal, it receives more traffic and the performance of other web applications is reduced.
Once the problem is identified, the organization will plan for the changes and do a what-if analysis to determine the effect of a network change. After it is evaluated, the changes are implemented.
Inadequate capacity can leave employees unable to get the resources they need to do their jobs. For example, if a supply chain application experiences performance issues due to inadequate capacity and an employee is unable to order production materials, then time and money are lost through idle production capacity and resources.
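The baseline-then-exception workflow described above can be sketched in a few lines: establish a baseline from historical utilization, then flag samples that deviate from it. The utilization samples and the 50% exception tolerance below are invented for illustration.

```python
# Baseline from history, then exception management: flag samples that
# exceed the baseline by more than a tolerance.

def baseline(history):
    return sum(history) / len(history)

def exceptions(samples, base, tolerance_pct=0.5):
    """Return samples exceeding the baseline by more than tolerance_pct."""
    limit = base * (1 + tolerance_pct)
    return [s for s in samples if s > limit]

history = [30, 32, 31, 29, 33, 30]   # % utilization before the portal launch
this_week = [31, 48, 52, 30, 55]     # after the video portal launch

base = baseline(history)             # roughly 30.8%
flagged = exceptions(this_week, base)
# [48, 52, 55] exceed 1.5x the baseline and warrant investigation
```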
-
Good Summary Jianhui,
I think that the “long-range view of average utilization” is probably not the best indication of required network capacity. As we know, an average is just an approximate number, not an absolute one. What I mean is: 1) it doesn’t account for capacity utilization during peak hours, and 2) it overstates capacity requirements during idle periods. If you build your network based on the average, then you will experience latency or performance issues during peak hours of usage. I guess the hardest part is the tradeoff between how much capacity the organization wants to have available at all times and how much it is willing to spend to maintain it.
-
Fred, good post. I believe the performance test is the most common way to help an organization determine whether or not its network capacity is adequate. They can then compare the result with previous results, or with the rated capacity, to check the network capacity. It is also important for the network to have excess capacity to help absorb a DDoS attack.
-
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Network capacity is the maximum capacity of a link or network path to convey data from one location in the network to another. (From IT Law Wiki)
One way to effectively identify the parameters that can affect an organization’s network performance or availability within a predictable time is network capacity planning. By conducting performance simulation and capacity analysis based on traffic information, infrastructure capacity, and network utilization, the results indicate the expected load on the network. Planners can then compare it against the provisioned network capacity to determine the maximum capability of current resources and the amount of new resources needed to cater to future requirements.
Inadequate network capacity may lead to network overload, or even a crash of the network.
-
Priya,
This is a very good explanation of this question. You mentioned all the important points that should be considered to determine whether an organization’s network capacity is adequate or inadequate. The point I like most is that tests should be simulated taking failure cases into consideration. It is very important to manage network downtime and failures to prevent loss of business. BCP/DR should always be considered of prime importance when assessing the adequacy of the network.
Let’s take the example of the banking industry. Network downtime for a bank can make online banking unavailable to its customers and can also halt the bank’s daily activities. This can lead to financial loss for its customers as well as for the bank. Hence this should be taken care of while designing the network and testing its adequacy to withstand any kind of unwanted event. -
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Network capacity planning would be an ideal way to identify whether the current infrastructure can support the amount of resources necessary for applications to operate sufficiently during the business’s peak hours. Depending on the network in question and the service provider, most providers offer online tools that enable the client to monitor their traffic in real time as well as set utilization reports to run automatically at certain times of the day or month. Also, there are a number of techniques that can be used, such as traffic shaping and quality of service/class of service, to prioritize traffic and ensure that mission-critical applications get the necessary bandwidth during peak hours.
Suppose a new application was rolled out that was a resource “hog” and was tying up bandwidth, with negative performance consequences for other applications sharing it. This can be very costly to a business, specifically in the financial sector, where real-time information is needed to make money effectively. Any type of lag or jitter in these instances can be devastating in the financial markets. Service providers will back specific types of lines with SLAs surrounding availability, mean time to repair, jitter, latency, etc. It would be good practice to also compare actual measured performance against those SLAs.
-
In order to determine whether network capacity is adequate or inadequate, we need network capacity planning, which includes finding out:
1) Traffic characteristics – type and amount of traffic
• Traffic volumes and rates
• Prime versus non-prime traffic rates
• Traffic volumes by technology
2) Present operational capacity
• WAN percent capacity used
• LAN percent capacity used
3) Evidence of congestion
• Packet discards, which can be checked with a ping operation
• Top error interfaces
4) Network growth over a period of time
• Requires a detailed view into current bandwidth usage, combined with historical accounts of capacity usage
5) QoS
• To check whether business-critical applications are getting sufficient bandwidth
If network capacity is inadequate, it will lead to slow network performance, which will disrupt the company’s critical operations. Delays in deliverables can tarnish the company’s image. During the peak office hours of the morning, a network slowdown will also waste employees’ valuable time if critical applications run slowly.
http://www-07.ibm.com/services/pdf/nametka.pdf
https://www.ipswitch.com/resources/best-practices/network-capacity-planning -
How would you determine if an organization’s network capacity is adequate or inadequate? What impacts could be expected if a portion of an organization’s network capacity is inadequate?
Capacity planning is the process of determining whether the production capacity an organization needs to meet changing demand for its products is adequate or inadequate.
There are three steps for capacity planning:
1. Determine service level requirements
2. Analyze current capacity
3. Plan for the future
To manage capacity effectively and provide adequate bandwidth for critical services, these are some questions to keep in mind: How much bandwidth does your business need? How close to maximum utilization are your servers? Which network interfaces will be most utilized 30 days from now?
If a portion of network capacity is inadequate, it will lead to loss of customers and business, because the network may be interrupted and therefore unable to deliver services to clients. In addition, negative impacts on the flow of business operations, and possibly data corruption, can be expected.
Source: https://www.sevone.com/supported-technologies/capacity-planning-and-bandwidth-management
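A question like “which interfaces will be most utilized 30 days from now?” can be approximated with a simple linear extrapolation of daily utilization. Real capacity tools use richer models; this is only a sketch with invented data.

```python
# Fit a least-squares line to daily utilization samples and extrapolate.

def linear_forecast(daily_pct, days_ahead):
    """Least-squares line through (day, utilization), extrapolated forward."""
    n = len(daily_pct)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(daily_pct) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, daily_pct))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + days_ahead)

usage = [40, 42, 44, 46, 48]          # % utilization, one sample per day
projected = linear_forecast(usage, days_ahead=30)
# growth of 2%/day projects 108% in 30 days: upgrade before then
```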
-
Network capacity is the measurement of the maximum amount of data that may be transferred between network locations over a link or network path. Measuring network capacity is complex, and there are many variables (network engineering, subscriber services, the rate at which handsets enter and leave a covered cell site area) and scenarios that make actual network capacity figures rarely accurate. With that said, a key indicator of inadequate network capacity would be too much network traffic causing bottlenecks in your processes and a slow network, which could be caused by an overload of network errors and network congestion.
-
Great answer, Wenlin. You’ve correctly pointed out that “QoS policies are essential to ensure traffic spikes/congestion points are smoothed out, and more bandwidth is allocated to critical network traffic”. I’d like to add that to quantitatively measure quality of service, various aspects of the network service are often considered, such as error rates, bit rate, throughput, transmission delay, availability, jitter, etc. If a portion of a network’s capacity is inadequate one can expect problems such as dropped packets, latency, out of order delivery and errors due to interference and low throughput.
-
Well put, Vaibhav. You’ve covered the important points for checking network capacity adequacy very well. I especially liked that you mentioned network growth over time as an important area to look into. It is easy to overlook slowly but steadily degrading network performance until a major incident occurs. To ensure that such a scenario doesn’t take place, it is important to look at trends in network performance parameters so that any capacity-related issue can be identified at the earliest opportunity and dealt with appropriately.
-
I want to add some approaches for evaluating network capacity to your comments.
1. Long-range views of average utilization: this shows a long-term trend of utilization, but the long-term view averages out spikes of high utilization, thus hiding the problem.
2. Peak utilization, e.g. showing the busiest minute for each day in a month: this shows which days had a busy minute, but doesn’t give insight into how long a link is congested.
3. Traffic totals: easy to show all links in a single view, showing the links with the most traffic and even periodic trends such as month-by-month usage. However, it gives no indication of congestion except in extreme cases.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Suppose an organization is only able to filter and selectively block either: a) network traffic coming into its intranet from the internet (incoming) or b) network traffic going out to the internet (outbound). […]
-
Question 2: Suppose an organization is only able to filter and selectively block either: a) network traffic coming into its intranet from the internet (incoming) or b) network traffic going out from the intranet to the internet (outbound). With respect to each of the 3 information system security objectives (i.e. confidentiality, integrity, and availability), if you could only filter and selectively block one network traffic direction, which one would you concentrate on and why?
I would block all incoming traffic because it would be better for the 3 information system security objectives while still allowing the business to operate; it would just operate the way businesses operated in the 1980s: no email, only phone calls and face-to-face meetings; no website visits, only getting in your car and visiting the store (or browsing the website from another location), but no incoming traffic.
This would only allow a security breach from inside the organization. -
Although this decision would greatly depend on the type of business and the situation that calls for such a choice to be made, I personally would choose to block outbound traffic. My decision is based on the reasons below, keeping in mind the objectives of CIA (confidentiality, integrity, availability) and assuming that this is only for a short duration (maybe due to an upgrade activity or other infrastructural change taking place):
1) Since most resources and tools that the employees use will be on the intranet, employees would be able to perform their duties either way
2) A considerable portion of incoming traffic would be incoming emails from clients and customers. Receiving these emails is very critical. Emails which require immediate action or response need to reach the employees so appropriate action can be taken. For outbound communication, Employees can always contact customers via phone and inform them to expect delay. Non-urgent emails can be responded to later.
3) Blocking outbound traffic would also mean the confidentiality and integrity of company information is maintained. Availability of company resources to internal employees is not hampered in any case so from CIA objective perspective too, this decision would be the right one.
4) Allowing inbound traffic could open the doors to cyber-attacks, phishing, viruses, etc.; however, having the network secured by a firewall and antivirus would greatly reduce the probability of such an attack succeeding. -
Suppose an organization is only able to filter and selectively block either: a) network traffic coming into its intranet from the internet (incoming) or b) network traffic going out to the internet (outbound). With respect to each of the 3 information system security objectives (i.e. confidentiality, integrity, and availability), if you could only filter and selectively block one network traffic direction which one you would you concentrate on and why?
I would say “b”, because blocking the traffic going out to the internet is like cutting the organization off from the outside world; the network would only work within the company. Blocking outgoing traffic would definitely reduce the risk of an attack. Plus, employees would stay focused on what they are supposed to do, since they couldn’t access other sites such as Facebook, shopping sites, illegal sites, and so on.
It would be counterproductive to block incoming traffic because it would block communication into the company. How would people use the shared drive on the network? -
If I had to choose between allowing inbound traffic or outbound traffic, I would allow outbound traffic (intranet to internet) for security reasons alone and block inbound traffic (internet to intranet). In most data breach cases we hear of, attackers come in on inbound connections rather than outbound ones.
Though there are cases of attacks from within the organization due to human error or negligence, I find inbound traffic to be less secure for the following reasons:
-Confidentiality: With inbound traffic, we cannot be sure of the source the data is coming from. Controls like filters, firewalls, anti-virus, and routers may not be enough to monitor the inflow.
-Integrity: Multiple types of encryption may be used. As we cannot control the environment outside the organization, there are more chances of the data being altered. This exposes the system to malicious mail, viruses, worms, social engineering attacks, and DoS attacks.
-Availability: If data is unavailable due to a server issue or a broken route, it is not easy to get it fixed. It would need third-party involvement and will depend completely on the source to rectify the issue.
Outbound traffic is more secure as it is flowing out of the organization and it is in the control of the network administrator. It becomes easier to predict and provide preventive and corrective controls if needed.
-Confidentiality: Access to send the information is given only to authorized users.
-Integrity: The encryption used to send the data is decided by the network team, so it can be made as secure as the design requires. Alteration of data by a user with malicious intent is rare and can be prevented with proper authorization permissions and segregation of duties.
-Availability: Availability is directly related to the datacenter or the network within the organization, so a defined DRP can help restore the information within the timeline identified by the company. -
Absolutely, I agree.
There is a huge security risk associated with outbound traffic, for instance DDoS attacks. But if you don’t have an open port to move traffic out, the probability of your network being a participant (a botnet node) in such an attack decreases.
There are other risks as well: uncontrolled email and file transfers from your network to an outside network can compromise the confidentiality aspect.
-
Question 2: Suppose an organization is only able to filter and selectively block either: a) network traffic coming into its intranet from the internet (incoming) or b) network traffic going out from the intranet to the internet (outbound). With respect to each of the 3 information system security objectives (i.e. confidentiality, integrity, and availability), if you could only filter and selectively block one network traffic direction, which one would you concentrate on and why?
The answer to this question comes from another question: for what reason is there a need to block the network? The CIA triad is the highest-level objective an organization wants to achieve with respect to its data, and security rules and regulations revolve around how to maintain the CIA.
So if an employee is working suspiciously with some important data and is making efforts to leak it from the organization to the outside world, we would need to block network traffic going out from the intranet to the internet (outbound). The other case where we would want to block outbound traffic is a malware attack through which outside attackers are able to access data. A similar case occurred recently at Wells Fargo, where employees had unauthorized access to customer data. In such a case, customers’ personal data can easily be leaked to the outside world, so blocking the outbound traffic is the step needed in such an incident.
Blocking network traffic coming into the intranet from the internet would be necessary in case of a cyber-attack on the network. The attacks can be virus attacks, distributed denial of service (DDoS) attacks, man-in-the-middle attacks, and so on. It becomes very difficult to control such attacks from outside. Hence blocking inbound traffic is the only option to prevent a data breach and protect the internal network.
-
Fred,
I think it might depend on the nature of the business. If the organization doesn’t need to communicate with the rest of the industry, blocking all incoming traffic is good practice. However, in terms of confidentiality, I think we should block, or at least be sensitive about, outbound information.
-
I strongly agree with you, Deepali, that the decision to choose between allowing inbound traffic or outbound traffic very much depends on the scenario that calls for such a choice to be made in the first place. You gave excellent examples of both such scenarios: one where the need of the hour is to contain data within the company, and one where the need of the hour is to keep external threat agents out. I think it is safe to say that there cannot be one fixed right decision, purely because the decision needs to be made considering many different factors.
-
Mansi,
Good point. I agree with you that it totally depends on the nature of the business. However, outbound traffic is more important in my view. I remember that in the advisory session we had a couple of weeks ago, we analyzed a case where the main problem was that outbound traffic wasn’t protected.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam p […]
-
A Distributed Denial of Service (DDoS) attack is an attempt to make an online service unavailable by bombarding it with traffic from multiple sources.
A spear-phishing attack is carefully crafted and customized to look as if it comes from a trusted sender on a relevant subject. Spear-phishing scams often take advantage of a variety of methods to deliver malware. A spam-phishing attack works by sending mass amounts of junk email and unwanted code to recipients through different methods.
As mentioned, DDoS is a collective effort: it is launched by a large number of computers or bots together to attack a particular website or server by overwhelming it with huge traffic. In this context, spam phishing is the bigger threat for launching a DDoS.
Every network service has limited bandwidth, and if it is flooded by spam email and code, it effectively suffers a denial of service. I want to give an example of how a large amount of spam email can disable work completely.
In my previous organization, while setting up an SMTP server for a client website, I had given my organization email address to test the content of the emails being received. I had mistakenly put the code inside an unending while loop and tested it. Within minutes my entire mailbox was filled with test mails and I could not receive any further emails. Every mail I deleted was replaced by numerous more test emails. My Outlook went completely down. -
Question 3: In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
In the contexts of being attacked by a DDoS, a bigger threat to an organization is Spear phishing because it is often an email from a familiar source, or so it looks.
Spear phishing is a form of email, instant message, or text attack. The attacker poses as a familiar contact and persuades the user into performing a certain action. Since the message appears to come from a reputable source, the user will unknowingly infect the system by performing the action requested. This would be the bigger threat to me because spam is common these days and everyone deletes things they don't recognize. Spear phishing is something you will recognize.
-
In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
Fraudsters use phishing emails to steal personal information. Although the email may look harmless, it can convince employees to follow links or download attachments that are dangerous and compromise PII (name, address, SSN, credit card number, etc.).
Spear-phishing is fundamentally based on the same idea, except regular employees are not the target; the attackers want access to the organization's valuable resources.
Spear-phishers typically gather information from social media sites and other sources to craft highly targeted messages. During an attack, they will send emails to a few employees rather than everyone on the organization's network (as in a phishing email attack), to avoid getting filtered out by anti-phishing software. Malware used by spear-phishers is not like the typical malware that floods the screen with pop-ups; it is difficult to spot. There have even been cases where such malware improved computer performance. Critical information like customer information, confidential files, organization proprietary information, trade secrets, etc. can be compromised.
In a scenario where top executives of an organization were spear-phished and their systems compromised, requests sent from their machines acting as bots will probably not be ignored, so this is the bigger threat, I believe.
-
Spam phishing: Phishing attacks use spam (electronic equivalent of junk mail) or malicious websites (clicking on a link) to collect personal and financial information or infect your machine with malware and viruses.
Spear phishing: Spear phishing is highly specialized attacks against a specific target or small group of targets to collect information or gain access to systems.
A distributed denial-of-service (DDoS) attack is when a malicious user gets a network of zombie computers to sabotage a specific website or server. The attack happens when the malicious user tells all the zombie computers to contact a specific website or server over and over again. That increase in the volume of traffic overloads the website or server causing it to be slow for legitimate users, sometimes to the point that the website or server shuts down completely.
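That "over and over again" pattern is also what a server can look for. A rough sketch, with entirely hypothetical thresholds, of flagging source IPs that exceed a per-window request limit:

```python
from collections import defaultdict

class RateFlagger:
    """Flag clients that exceed `limit` requests within a `window`-second bucket."""

    def __init__(self, limit=100, window=10):
        self.limit = limit              # hypothetical threshold, not a real default
        self.window = window
        self.counts = defaultdict(int)  # requests per source IP in current window
        self.bucket = None              # which time window the counts belong to

    def allow(self, src_ip, now):
        """Return True if this request is under the limit, False to drop it."""
        bucket = int(now // self.window)
        if bucket != self.bucket:       # new time window: reset all counters
            self.bucket = bucket
            self.counts.clear()
        self.counts[src_ip] += 1
        return self.counts[src_ip] <= self.limit
```

The catch, and the reason DDoS is hard to filter, is that distributed attack traffic comes from thousands of zombie IPs each staying under any per-IP limit, so a simple counter like this mostly catches single-source floods.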
As for a DDoS attack, spear phishing should be the bigger threat because spam phishing is easier to control than spear phishing. First, email services have built-in spam identification. Second, spam is easier to spot if people are aware enough not to click links or download files in unwanted emails. On the other hand, spear phishing is more targeted at collecting specific information. For example, a cybercriminal may launch a spear phishing attack against a business to gain credentials to access a list of customers. Once they have gained access to the network, the emails they send may look even more authentic, and because the recipient is already a customer of the business, the email may more easily make it through filters and the recipient may be more likely to open it.
https://staysafeonline.org/stay-safe-online/keep-a-clean-machine/spam-and-phishing
https://www.getcybersafe.gc.ca/cnt/rsks/cmmn-thrts-en.aspx -
Vaibhav,
I think your example makes clear the ways in which spam can take down a network and computer systems. I think the question being asked this week is “double-dipping” in a way.
For spam phishing the example you gave makes it clear that overloading a network or system can bring it down.
However, spear phishing could also bring networks/systems down if the target that was compromised had privileged access, or something to that effect.
-
Question 3: In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
For a DDoS attack, I think spear phishing is a more targeted form of phishing, whereas spam phishing involves malicious emails sent to any random email account. Spear phishing emails are designed to appear to come from someone the recipient knows and trusts, such as a colleague, business manager or human resources department. Attackers may study victims' social networks, like Facebook, LinkedIn or WhatsApp, to gain intelligence about a victim and choose the names of trusted people in their circle to impersonate, or a topic of interest to lure the victim and gain their trust.
Nowadays, organizations' computers already have spam-blocking software or tools installed to keep these kinds of spam phishing emails out of their accounts. So spear phishing has become more popular than spam phishing among attackers, because people already have awareness of spam but still lack awareness of spear phishing, since the emails are sent by "familiar" people. -
Question 3: In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
In the context of being attacked by or becoming a resource for a distributed denial of service (DDoS), I would say that spear phishing is the bigger threat to an organization's network and computer resources. A Distributed Denial of Service (DDoS) attack is an attempt to make the target unavailable by overwhelming it with traffic from multiple sources. A typical spear phishing email is extremely deceptive, as it attempts to represent an identity that is trusted or related to the business itself or the user's interests. It is more likely that users will open the email and download the malicious file. As for spam phishing, I believe this kind of phishing email would be detected by the email security system, and users are less likely to open emails whose contents they don't recognize.
One of the most recent examples was Dyn suffering from a DDoS attack, resulting in network and system downtime for many hours. Social network users could not access their social media accounts, and commercial transactions could not go through.
Definition of Spear phishing:
Spear phishing is a highly specialized attack against a specific target or small group of targets to collect information or gain access to systems.
Definition of Spam phishing:
Spam phishing is the abuse of electronic messaging systems to indiscriminately send unsolicited bulk messages, many of which contain hoaxes or other undesirable contents such as links to phishing sites. -
Hi Mengxue, great post. I like how you stated that spear phishing should be a bigger threat to an organization because it targets the specific organization to collect specific content. And your outstanding example shows how a spear phishing hacker might ask for the customer list. Spear phishing does not have to carry malicious software or a botnet; it can also ask for your financial information, client data or personal private information.
-
Question 3: In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
Spam phishing is the bigger threat for an organization becoming a resource for a distributed denial of service (DDoS) attack. Spam phishing typically utilizes mass email to target as many people as possible, similar in a way to the "shotgun approach," in that more is better. Conversely, spear phishing uses a different strategy by targeting a small number of victims. Spear phishing will often use social engineering to lure employees into clicking a link in an infected email. While both will often use email, the goals are not generally the same. Because spear phishing targets a much smaller number of people than spam, it is usually trying to steal sensitive data, financial information, and other valuable information. Spam phishing operates on a much larger scale because it often recruits the victim's computer into a "zombie," collectively known as a botnet. These computers are then used for DDoS attacks. -
In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
Generally, a spam phishing attack is a form of electronic junk mail sent to users, and it is a very dangerous phishing scam since the attackers may use it to obtain sensitive personal information from victims, like their credit card information or online banking passwords. Different from a traditional spam phishing attack, spear phishing focuses on specific targets like employees or management of an organization. Besides email phishing attacks, spear phishing attackers also often build fake websites carrying viruses or malware; if the specific targets open the fake websites, the virus may copy highly sensitive information from inside the organization, causing significant data leaks and damaging the information assets of the company.
Comparing these two kinds of phishing, spear phishing is the bigger threat to an organization's network. Indeed, spam phishing has a wider reach, but from the perspective of the organization's network, spear phishing may cause more serious damage, because it targets specific victims like management who have the authority to log in to the company's systems. If attackers use spear phishing to successfully obtain access to the company's information systems, all confidential information is under the attackers' monitoring; even worse, they can steal this sensitive information.
-
I think that in the context of being attacked by a DDoS, spear phishing is the bigger threat. A DDoS meant to take down an organization's resources would need to know how to reach servers via IP or how to get past firewalls. Network administrators may be targeted to reveal this sensitive information. A spear phisher may request permission to audit, and the administrator may reveal these resources' locations against normal protocol. Neither spam phishing nor spear phishing would change a DDoS on the organization's public website, as that IP is already known.
Conversely, I think in the context of the organization's network becoming a botnet for a DDoS, spam phishing is the bigger threat. The botnet's goal is to become as large as possible, with as many connected computer resources at its disposal as possible. The most effective botnets barely change normal computer operation, making it difficult to know if you're even infected. Targeting singularly important users in a company is not a strategy aligned with this goal. -
Spear phishing is a highly specialized attack against a specific target or group of targets to collect information or gain access to systems through personalized e-mail messages and social engineering. This is not a random kind of attack: the attacker knows the target's name, email address, and at least a little about the target. It's a more in-depth version of phishing that requires special knowledge about the target.
Spear phishing is an effective method for targeting several industries because the messages appear to come from a trusted source.
Spam is the electronic equivalent of junk mail. It’s annoying and can potentially be very dangerous if part of a larger phishing scam.
I believe that spear phishing is the bigger threat to an organization's network. These attacks have potential consequences such as identity theft, financial fraud or theft of intellectual property. Spear phishing is a real threat: it can bypass normal technical anti-threat barriers and exploits users to infiltrate systems.
Here are solutions to mitigate the Spear phishing attack:
– Consider an extra level of authorization, such as two-step verification.
– Frequently change passwords.
– Employee training.
– Deploy a spam filter that detects viruses and blank senders. -
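On the first point, two-step verification commonly relies on time-based one-time passwords. A sketch of the standard TOTP calculation (RFC 6238), using only the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, now=None, step=30, digits=6):
    """RFC 6238 time-based one-time password using HMAC-SHA1.

    `secret_b32` is the base32-encoded shared secret (the string encoded
    in the QR code an authenticator app scans).
    """
    key = base64.b32decode(secret_b32)
    # Counter = number of `step`-second intervals since the Unix epoch.
    counter = int((now if now is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server and the user's device both compute this from the shared secret and the clock, so a phished password alone is not enough to log in; the attacker would also need the current code, which expires every 30 seconds.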
Noah,
Its interesting, I didn’t think the way you explained! True its based on reason for attacks or what is the goal of attackers.
If attacker aims botnet, spam phishing as it sent out in mass quantities can be a bigger treat, and if attacker aims specific information spear phishing as its target specific group of organization is a bigger treat.
-
In the contexts of being attacked by or unwittingly becoming a resource for distributed denial of service (DDoS), which is a bigger threat to an organization’s network and computer resources and why: Spam phishing or Spear phishing?
Spear phishing would be a bigger threat to an organization’s network because it is more targeted towards the victim and is more likely to be effective.
Spam phishing is a smaller threat and less effective because spam emails are usually easily detected. Users who receive spam emails can usually tell they are spam because they tend to be random and unrelated to anything. Spam emails can also be caught by spam filtering programs in the email service, so they rarely even appear in employee inboxes.
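As a toy illustration of how such filtering can work, here is a tiny weighted-keyword scorer. The keyword list, weights and threshold are made up for the example; real filters learn their weights statistically (e.g. naive Bayes) rather than hard-coding them:

```python
# Hypothetical spam indicators and weights, for illustration only.
SPAM_WEIGHTS = {"winner": 2, "free": 1, "urgent": 1, "lottery": 3}

def spam_score(subject, body, threshold=3):
    """Return (score, is_spam) from a simple weighted keyword count."""
    text = f"{subject} {body}".lower()
    score = sum(w for word, w in SPAM_WEIGHTS.items() if word in text)
    return score, score >= threshold
```

This also shows why spear phishing slips through: a short, personalized message about a real project contains none of the generic trigger words, so keyword-style rules score it as clean.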
Spear phishing emails, on the other hand, target specific employees. The emails are uniquely crafted for that employee, such as appearing to be sent by someone the employee knows or relating to something the employee did recently. The email may also use social engineering to get the employee to divulge information. Humans are susceptible to these kinds of emails because we tend to assume good intent. When we receive an email that appears to be from someone we know, our initial instinct is to accept it and not be suspicious of it.
-
Good example of Dyn suffering from the DDoS attack. In this case, the cyberattack brought the network and systems down for a couple of hours, which damaged the company's information assets and also its reputation. Just out of curiosity: in Dyn's case, was it spear phishing that allowed the attacker to gain access by stealing PII from an administrator?
-
Hey everyone,
I see your point and agree that spear phishing is the more dangerous of the two. However, I interpreted the question as: if you were being attacked by a botnet, or an attacker is attempting to make your computer part of a botnet, which is more of a concern, spam phishing or spear phishing? In that case, I believe it will most likely be a spam phishing attack and not a spear phishing attack. I suppose which phishing attempt one should focus on depends on the exact risk.
If I am worried about a DDoS, then my concern would be a spam phishing attack, or just a spam attack in general. A DDoS is caused by many requests coming in from multiple computer locations. If I were to get millions of spam emails in a matter of seconds, then my email system would go offline. Therefore, for the DDoS risk, I would be more concerned with spam phishing, or just enough spam to knock out my services.
If I am worried about my computer or computer resources becoming part of a botnet, I would still be more concerned with spam phishing. The reason is that those looking to increase their botnet aren't particularly keen on whose computer resources they recruit. My grandma's laptop works just as well in a botnet as the laptop of a Fortune 500 CEO, so to increase their numbers it is more quantity over quality. Therefore, botnet owners will likely use spam phishing as a method for growing their botnet, and to address this risk I would focus on spam phishing. However, Abhay brought up a good point that spear phishing shouldn't be 100% ruled out as a method for acquiring computer resources for a botnet.
If I am worried about social engineering, then I would be more concerned with spear phishing. Since spear phishing targets only a couple of individuals, the motive behind such attempts isn't so much to get access to computer resources for a botnet, but more to get access to an organization's data, such as PII or sensitive business information like patents. Since spear phishing is so particular and targets a small group of individuals, as opposed to a spam phishing attempt which can target millions, the chance of success is greater.
-
Hi Fangzhou,
Great post. I agree with you that spam phishing reaches more widely because it targets a massive number of email users. It is inexpensive, quick and convenient, but the success rate is lower compared to spear phishing. Compared to spam, spear phishing may take a long time to tailor the email for specific targets!
-
Hi, Yuming
You made great points. One thing I want to point out is that spam messages often contain images that the sender can track. When you open the email, the images load and the spammer can tell that your email address works, which could result in even more spam. What we can do as email users to avoid this is turn off email images.
With phishing scams, people should use their best judgement.
– never send someone money just because you’ve received an email request.
– never download email attachments you weren’t expecting because they might contain malware that could damage your computer and steal your personal information. -
Hi, Fangzhou
You are absolutely right that spam phishing reaches more widely. I found some statistics online that were very interesting. In one campaign, 1,000,000 messages were sent through a spam phishing attack; the open rate was 3% and the click-through rate was 5%. However, only 1,000 messages were sent through spear phishing, and the open rate was 70% with a click-through rate of 50%. You can tell there is a huge difference! Far more people open their email when it is sent through spear phishing.
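Working those figures through (the rates are the ones quoted above, not independently verified):

```python
def expected_clicks(messages, open_rate, click_rate):
    """Expected number of recipients who open the message and then click."""
    return messages * open_rate * click_rate

spam_clicks = expected_clicks(1_000_000, 0.03, 0.05)   # roughly 1,500
spear_clicks = expected_clicks(1_000, 0.70, 0.50)      # roughly 350
```

So spam still yields more raw clicks, but the per-message yield of spear phishing (0.35) is over 200 times that of spam (0.0015), and each spear phishing victim was chosen for the access they hold.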
-
Dyn definitely was the victim of a DDoS attack, but were they also the victim of spear or spam phishing? No question that others were certainly victims, because so many connected devices were infected and turned into bots. But Dyn simply suffered from an inundation of traffic from this botnet, which may only be indirectly related to the phishing. It's definitely very relevant to the attack, but I'm not sure that either directly affected Dyn, from what I've seen in the news.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
-
“Major DDoS attack on Dyn DNS knocks Spotify, Twitter, Github, Etsy, and more offline”
Some popular websites, including Twitter, Etsy and Reddit, experienced disruptions when hackers launched a large cyber-attack. The cause appears to be an outage at a DNS provider called Dyn. On Friday morning, domain host company Dyn confirmed that the attack started at 7:10 a.m. and lasted more than two hours, shutting down major websites and services across the East Coast; services were restored by 9:30 a.m.
Dyn further said, “Some customers may experience increased DNS query latency and delayed zone propagation during this time. Updates will be posted as information becomes available.”
Domain Name Systems are like the Internet's phone directory. When a user enters a web address in the URL bar, DNS routes the request to that website and ensures the user is sent to the right address. Since Dyn suffered an outage, many users trying to access the affected webpages experienced disruptions.
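That phone-directory lookup is visible from any program. A quick sketch using Python's standard resolver (the hostname is just an example):

```python
import socket

def resolve(hostname):
    """Ask the system's DNS resolver for the IPv4 addresses behind a name."""
    # getaddrinfo performs the same lookup a browser triggers for a URL's host.
    infos = socket.getaddrinfo(hostname, 80,
                               family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] is the IP.
    return sorted({sockaddr[0] for *_, sockaddr in infos})

# If the authoritative DNS provider (a company like Dyn) cannot answer,
# this call raises socket.gaierror, and every service relying on the
# name effectively goes dark even though its servers are still up.
```

That failure mode is exactly what the Dyn outage looked like to users: the websites themselves were running, but their names could not be resolved.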
-
Hackers attacked Dyn's DNS services with a DDoS (Distributed Denial-of-Service) attack and shut down internet access for people along the East Coast. People had trouble accessing Twitter, Spotify, Netflix, Amazon and/or Reddit.
Dyn confirmed the attack and began monitoring and mitigating the DDoS attack on its DNS infrastructure.
A DDoS attack is when a hacker directs a massive number of requests at a machine to take it offline.
-
Major DDoS Attack Causes U.S. Outages on Twitter, Reddit, Others
This week I found news about a large distributed denial of service (DDoS) attack directed at DNS and internet performance management company Dyn, which caused website outages for a number of its customers, including Twitter, Reddit and Spotify, affecting mostly the eastern US. Dyn took immediate action and resolved the denial of service problem in two hours.
A DDoS attack is an attempt to make an online service unavailable by overwhelming it with a massive amount of traffic from multiple sources. Hackers usually target a wide variety of important resources, from banks to news websites, causing major damage by shutting servers down so people cannot publish or access important information. In the case of the attack on Dyn, this affected the company's ability to manage DNS queries and connect traffic to customers' proper IP addresses at normal speeds.
Due to the weak protection of Internet of Things devices, the number of DDoS attacks has increased rapidly. These devices include poorly secured Internet-based security cameras, digital video recorders (DVRs) and Internet routers.
Source: http://www.toptechnews.com/article/index.php?story_id=111003TVBR2I
-
Researchers Find Dangerous Intel Chip Flaw
Researchers at the State University of New York and the University of California discovered a flaw in Intel chips which allows attackers to bypass ASLR (address space layout randomization, which defends against a range of attacks by randomizing the locations of code in computer memory). The researchers were able to launch a so-called "side channel" attack on a Haswell chip's branch target buffer (BTB), which resides in the branch predictor part of the CPU. Doing so enabled them to work out where certain pieces of code were located, effectively undermining ASLR. However, Alfredo Pironti, a managing consultant at ethical hacking firm IOActive, noted that these attacks are often more expensive and time-consuming to conduct compared to classical software attacks.
Theoretically, this Intel chip flaw is very dangerous since it makes a range of cyber-attacks far more effective across Windows, Linux, OS X, Android and iOS. Practically, since it is expensive and time-consuming, and requires stricter conditions such as running specific software on the victim's machine and being able to collect CPU metrics, the hack will be difficult to conduct. Still, we hope Intel can fix the problem for security reasons.
Link: http://www.infosecurity-magazine.com/news/researchers-find-dangerous-intel/
-
Large DDoS attacks cause outages at Twitter, Spotify, and other sites
Several waves of major cyberattacks against an internet directory service knocked dozens of popular websites offline today, with outages continuing into the afternoon.
Twitter, SoundCloud, Spotify, Shopify, and other websites have been inaccessible to many users throughout the day. The outages are the result of several distributed denial of service (DDoS) attacks on the DNS provider Dyn, the company confirmed. The outages were first reported on Hacker News.
The DDoS attacks on Dyn began this morning. Service was temporarily restored around 9:30 a.m. ET, but a second attack began around noon, knocking sites offline once again. The DNS provider said engineers were working on "mitigating" the issue, but a third wave began around 4:30 p.m. ET before being resolved roughly two hours later.
"The complexity of the attacks is making it complicated for us. It's so distributed, coming from tens of millions of source IP addresses around the world. What they're doing is moving around the world with each attack," Dyn's York explained. York said that the DDoS attack initially targeted the company's data centers on the East Coast, then moved to international data centers. The attack contained "specific nuance to parts of our infrastructure," he added.
-
Millions of Indian debit cards ‘compromised’ in security breach
On Wednesday, India's largest bank, State Bank of India, said it had blocked close to 600,000 debit cards following a malware-related security breach in a non-SBI ATM network. Several other banks, such as Axis Bank, HDFC Bank and ICICI Bank, have admitted being hit by similar cyber attacks, forcing Indian banks to either replace or ask users to change the security codes of as many as 3.2 million debit cards over the last two months.
On September 5, some banks came across fraudulent transactions in which debit cards were used in China and the US while the customers were actually in India. -
“Regulators to Toughen Cybersecurity Standards at Nation’s Biggest Banks”
The article discusses the initial framework US regulators recently unveiled to address cybersecurity at the nation's biggest banks. The plan was developed by the Federal Reserve and the Federal Deposit Insurance Corp (FDIC) and will target US and foreign banks operating in the US with $50 billion or more in managed assets. The most stringent requirements are reserved for institutions that pose a systemic risk to the economy and financial system. These banks would be required to demonstrate the ability to return core operations online within two hours of a cyber attack or IT failure. The financial industry is heavily dependent on information systems and is increasingly interconnected, which can amplify the impact of a single event.
-
The article I read is about a massive ATM attack in India which hit 3.2 million accounts from multiple banks and financial platforms, probably one of the biggest data breaches to date. The majority of the stolen debit cards are powered by VISA or Mastercard.
Hackers used malware to compromise the payment services platform used to power the country's ATMs, point-of-sale (PoS) machines and other financial transactions. As of now, the hackers' identity is still a mystery; however, it looks like affected customers have observed unauthorized transactions made with their cards in various locations in China. The Payments Council of India has ordered a forensic audit of the Indian bank servers to measure the damage and investigate the origin of the cyber attack.
(I'm not sure if you are aware, but cards that use magnetic stripes are easier to clone, whereas banks using chip-and-PIN cards store your data in encrypted form and only transmit a unique code (a one-time-use token) for every transaction, making these cards more secure.)
-
3.2 Million Debit Cards Hacked in India
In what has been termed one of the biggest data breaches in the banking industry in India, 3.2 million debit card details have been stolen. These debit cards are understood to have been used at ATMs that are suspected to have exposed card and PIN details to malware at the back end. A forensic audit has been ordered by the Payments Council of India on Indian bank servers and systems to detect the origin of the fraud that might have hit customer accounts. Indian banks, stung by the biggest financial data breach to hit the industry, are trying to contain the damage and compensate the affected account holders.
According to the National Payments Corporation of India, 90 million ATMs have been compromised and at least 641 customers across 189 banks have been hit. As per NPCI, the total amount lost due to fraudulent transactions on hacked debit cards is Rs. 1.3 crore. The malware used was malicious software in the form of viruses, worms, Trojans, ransomware and spyware which impacted the computer systems at ATMs. The Reserve Bank of India has directed the banks to submit a report on this big data theft.
The ATM operator Hitachi Payment Services is under fire, as it is believed that the malware was introduced into the systems due to a lack of testing of the ATM machines, point-of-sale terminals and other services. The company is believed to have installed more than 50,000 ATMs in the country within the last year. Of the total debit cards hit, 2.6 million are said to be on the VISA and Mastercard platforms. As a damage-control exercise, banks have advised customers to change ATM PINs or get their cards replaced.
Some banks have even issued advisory messages telling customers to stop using other banks' ATMs. One of the worst-hit banks has already blocked 6 lakh debit cards and has blocked international transactions that can be conducted without a PIN.
-
Barack Obama Talks AI, Robo-Cars, and the Future of the World
President Obama had an interview in November’s issue of Wired and he made a few interesting points about cyber security.
OBAMA: “Traditionally, when we think about security and protecting ourselves, we think in terms of armor or walls. Increasingly, I find myself looking to medicine and thinking about viruses, antibodies. Part of the reason why cyber security continues to be so hard is because the threat is not a bunch of tanks rolling at you but a whole bunch of systems that may be vulnerable to a worm getting in there. It means that we’ve got to think differently about our security, make different investments that may not be as sexy but may actually end up being as important as anything.”
While the above vastly simplifies what can be a very complicated set of issues, I do appreciate the “armor vs. medicine” metaphor he uses.
-
Yeah, no problem! Just the irony though: I was seriously complaining about how badly Chrome was acting, and then I discovered this article. Additionally, yes, I remember that, haha, using hackers for profit sounds like a booming business! Thanks for the link. I wish I could do the bug bounty program, especially after looking at the payouts; that would definitely be going towards student loans.
-
The article I read for this week is called "Massive DDoS Attack Knocks Out Twitter, Box, Spotify." It says the DDoS attack targeted New Hampshire-based company Dyn and its managed DNS infrastructure, and began early Friday morning. The company originally said that it restored operations around 9:30 a.m. EST, but a second attack followed that knocked Twitter and others offline again for some users, especially those on the East Coast of the US. The attack is ongoing and is causing outages and slowness for many of Dyn's customers. The US Department of Homeland Security is investigating, but so far no one knows who is behind the attacks.
http://www.infosecurity-magazine.com/news/massive-ddos-attack-knocks-out/
-
“Martin Gottesfeld, Anonymous hacktivist, charged over hospital DDoS attacks”
This article was not only interesting because it's part of this week's discussion, but also because it's about hacktivists and their moral compass.
Martin Gottesfeld is being charged with computer hacking crimes related to a DDoS attack on Boston Children's Hospital and the Wayside Youth and Family Support Network. He overloaded their computer systems with illegitimate traffic and kept them down for over a week, causing the hospital to lose upwards of $600,000 between recovery efforts and lost fundraiser income.
Mr. Gottesfeld considers himself a hacktivist, fighting for the human rights of those in the "troubled teen industry." These institutions are involved in the treatment of adolescents with emotional, psychological, and medical problems. He admitted to waging the DDoS attack because of the alleged mistreatment suffered by Justina Pelletier.
Law aside, do you think what he did was right? Should he have taken action into his own hands?
-
http://www.databreachtoday.com/2-million-hipaa-penalty-after-patient-data-exposed-on-web-a-9465
In February 2012, St. Joseph Health reported that its electronic records containing PHI had been publicly accessible from February 1, 2011 to February 13, 2012.
These records were stored on a server with default settings and a default password that allowed anyone to access the data over the plain internet.
After installing this server, SJH never verified its security controls. They had hired an external party to check for vulnerabilities; however, that party worked in a patchwork fashion and this vulnerability was missed. As a result, the risk analysis conducted did not meet HIPAA standards.
In this case the data did not include SSNs, addresses or financial data, and there is no indication that the information was used by unauthorized persons.
The agency will now continue its increased enforcement activity and oversee resolution agreements. The agency is in Phase 2 of the HIPAA audits, which could result in enforcement activity in certain circumstances -
Cybersecurity Expert Saket Modi Will Make You Afraid To Own A Smartphone
Saket Modi, cofounder of Lucideus Tech, asked an audience at the 2016 FORBES Under 30 Summit in Boston: “How many of you think you are smart enough to use your smartphone?” He asked for a volunteer to briefly hand over a smartphone and quickly got one; the phone was protected with a passcode. He poked a few buttons on the phone and handed it back within half a minute.
On the big screen behind him, he pulled up a long list and asked the owner, “Is this the list of all the calls you’ve made, up here?” And yes, it was. He then did the same with the phone’s text messages, contacts, current location, and GPS history. The only thing he skipped was the phone’s browsing history.
He then told the audience, “All of this was possible with this phone in my hands for 25 seconds. And the best part of this entire thing is: what I just did is not even a hack.” In fact, he hadn’t installed any software on the phone. He had simply run a script to collect permissions, permissions most phone owners have already granted to Facebook, Gmail, and other apps without a second thought.
“Destruction of all personal privacy within 25 seconds is just one facet of the new hacking landscape. Ransomware is increasingly being used to extract important information rather than just cash. Hackers are getting paid to hack specific targets, both public and private. Scripts can crawl a target’s Facebook page, or private messages, or even deleted messages, to identify the issues most important to the target, then use them against the target,” stated Modi.
-
I read the article “75% of Orgs Lack Cybersecurity Expertise”. According to this article, a study from Tripwire found that 66% of respondents faced increased security risks due to this workforce shortage; and 69% have attempted to use technology solutions to fill the gap. Moreover, a full 72% said they had challenges hiring skilled cybersecurity experts; half said their organizations do not have an effective program to recruit, train and retain skilled cybersecurity experts.
According to Tripwire’s study, only 25% of the respondents were confident their organizations have the number of skilled cybersecurity experts needed to effectively detect and respond to a serious cybersecurity breach.
Indeed, some managers believe that investment in cybersecurity is expensive and that cyber-attacks may never occur. As a result, these organizations often lack cybersecurity expertise and adequate protection of their information assets. This can lead to significant data leaks and allow cyber attackers access to the company’s information systems.
Source: http://www.infosecurity-magazine.com/news/75-of-orgs-lack-cybersecurity/
-
On October 21st I got an email from the Big Interview website, of which I am a member, saying that global internet outages were affecting their site. I was curious about the incident and searched for details.
On October 21st, a ton of websites and services, including Spotify and Twitter, were unreachable because of a distributed denial-of-service (DDoS) attack on Dyn, a major DNS provider. Details of how the attack happened remain vague, but what is clear is that hacks are becoming increasingly sophisticated.
Some of the speculation was political, such as the idea that it was an attempt to take down the internet so that people couldn’t read the leaked Clinton emails on WikiLeaks.
According to the article, we are getting into a serious level of DDoS attacks, and the internet is becoming more vulnerable.
The list of websites readers reported trouble accessing includes CNN, Etsy, Spotify, Starbucks rewards/gift cards, Netflix, and Kayak. It was a significant enough attack that even the FBI is investigating it.
http://gizmodo.com/this-is-probably-why-half-the-internet-shut-down-today-1788062835
http://gizmodo.com/the-fbi-and-homeland-security-are-investigating-todays-1788079688
-
Loi,
It’s an interesting article; it explains the details of the DDoS attack and also points out the attacker’s rationalization.
-
Linux Backdoor Trojan Doesn’t Require Root Privileges
A newly observed Linux backdoor Trojan can perform its nefarious activities without root access, by using the privileges of the current user, Doctor Web security researchers have discovered.
Dubbed Linux.BackDoor.FakeFile.1, the malware is being distributed as an archived PDF, Microsoft Office, or OpenOffice file. As soon as the file is launched, the Trojan saves itself to the user’s home directory, in the .gconf/apps/gnome-common/gnome-common folder, searches for a hidden file that matches its name, and replaces that file with itself. If the malware doesn’t find the hidden file, it creates it and then opens it using gedit. After checking and confirming that the Linux distribution on the system is not openSUSE, the Trojan retrieves the configuration data from its file and decrypts it.
The malicious program then launches two threads: one to share information with the command and control (C&C) server, and the other to monitor the duration of the connection. If the Trojan doesn’t receive instructions within 30 minutes, the connection is terminated.
On a compromised system, the backdoor can execute a multitude of commands: send the C&C server the quantity of messages transferred during the session or a list of the contents of a specified folder; send a specified file or a folder with all its contents; delete a directory; delete a file; rename a folder; remove itself; launch a new copy of a process; and close the current session.
Other operations supported by the malware include: establish backconnect and run sh; terminate backconnect; open the executable file of the process for writing; close the process file; create a file or folder; write the transmitted values to a file; obtain the names, permissions, sizes, and creation dates of files in the specified directory; and set 777 privileges on the specified file. The backdoor can also terminate its own operation upon command.
Source: http://www.securityweek.com/linux-backdoor-doesnt-require-root-privileges
-
The article I read, titled ‘Going easy on cyber security could turn India’s technology growth story into a nightmare’, was about the debit card data breach that left about 3.2 million Indian customers vulnerable. This breach was the biggest attack on the country’s banking system to date, and it raises concern about the “afterthought” cyber strategy that India currently employs. It is apparent to India’s critics, citizens, and government that companies currently look at cyber security more as a “good to have” rather than a need or requirement. The sustainable solution suggested in the article is a public-private partnership on cyber crime, with governments and private players locking arms over issues such as data ownership, liability, audit frequency, and others. Regardless, the Indian banking system data breach is a wake-up call for India and a reminder to the rest of the world to increase their cyber security investments and strategy. The bottom line for India is that it needs to take ownership and prepare a comprehensive national strategy for cyber defense.
-
Pretty crazy that someone could successfully outsmart these dominating, industry-leading companies. Many of these companies pride themselves on having little downtime. It just shows that the cyber industry has experts on both sides (good vs. bad). It is crucial for these companies to invest more in securing their systems, because incidents like this could cause them to lose customers. I left Spotify over spotty service and a terrible application. Amazon holds data that is surely attractive to steal, so I can’t say it enough: these companies need to invest in their cyber strategy!
-
It is good that some companies are catching up on their cyber security! Many companies turn a blind eye and treat an effective cyber strategy as a “good to have” rather than a “must have”! I think it is interesting that they brought in another company to scan their vulnerabilities; that may be risky. Hopefully they are learning from the company they brought in and gaining expertise in this field. I think the scanning process should be in-house, given how much that data needs to be protected.
-
Chinese Manufacturer to Partially Recall IoT Components Involved in DDoS Attack
Chinese manufacturer Xiongmai Technologies has promised to recall or patch some of the components and circuit boards it manufactures, including CCTV cameras, webcams, and digital video recorders, which attackers compromised and used to help power a massive internet-of-things botnet that overwhelmed DNS provider Dyn’s systems on Oct. 21 via distributed denial-of-service attacks.
Security intelligence firm Flashpoint said that the massive DDoS attack involved IoT devices infected with Mirai malware, which overwhelmed the DNS service and prevented internet users from reaching many sites. Flashpoint adds that at least some of those devices were built by or used components from Xiongmai, even if they were not labeled as such. Xiongmai has acknowledged this and agreed to replace the devices involved in the attack.
http://www.databreachtoday.com/chinese-manufacturer-promises-partial-iot-component-recall-a-9478
-
That’s an interesting post, Yulun. DDoS attacks can now target social media, which may cause widespread effects. Since social media platforms like Twitter hold large amounts of users’ personal information, if their servers are hacked, huge numbers of users may be affected. Therefore, the cyber security of social media is truly very important.
-
Morgan Stanley’s Hong Kong division, Morgan Stanley Hong Kong Securities Ltd., has been fined HK$18.5 million ($2.4 million) by Hong Kong’s securities regulator, the Securities and Futures Commission (SFC), for internal control failures.
Continued Internal Control Failures
The breaches of Hong Kong’s Code of Conduct included Morgan Stanley’s failure to avoid conflicts of interest between principal and agency trading, failure to properly disclose its short-selling orders, and maintenance of unsystematic documentation of its electronic trading systems. The breaches are suspected to have occurred between 2013 and 2016.
In Jun 2013, during an investigation by the SFC on the irregular price movements of two stocks, it was discovered that Morgan Stanley did not have a separation between its discretionary order dealers and principal account dealers, resulting in a potential conflict of interest. Notably, the separation finally took place in Oct 2014.
Further, the bank failed to disclose 29,000 short-selling orders between Jan 2014 and Nov 2014. Moreover, in Feb 2015, position limits were breached, resulting in a stock option contract exceeding the limit by more than 300 contracts on a trading day.
Additionally, between Jun 2012 and Mar 2016, Morgan Stanley failed to follow the instructions of an asset manager to report large open positions on a delegated basis.
Read more: https://www.zacks.com/stock/news/229220/morgan-stanley-fined-24m-on-internal-control-failures
-
Magazine Editor Left Red-Faced After ‘Reply All’ Gaffe
The president and editor of a popular American financial magazine made a huge blunder when, intending to forward a confidential email, he clicked “reply all” instead. The content of the email included a discussion about a buyout and staff layoffs. The email was sent to the entire Wall Street Journal newsroom.
http://www.infosecurity-magazine.com/news/magazine-editor-left-red-faced/
-
This article is a little older, but I thought it was quite interesting. It concerns a DDoS attack on DNSimple, a managed DNS provider for a number of extremely popular websites such as Pinterest, Canopy, and Exposure. In 2014, a massive distributed denial-of-service attack was launched on one of the most critical business days of the year for online retailers: Cyber Monday. The article goes into detail about whose responsibility it is to build fault tolerance or redundancy into critical network services, such as DNS, even when the function is outsourced to a managed service provider. Outsourcing an IT function does not mean you transfer all the associated risk to the provider; it is still the client’s bottom line that is impacted if the online marketplace becomes unavailable on a critical shopping day for any reason, including an act of online terror via DDoS attack. People do not realize that DNSimple is providing the DNS services; they just know they tried to get to Pinterest to find some great online deals for their Christmas shopping and weren’t able to access the website. That creates significant damage to the brand’s reputation.
The author goes on to explain that by outsourcing the DNS function to a single provider, companies create a single point of failure, which should never be the case for such a critical IT service. One decision that contributes to overwhelming the servers is setting TTLs (time to live), the value that defines how long resolvers keep cached records for quicker responses to web surfers, too short. With a short TTL (under 60 seconds), every lookup goes back to the servers, which is exactly what a DDoS attack exploits: it overloads a server’s resources by sending more requests than the server and/or network can handle. Setting TTLs to a full week instead of the 60-second guideline would mean resolvers don’t need to query the servers for a full week, and if there is an outage, it would only be noticed when the next TTL expiry forced a fresh lookup. Obviously, an outage only has an impact when the end user knows about it.
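To make the TTL trade-off concrete, here is a back-of-the-envelope sketch (the numbers are illustrative, not from the article) of how often a single caching resolver must go back to the authoritative nameservers under constant demand:

```python
# Back-of-the-envelope sketch: under constant demand, a caching resolver
# re-queries the authoritative nameserver roughly once per TTL window.

SECONDS_PER_DAY = 24 * 60 * 60

def authoritative_queries_per_day(ttl_seconds):
    return SECONDS_PER_DAY / ttl_seconds

print(authoritative_queries_per_day(60))                   # 1440.0 per resolver
print(authoritative_queries_per_day(7 * SECONDS_PER_DAY))  # ~0.14 per resolver
```

The article’s point follows directly: a week-long TTL cuts authoritative load by roughly four orders of magnitude versus a 60-second TTL, at the cost of slower detection of changes and outages.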
The second recommendation is a disaster recovery technique used in WAN design all the time: use redundant carriers for the service. The same way you would have WAN connections from both Verizon and AT&T so that if one went down you could re-route traffic to the other, he says best practice should be to use nameservers from different DNS providers. As a general rule of thumb, he recommends 4-6 redundant nameservers when trying to achieve a 100% SLA on availability. The only way to have carrier- or service-redundant nameservers through multiple DNS providers is to have editable NS records.
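The multi-provider guideline above can be sketched as a simple check. The nameserver hostnames and the crude “registrable domain = provider” heuristic below are invented for illustration only:

```python
# Hypothetical sketch of the 4-6 redundant nameserver guideline: verify
# that a zone's delegated nameservers span more than one DNS provider.

def provider_of(ns_host):
    # Crude heuristic: treat the registrable domain as the provider.
    return ".".join(ns_host.rstrip(".").split(".")[-2:])

def is_carrier_redundant(nameservers, min_servers=4):
    providers = {provider_of(ns) for ns in nameservers}
    return len(nameservers) >= min_servers and len(providers) >= 2

single_provider = ["ns1.dnsimple.com", "ns2.dnsimple.com",
                   "ns3.dnsimple.com", "ns4.dnsimple.com"]
multi_provider = ["ns1.dnsimple.com", "ns2.dnsimple.com",
                  "ns1.otherdns.net", "ns2.otherdns.net"]
print(is_carrier_redundant(single_provider))  # False: one provider, single point of failure
print(is_carrier_redundant(multi_provider))   # True: survives one provider's outage
```

Four servers under one provider still fail together, which is exactly the single point of failure the author warns about.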
I thought this was a great example of the type of exposure you can easily overlook when planning your network design and services. Not only did the author outline how real the threats are, he also described readily available ways to mitigate the associated risks and achieve acceptable availability SLAs for mission-critical websites.
-
Massive DDoS Attack Knocks Out Twitter, Box, Spotify
The article I read discussed the DDoS attack that targeted New Hampshire-based company Dyn and its managed DNS infrastructure. The company originally said it restored operations around 9:30 a.m. Eastern Time. However, a second attack followed that knocked Twitter and others offline again for some users, especially those on the East Coast of the US. The attack is ongoing and is causing outages and slowness for many of Dyn’s customers.
The internet has become very vulnerable; an attack on one provider can lead to outages for many others. An attacker seeking to disrupt services to multiple websites may succeed simply by hitting one service provider, such as a DNS provider or a provider of other internet infrastructure.
Mark Chaplain, VP EMEA for Ixia, suggested that organizations can mitigate the impact of these attacks by reducing their attack surface: blocking web traffic from the large number of IP addresses globally that are known to be bot-infected or known sources of malware and DoS attacks.
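A toy sketch of the attack-surface-reduction idea Chaplain describes, with made-up addresses and a made-up blocklist:

```python
# Toy illustration: drop traffic from IPs already known to be
# bot-infected or DoS sources before it reaches the application.
import ipaddress

BLOCKLIST = {
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical bot-infected range
    ipaddress.ip_network("203.0.113.7/32"),   # hypothetical DoS source
}

def is_blocked(source_ip):
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in BLOCKLIST)

incoming = ["198.51.100.23", "192.0.2.10", "203.0.113.7"]
allowed = [ip for ip in incoming if not is_blocked(ip)]
print(allowed)  # only 192.0.2.10 gets through
```

In practice this filtering happens in firewalls or upstream scrubbing services fed by threat-intelligence feeds, not in application code; the sketch just shows the principle.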
Source:
http://www.infosecurity-magazine.com/news/massive-ddos-attack-knocks-out/
-
Loi,
This is an interesting question that we can’t easily answer. He has his reasons that justify his actions, but if we encourage that kind of behavior we might end up in a chaotic world.
-
“American vigilante hacker sends Russia a warning”
It was recently announced, and even discussed in the debate, that US intelligence has identified Russia as being behind the attacks on the DNC and other targets. A vigilante known as “The Jester” (or th3j35t3r in leet speak) decided to take it upon himself to retaliate against a Russian target. He vandalized the Russian Foreign Ministry’s website with a message that read, “Comrades! We interrupt regular scheduled Russian Foreign Affairs Website programming to bring you the following important message,” and continued, “Knock it off. You may be able to push around nations around you, but this is America. Nobody is impressed.” The website belongs to Russia’s equivalent of the US Department of State, so this message was visible to the international community. The Jester added that Putin’s denial is transparent and that he wants Putin to go back to his “room.” The Jester spoke willingly with CNN and said that the recent massive DDoS also spurred him to action, although no culprit has been publicly acknowledged yet. The Jester said he used a code injection technique to modify the website. Because the attack started on the weekend, the message stayed up for a good portion of it.
http://money.cnn.com/2016/10/22/technology/russian-foreign-ministry-hacked/index.html
-
When I buy a new piece of equipment, I like when it has a randomly generated password on it, usually in the form of a sticker. Xiongmai should have been doing something like this from the start. Since for some devices Xiongmai only supplied the circuit boards and the brand isn’t listed as Xiongmai, a lot of them won’t be recalled. Without the ability to push updates to these IoT devices, the Mirai botnet will persist for a long time, as many users don’t know their devices are infected.
-
For some bigger companies, a long TTL may reduce the ability of round-robin DNS load balancers to distribute web traffic; it’s part of what these big companies pay for. Medium-sized companies, though, should be setting longer TTLs.
I like that the article covers 2014’s DNS attack, as it shows that the internet decided to just absorb or reduce the risk instead of mitigating it entirely. A lot of people won’t notice that a big portion of the internet was down, so the reputation loss falls on each individual site. Since 2014 the mitigation services have become massive companies, but the botnets have also grown in size; it’s a modern arms race.
-
Today what I am sharing is about data breaches that result from stolen electronic devices. It’s easier to steal a laptop than to hack a database. What would a thief do to break into your stolen device? Two things work in their favor:
1. Physical access to the system. The most secure server in the world is rendered largely insecure once a hacker can stand in front of it with a keyboard and monitor. A major portion of server security is protecting it from physical access.
2. Time. If they’ve taken it, they have all the time in the world to try whatever they want, whenever they want.
So the steps you take to protect your data should be designed to make an attack harder and less worth the effort. For the average home user’s laptop, if there is hard disk encryption and other protections, the likelihood of a thief getting something worth the time investment is low, and they are much more likely to just wipe the drive and hock it rather than hack it.
-
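One concrete form of the “make it harder and less worth it” idea in the post above is deriving the disk encryption key from a passphrase with a deliberately slow key-derivation function, so every password guess costs the thief real CPU time. A minimal sketch using Python’s standard library (the parameters are illustrative):

```python
# Minimal sketch: a slow KDF makes each password guess expensive for a
# thief who has unlimited time alone with the stolen drive.
import hashlib
import os

def derive_key(passphrase, salt, iterations=600_000):
    # PBKDF2-HMAC-SHA256; 'iterations' tunes how costly each guess is.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)  # stored next to the ciphertext; it need not be secret
key = derive_key("correct horse battery staple", salt)
print(len(key))  # 32 bytes, e.g. suitable as an AES-256 key
```

Real full-disk encryption tools (LUKS, BitLocker, FileVault) apply the same principle with their own KDFs and hardware-backed key storage; this sketch only shows why guessing becomes uneconomical.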
The very nature of a DDoS attack is to aggregate many innocuous flows into one large and dangerous flow; the essence of the attack is to overload the resources of the target. This means we need to master a new skill: managing a network in overload. This is a problem long faced by the military, since their networks are under active attack by an enemy. Part of the solution is to have clear technical “performance contracts” between supply and demand at ingress and traffic-exchange points. These not only specify a floor on supply quality, but also impose a ceiling on demand.
source: http://www.circleid.com/posts/20161024_internet_needs_a_security_and_performance_upgrade/
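A ceiling on demand at an ingress point is often implemented as a token bucket. The sketch below is a minimal illustration of that idea, not anything from the article; the rates and numbers are made up:

```python
# Minimal token-bucket sketch of a demand ceiling: an ingress point
# admits traffic only up to an agreed rate, shedding the excess instead
# of collapsing under overload.

class TokenBucket:
    def __init__(self, rate_per_s, burst):
        self.rate = rate_per_s   # agreed ceiling on sustained demand
        self.capacity = burst    # short bursts allowed up to this size
        self.tokens = burst

    def tick(self, seconds):
        # Refill tokens over time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def admit(self, cost=1.0):
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # over the ceiling: shed this request

bucket = TokenBucket(rate_per_s=100, burst=10)
admitted = sum(bucket.admit() for _ in range(50))  # an instantaneous 50-request burst
print(admitted)  # only the 10-token burst allowance gets through
```

The “performance contract” framing maps onto the two parameters: the sustained rate is the agreed ceiling, and the burst size bounds how far instantaneous demand may exceed it.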
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Presentation: Slides in PDF format
Presentation: Video
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
1. Are the terms Business Continuity Plan (BCP) and Disaster Recovery Plan (DRP) synonyms or are they different? If they are different, what are the differences?
2. Is it practical to conduct a thorough test […]
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Is it practical to conduct a thorough test of a Business Continuity Plan (BCP)? Why might it not be practical? If it is not practical, what alternative ways can you recommend for testing a BCP?
-
For the “health” of the business, it is very practical to do testing of the Business Continuity Plan (BCP). However, testing (by nature) can be disruptive and intrusive. In the “Disaster Recovery and Business Continuity Planning” article by Yusufali Musaji that we read this week, he gives four methods for testing.
They are:
– Hypothetical
– Component
– Module
– Full
In that order, they become progressively more burdensome to implement. However, something that Mr. Musaji writes about in his “Setting Objectives” section stood out to me. He mentions documentation testing and “third-party” evaluations of offsite locations (a backup data center, for example). Documentation seems to be the bane of most companies’ (and employees’) existence, but there is certainly a need for it in BCP. This documentation can serve as the foundation for what to do in the event of outages or changes, as long as it is high quality and people are trained in how to use it. Furthermore, reaching out to a third party to conduct BCP evaluations frees up your own in-house resources, so the company can be more productive while evaluations are conducted.
-
A BCP allows a business to plan in advance what it needs to do to ensure that its key products and services continue to be delivered in case of a disaster. Rather than focusing on resuming full strength after a disaster, a business continuity plan endeavors to ensure that critical operations remain available and that critical services or products continue to be delivered to clients.
It is very difficult, and often impractical, to conduct a full BCP test in an organization. Testing can be a major challenge for many organizations because it requires:
1) Management support,
2) Time for preparation and execution,
3) Funding
4) Structured process from pre-test through test and post-test evaluation
5) Client cooperation, which is often lacking because post-test results tend to suggest solutions that are very costly to implement
The full BCP test verifies that each component under each module is workable and that the complete strategy and objectives are satisfied. So in most cases you wouldn’t be able to perform the full test, but you will be able to test all the parts of business continuity separately.
Component and module testing can be a good alternative. It helps verify the details and procedures of individual processes, and the emphasis can be placed on the more critical components.
-
2. Is it practical to conduct a thorough test of a Business Continuity Plan? Why might it not be practical? If it is not practical, what alternative ways can you recommend for testing a BCP?
It isn’t practical to conduct a thorough test because the plan would affect everyone using the environment. It would be a huge project that wouldn’t make business or financial sense. An alternative way to test the BCP is to conduct it in a pre-defined environment: run a beta test on “dummy” users. Most companies perform beta testing before pushing changes down to end users.
-
A thorough test of a BCP is not practical. There would be great expense, and it can cause disruption to employees. Also, the organization may be outsourcing some of its IT and cannot see inside the provider’s operations.
We can, however, do a lot of disaster recovery testing in general instead of a single thorough test. There are four main categories of testing: hypothetical, component, module, and full. Hypothetical tests prove that there is a plan in case something breaks; this is the fastest method of testing.
Component testing executes a chunk of instructions from the BCP, usually for one feature. This can verify compatibility for things such as tape storage, recovery, or security packages.
Module testing verifies that multiple components will work together after being recovered.
Full testing checks that all the components can be up and running within a certain acceptable amount of time. (Source: “Disaster Recovery and Business Continuity Planning”)
-
Fred, thanks for your post. I don’t agree that thorough testing of the BCP/DRP is impractical or that it doesn’t make business or financial sense. Like you said, testing is required before anything is put into production; some products may even go through hundreds of tests before being approved. So why shouldn’t a BCP/DRP be put through the same rigor? You don’t know if it will work as intended unless you test it. Yes, some aspects may be expensive, like flipping the switch on the cold site, but I think it would be more expensive to find out that the switch doesn’t work when a disastrous event has already occurred.
-
Nice post, Vaibhav. Sometimes it is not possible to have a full operational BCP test, as it can be expensive and also result in a loss of productive time. To conduct a full operational test, the organization should have tested the plan well on paper and locally before completely shutting down operations.
Other alternate methods are
1. Desk-based evaluation/paper test: a paper walkthrough of the plan that studies what would happen if a particular service disruption occurred.
2. Preparedness test: a localized version of the full test wherein actual resources are used to simulate a system crash. It is a cost-effective way to know whether the BC plan is good.
Usually both a paper test and a preparedness test are done before conducting a full operational test, to ensure that operations do not come to a standstill.
Methods to test BCP:
1. Checklist test
A checklist test determines whether the plan is current, whether the backup site is adequate, and whether correct telephone numbers and contact information, emergency forms, copies of the plan, and any supplemental documentation are available.
2. Structured walk-through test
This test is done team-wise or department-wise, with a detailed walk-through of the various components of the plan. The type of disaster and the parts of the plan to be tested are decided by the team leader.
3. Emergency evacuation drill.
A facility evacuation drill should be conducted at least once a year with all employees, to be sure that employees understand how the evacuation should proceed, where to go, whom to reach out to, and how to assist personnel with physical limitations in an emergency.
4. Recovery simulation
In this type of testing, the team uses equipment, facilities, and supplies as they would in a disaster situation provided for in the plan. It checks whether the team is able to carry out critical functions using the recovery and restoration procedures.
-
Loi,
I do think the test should be conducted, but not a thorough test; that would be impractical and too expensive. I say this because I use Merriam-Webster’s definition of thorough: “including every possible part or detail”.
Using this definition, I believe a thorough test shouldn’t be performed.
With that being said, I do believe tests should be conducted, and those processes that require more cumbersome testing should be tested on a smaller scale in a replicated environment rather than the actual one. Only once the replicated-environment test is successful would you move to the actual environment, again on a much smaller scale.
-
This conversation was intriguing, so I decided to ask Bob Deliosi, the tour guide from Sungard, this question. Here is his response, along with some material on Sungard. He is going to send me a link to the mobile truck they use for clients’ BCPs.
Fred, It was my pleasure, all were very interested.
Here is a link to some Sungard AS Youtube stuff. Looking for the Truck video.
Companies typically do not shut down production services for a BCP test.
Typically, they isolate a DR network, and a small team works in that arena testing for a number of hours or days.
-
Here is the link to show how companies responded to Hurricane Sandy, a few short years ago.
Check out the end when they talk about mobile trucks and how companies worked out of the trucks.
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What is the U.S. Federal Government’s Recovery Time Objective (RTO) for IT capabilities needed to support continuity of communications? [Hint: see Homeland Security (2012) Federal Continuity Directive 1 – […]
-
So, I was a little confused in answering this question, because I’m not sure if you’re looking for an “X amount of hours” answer or a more general “this is what RTO is” answer. In either case, the FEMA document you mentioned as a hint has the following in Annex H, Subsection 7:
“Organizations must ensure that the communications capabilities required by this Directive are maintained, are operational as soon as possible following a continuity activation, and in all cases within 12 hours of continuity activation, and are readily available for a period of sustained usage for up to 30 days or until normal operations can be reestablished. Organizations must plan accordingly for essential functions that require uninterrupted communications and IT support, if applicable.”
If I’m reading it correctly, my understanding is that if you work for the Federal Government, you have no more than 12 hours to restore the communications capabilities required for continuity.
Another portion of the Directive discusses teleworking options to support continuity, and that made me think of my own experience. As a contractor for the Federal Government, if there is inclement weather that prevents me from working on-site, I do have permission to work from home for the period of time required. The readings made me wonder what the plan would be if my organization had to move buildings. We’re small enough to likely be able to work temporarily at another on-site building, and our enterprise IT systems are designed to work across all the government buildings at the Navy Yard.
-
Andres,
Good post. I came to the same conclusion: communications cannot be interrupted for more than 12 hours, and the backup communication system needs to support up to 30 days of operations.
From my experience in the Army, they told us our communication system required 100% uptime at all times, or lives are lost. We had several systems on standby in case of an outage, and we kept equipment on call in case a bare-metal restore of servers and workstations was needed.
-
Nice post Andres.
RTO- ‘Recovery Time Objective is the targeted duration of time, a service level within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences associated with a break in business continuity.’
As per the ‘Homeland Security (2012) Federal Continuity Directive 1 – available from FEMA.gov’, the US Federal Government requires that PMEFs (Primary Mission Essential Functions) be operational within 12 hours (the RTO) after an event has occurred, under all threat conditions. The capabilities include operability of the essential functions, access to and usage of essential records/information, physical security, and protection against all threats identified at the facility.
Reference: https://en.wikipedia.org/wiki/Recovery_time_objective
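The 12-hour RTO in the Directive is easy to express as a quick check. A minimal sketch in Python (the function name and timestamps are illustrative assumptions, not part of the Directive):

```python
from datetime import datetime, timedelta

# Federal Continuity Directive 1: communications must be operational
# within 12 hours of continuity activation.
RTO = timedelta(hours=12)

def meets_rto(activation: datetime, restored: datetime) -> bool:
    """Return True if the capability was restored within the 12-hour RTO."""
    return (restored - activation) <= RTO

# Hypothetical timestamps for illustration:
activation = datetime(2016, 10, 17, 8, 0)
print(meets_rto(activation, datetime(2016, 10, 17, 19, 30)))  # restored in 11.5 h -> True
print(meets_rto(activation, datetime(2016, 10, 18, 9, 0)))    # restored in 25 h -> False
```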
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
-
J&J warns diabetic patients: Insulin pump vulnerable to hacking
According to an article from Reuters, Johnson & Johnson, the medical device and pharmaceutical company, recently announced that its OneTouch insulin pump products are vulnerable to hacking. While one may think of hacking as only taking place on computers, it turns out that medical devices have recently become a target. The article states that J&J recently discovered a vulnerability in this product which, if exploited, can cause an overdose of insulin in the user. According to the article, approximately 114,000 individuals, including doctors and users of the medical device, have been notified about the vulnerability. This Johnson & Johnson insulin pump is attached to the person underneath a layer of clothing, but allows the wearer to control dosing by using a remote control. This is where the vulnerability lies: it allows a hacker to spoof the communication between the remote control and the insulin pump, since that communication is not encrypted, potentially injecting a lethal dosage. That said, Johnson & Johnson has stated that it believes the risk is low, since an attack requires highly technical knowledge and the attacker would need to be within 25 feet of the pump. While the risk is low, the company has advised customers who are worried about the vulnerability to disconnect the remote control functionality from the pump. This article goes to show that hacking and vulnerabilities are not just relevant to businesses or databases, but can apply to much more.
Source:
http://www.reuters.com/article/us-johnson-johnson-cyber-insulin-pumps-e-idUSKCN12411L -
https://www.sec.gov/news/pressrelease/2016-133.html
“SEC Proposes Rule Requiring Investment Advisers to Adopt Business Continuity and Transition Plans”
Registered investment advisers would be required to have and execute written business continuity plans.
This could be a great thing for clients and investors who are concerned about what happens with their money in the event they want to take action during a disruption to an adviser’s services.
The BCP would need to take into consideration the following components.
– Maintenance of systems and protection of data
– Pre-arranged alternative physical locations
– Communication plans
– Review of third-party service providers
– Plan of transition in the event the adviser is winding down or is unable to continue providing advisory services. -
Searching for Best Encryption Tools? Hackers are Spreading Malware Through Fake Software
The article I read is about the fact that people trying to protect themselves from viruses, malware, etc. actually run the risk of using fake security tools. Indeed, hackers now use fake versions of encryption tools in order to infect as many victims as possible. The article specifically focuses on a certain group of hackers called “StrongPity”. They target users of software designed for encrypting data and communications. How? By setting up fake distribution sites that closely mimic legitimate download sites, which trick users into downloading malicious versions of these encryption apps, allowing attackers to spy on the data before it is ever encrypted. The top five countries affected by the group are Italy, Turkey, Belgium, Algeria and France.
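One practical defense against trojanized installers like the ones StrongPity distributes is to verify a download against the checksum published by the legitimate vendor before running it. A minimal sketch in Python, assuming the vendor publishes a SHA-256 digest (the function names here are illustrative, not from any particular tool):

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, published_digest: str) -> bool:
    """Compare the file's digest against the vendor's published checksum."""
    return hmac.compare_digest(sha256_of(path), published_digest.lower())
```

If the digests differ, the installer should be discarded, since it may have come from a spoofed mirror rather than the real download site.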
-
Synopsis on “Critics Blast New York’s Proposed Cybersecurity Regulation”
The financial industry has always been a target for hackers. Back in January, New York’s governor, Andrew Cuomo proposed some new cybersecurity requirements on banks. The main components of this proposal required banks to:
– Hire a “qualified” CISO to be responsible/accountable for mitigating cyber risks.
– The bank must notify the state within 72 hours of any cybersecurity event that could impact business or consumer privacy.
– Require two-factor authentication for employees, contractors, and other third parties who have privileged access to the organization’s internal systems.
– Encryption of all non-public information.
Critics have clashed with the state, claiming that its approach is too “prescriptive” and that smaller banks do not have the resources to be compliant.
Personally, I think that this is a good thing and it should be a federal requirement. Financial institutions handle a great deal of personal information from their customers, including their money. A successful attack could leave customers vulnerable to bankruptcy and identity theft, among other things. I am surprised that encryption of all non-public information and some of the other requirements are not already enforced at the banks. Even if smaller banks cannot meet the requirements at this time, I think a strategic goal for them should be to ensure consumer privacy through adherence to these requirements. What are your thoughts?
Source: http://www.databreachtoday.com/critics-blast-new-yorks-proposed-cybersecurity-regulation-a-9453
-
Is Your Access Control System a Gateway for Hackers?
With access control systems being prime entry points to hacking IT and OT systems, security professionals need to stress protecting security systems. In order to get into IT and critical infrastructure operational technology systems, hackers look for the easiest path in leveraging many different physical assets. They typically start with hardware which will give them access to specific computers. Unfortunately, many organizations don’t secure their own security equipment. For example, IP wireless cameras and card readers in the access control system are favorite targets of hackers.
How to protect the card system from hacking: the first step is to provide a higher-security handshake, or code, between the card or tag and the reader, to help ensure that readers will only accept information from specially coded credentials; the second is to use the validation and anti-tamper features available with contactless smartcard readers, cards and tags.
In recent years, security awareness has been improving rapidly, but people often pay a lot of attention to protecting themselves on the internet and forget the most basic security issue: physical security. Hacking will never stop, and neither should protecting your systems.
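The “higher-security handshake” between credential and reader described above is often a challenge-response protocol. A toy sketch in Python using an HMAC over a random challenge (the key provisioning and function names are assumptions for illustration; real smartcard systems use dedicated secure elements, not application code like this):

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # provisioned into both the card and the reader

def card_response(key: bytes, challenge: bytes) -> bytes:
    """The card signs the reader's random challenge with the shared key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_accepts(key: bytes, challenge: bytes, response: bytes) -> bool:
    """The reader only accepts credentials that prove knowledge of the key."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)  # a fresh nonce per read defeats simple replay
resp = card_response(SHARED_KEY, challenge)
print(reader_accepts(SHARED_KEY, challenge, resp))          # True
print(reader_accepts(SHARED_KEY, challenge, b"\x00" * 32))  # False
```

Because a sniffed response is only valid for one challenge, an attacker who records the radio traffic cannot replay it at the reader later, which is the weakness of simple fixed-code cards.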
link: http://www.securitymagazine.com/articles/87505-is-your-access-control-system-a-gateway-for-hackers
-
How is NSA breaking so much crypto?
The Snowden documents show that the NSA has built extensive infrastructure to intercept and decrypt VPN traffic, and suggest that the agency can decrypt at least some HTTPS and SSH connections on demand.
However, the documents do not explain how these breakthroughs work. If a client and server are speaking Diffie-Hellman, they first need to agree on a large prime number of a particular form. There seemed to be no reason why everyone couldn’t just use the same prime, and, in fact, many applications tend to use standardized or hard-coded primes.
The NSA has prioritized “investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.” It shows that the agency’s budget is on the order of $10 billion a year, with over $1 billion dedicated to computer network exploitation, and several subprograms in the hundreds of millions a year.
http://thehackernews.com/2015/10/nsa-crack-encryption.html
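The Diffie-Hellman exchange described above can be sketched in a few lines. This toy example uses a deliberately small prime to show the mechanics; the research referenced here concerns precomputation attacks against the standardized 1024-bit primes that many real deployments share:

```python
import secrets

# Toy parameters: a small prime for readability. Real deployments use
# 1024-bit or (preferably) 2048-bit primes, often standardized ones.
p = 4294967291  # the largest prime below 2**32
g = 5

a = secrets.randbelow(p - 2) + 1  # client's secret exponent
b = secrets.randbelow(p - 2) + 1  # server's secret exponent

A = pow(g, a, p)  # client sends A to the server
B = pow(g, b, p)  # server sends B to the client

# Both sides derive the same shared secret without ever transmitting it.
shared_client = pow(B, a, p)
shared_server = pow(A, b, p)
assert shared_client == shared_server
```

An eavesdropper who sees only A and B must solve a discrete logarithm to recover the secret; the point of the article is that an agency can amortize that enormous precomputation cost across every connection that reuses the same prime.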
-
http://www.technewsworld.com/story/84000.html
The article, “What Should Be on the Next President’s Cyberagenda?” starts off by stating that cybersecurity is usually at the end of the agenda for most people, and that includes the President of the U.S. TechNewsWorld asked experts what should be on the President’s cyber agenda.
-“The president has to set the tone early on cybersecurity – within the first 100 days”. Sam Curry, chief product officer at Cybereason further explains, “New cabinet secretaries have to understand that their mission can’t be done without secure systems. Far too often, cybersecurity is not even on the list of priorities for initiatives and agencies and staffing”.
-That Obama should be focusing on protecting the private sector from cybercrime and threats if he wants a stronger economy.
-Critical infrastructure organizations need more legislation passed so they are able to protect their sensitive data through strict access controls and data encryption.
– Create a national cyber-recovery plan along the lines of the civil defense plans created in response to the threat of nuclear attack.
The article concludes by stating that improvements and policies don’t require a brand-new outlook on security; as a country we need to work with what we have, due to the risk involved with change. Also, the experts conclude that, “The momentum needs to continue and grow. The handoff between administrations should not be a fumble.”
-
Hacking Beyond Software and Applications!
Security researchers have been figuring out hacking techniques that are not restricted to the operating system or applications, but break through to the actual machine. They are trying techniques that exploit hardware behavior by targeting the actual electrical signals that comprise the bits of data in computer memory. At a recent conference, researchers presented several such attacks and showed how they could be implemented in real life.
Google researchers demonstrated “Rowhammer,” a hacking trick that repeatedly overwrites, or hammers, a certain row of transistors in DRAM until a rare glitch occurs: electrical charge leaks from the hammered row of transistors into an adjacent row. The leaked charge then causes a certain bit in that adjacent row of the computer’s memory to flip from one to zero or vice versa. That bit flip can give an attacker access to a privileged level of the computer’s operating system.
Defending against such attacks will require finding defenses that do not rely purely on digital models. -
The CIA is preparing for a possible cyber attack on Russia http://townhall.com/tipsheet/mattvespa/2016/10/15/report-cia-could-be-planning-a-cyber-strike-against-russia-n2232760
I thought this was interesting because it shows that one method of war will be cyber war. I thought it was interesting that the Vice President said he hopes the American people won’t know that this happened. I’m curious if this will go down as a “Manhattan Project” type moment where we created and demonstrated a new weapon.
I’m curious if this will lead to further attacks from Russia or if this will worry them and other countries about what is yet to come and they might scale back their cyber operations against us.
-
Aviation Officials Step Up Cybersecurity Checks of Flight Communication Systems
U.S. and European aviation authorities are focused on cybersecurity threats that could affect ACARS (Aircraft Communications Addressing and Reporting System), a basic data-transmission system primarily used for air traffic control purposes. ACARS is a decades-old system used since the 1980s, and because of its age it lacks the more secure safeguards embedded in newer onboard messaging networks.
Up until now, the information sent by ACARS from planes to the ground wasn’t considered safety critical, nor does the system handle any data that could immediately jeopardize safe operation of flights. While no specific hacking attempts or intrusions have been detected, the activity has gained importance in light of increasing instances of cyberthreats to commercial aviation in general. Those threats have prompted governments and industries to sit up and take action to develop future standards to ensure that any successful hacks will be detected and neutralized.
Besides ACARS, the FAA’s technical advisory group has also decided to pay more attention to cybersecurity threats across the full range of onboard equipment and internet connections.
-
Nuclear power plant was disrupted by cyber attack
The International Atomic Energy Agency director Yukiya Amano announced that a nuclear power plant had some disruptions due to a cyber attack. For security reasons, he did not clarify which power plant or what was disrupted. He was able to say that the plant stayed open but took precautionary measures. There is a difference between disruptive and destructive cyber attacks but disruptive can be very dangerous if they target critical infrastructure. Terrorists have considered nuclear plants as potential attack targets, even if it is predicted that they cannot blow up a reactor. The U.N. is helping nuclear facilities prepare for cyber security attacks with training and constructing an information database.
http://www.reuters.com/article/us-nuclear-cyber-idUSKCN12A1OC
-
I read the article “Survey Says Most Small Businesses Unprepared for Cyberattacks”. According to this article, over 78% of small-business owners still don’t have a cyberattack response plan, even though more than 54% were victims of at least one type of cyberattack, including:
1. Computer virus – 37%
2. Phishing – 20%
3. Trojan horse – 15%
4. Hacking – 11%
5. Unauthorized access to customer information – 7%
6. Unauthorized access to company information – 7%
7. Issues due to unpatched software – 6%
8. Data breach – 6%
9. Ransomware – 4%
Not only public companies need to protect their information assets; small businesses and start-up companies also have a responsibility to protect their customers’ personal information. Therefore, small-business owners should realize the importance of protecting their information assets by implementing anti-cyberattack controls.
-
Healthcare and Cyber Security
http://www.darkreading.com/threat-intelligence/healthcare-suffers-estimated-$62-billion-in-data-breaches/d/d-id/1325482
This article talks about the healthcare industry being susceptible to cyber attacks. The issue for the industry is that the cybersecurity budget doesn’t necessarily grow; it either stays the same or decreases by a certain percent. Each facility is different, but the problem is they just don’t have the talent, and according to the article, there are about 20,000 open cybersecurity jobs in the healthcare field. Most of the attacks are aimed at getting medical records, some at insurance and billing. It can happen maybe 1-5 times a year due to malware, ransomware, and insider threats, and sometimes it’s not reported. It’s scary to think that when we enter a healthcare facility, our personal healthcare records are put at risk because the protection of the systems isn’t up to date or is non-existent. It also makes for a perfect opportunity to attract some talent and help secure these systems.
-
The United States publicly blamed Russia for the attack on the US voting system during the Democratic National Convention. The article I read is about speculation that the US may be planning a revenge attack on Russia. Apparently, the CIA has given the White House a number of options that are all based on harassing and embarrassing Russia. Joe Biden said that the US will be sending a message to Russia with the revenge attack, and he hopes that it will have the greatest impact possible. However, it ultimately will be President Obama’s decision.
I think that it is hypocritical that we pretty much admitted that government attacks are common and acceptable and I think it’s ridiculous that even though the voting system hack embarrassed our country, we are now beating our chest. Talk is cheap and you would think the US would want it to be somewhat of a surprise attack but we are basically telling Russia that it is coming. Seems like the US may be playing games with this issue. Regardless, I think our country needs to invest more in cyber security and the protection of our systems.
Source: http://www.digitaltrends.com/computing/us-cyber-strike-russia/
-
Darin, I read a very similar article. Isn’t it odd that our country is being so public about this? You would think they would want a cyber attack to be somewhat secretive, yet we are basically telling Russia that this attack is coming. I think it is also odd that the US is “beating its chest” about being so powerful in cyber, yet we allowed Russia to hack into our voting system. Biden said that the country is going to embarrass Russia, yet they definitely embarrassed our country a bit with the voting system hack.
-
This is interesting. I wonder if the cyber attack was executed to steal information to make money or if they wanted to cause harm. I also wonder to what extent they could’ve damaged the plant with this cyber attack. I wonder if they could have caused a meltdown.
-
Paul,
This is another level of criminality. I watched a movie where someone was killed with a remote medical device, and I thought that it was just fiction. But this article shows that it is something that can really happen. It also shows that hackers are not only after money and information; now they want to hurt people physically. If J&J doesn’t come up with a solution, this situation can get worse.
-
A recent survey highlights that a lack of both awareness and urgency are two major issues surrounding cybersecurity and having a response plan in place, specifically in the EU. It was estimated that over the past 4 years a staggering share of businesses, over 90%, suffered breaches. What is equally concerning is the executives’ lack of concern regarding future breaches and how to respond effectively when a breach is identified. It also appears that CEOs are now broadening the level of cyber-breach risk they consider acceptable and focusing more of their resources on the incident response team, but still not nearly enough in my perspective. Underlining the executives’ lack of care was the finding that only 42% of respondents were worried about losing future business due to a security breach. I honestly did not expect this type of response when, over the past few years, significant retailers and businesses have been impacted by data breaches that definitely had a negative impact on their bottom line and brand image in the public eye. The government is trying to force businesses’ hands by implementing a regulation called the General Data Protection Regulation, or GDPR; however, based on the survey results, it appears this is still a very low priority on their list. It sounds as if the EU is much further behind in realizing the real threat of cybersecurity and the negative impact it can have on all businesses and their overall bottom line.
http://www.infosecurity-magazine.com/news/over-90-of-euro-firms-hit-by-data/
-
Leftover Factory debugger doubles as Android backdoor
A leftover factory debugger in Android firmware made by Taiwanese electronics manufacturer Foxconn can be flipped into a backdoor by an attacker with physical access to a device.
This can help law enforcement or a forensics outfit wishing to gain root access to a targeted device.
It allows complete code execution on the device, even if it’s encrypted or locked down, which is exactly what a forensics company or law enforcement officials need.
An attacker with access to the device can connect to it via USB, run commands, and gain a root shell with SELinux disabled, without needing to authenticate to the device.
This not only allows extracting data stored on a password-protected or encrypted device, but also mounting brute-force attacks against encryption keys or unlocking a bootloader without resetting user data.
Fastboot is a utility and protocol used to communicate with the bootloader and to flash firmware. It comes with the Android SDK, and devices can be booted into this mode over USB in order to re-flash partitions or file-system images on a device. A custom client would support a reboot command that puts the device into a factory test mode. In test mode, the Android Debug Bridge (ADB) runs as root and SELinux is disabled, allowing an outsider to compromise the device, bypassing authentication controls. -
“Android Banking Trojan Tricks Victims into Submitting Selfie Holding their ID Card”
According to the Kaspersky Lab Anti-Malware Research team, Acecard is one of the most dangerous Android banking Trojans out today.
You can read more about the evolution of Acecard malware here [https://securelist.com/blog/research/73777/the-evolution-of-acecard/]
Payment card companies like MasterCard have switched to selfies as an option instead of punching in the pin/password during the ID verification for payments that are made online. And, hackers have started to exploit vulnerabilities in this new security verification method.
Acecard, the Android banking trojan, masks itself as a video plugin (like Adobe Flash Player, a video codec, etc.). Once the trojan is installed successfully, it asks the target for various device permissions so it can execute its malicious code, and then patiently waits for the target to open mobile applications, generally ones that require the user’s payment card information.
If a user opens up an app that requires payment transactions (Amazon shopping app, etc.), the trojan overlays itself on top of the legitimate app, and starts requesting user for card details.
“It displays its own window over the legitimate app, asking for your credit card details. After validating the card number, it goes on to ask for additional information such as the 4-digit number on the back.” – explains McAfee researcher Bruce Snell.
The trojan also prompts users to hold their ID card in their hand, underneath their face and take a selfie. A victim may be duped to think that these are the requests coming from the legitimate app they are using. Once customer data is obtained, hackers can make illegal transfers and control the target’s online accounts. This social engineering trick isn’t new but is still a big threat for less tech-savvy users. If one knows that there are family members or friends who are not tech-savvy, one can make sure that their phones aren’t downloading apps from un-trusted sources (in android phones you can change this setting).
Source: http://thehackernews.com/2016/10/android-banking-trojan.html
-
Mac malware can easily spy on your Skype calls
Patrick Wardle, an ex-NSA hacker has proposed a new way snoops might spy on people via their webcams.
As Macs make their camera sharable to multiple apps at the same time for perfectly legitimate reasons, it’s possible to create a malicious app that asks to use the webcam. The app wouldn’t just start using the camera, as the LED light would turn on and alert the user, instead, it would wait until another app – like Skype – ran so the spyware could piggyback on the process and start recording the victim.
With that, Wardle has created a basic tool, OverSight, to alert Mac owners whenever a program is asking for permission to access the camera. The user can then reject or allow access.
-
The article I shared is ” 6 Ways Hackers Can Monetize Your Life.”
Cybercrime is a multi-billion dollar economy with sophisticated actors and a division of labor that includes malware authors, toolkit developers, hacking crews, forum operators, support services and “mules.” There are countless sites in the dark web that offer ways for hackers to buy or sell stolen accounts, hacking tools and other criminal services.
Stolen credit card numbers aren’t the only way hackers take your money. The cybercrime industry is innovative and imaginative; it always comes down to finding ways to turn our personal information into cash.
Even if you haven’t seen a consequence from a corporate data breach or a reported software vulnerability, it doesn’t mean your life isn’t being traded online.
There are six ways hackers monetize your life online:
Medical Identity:
Social Security numbers, health insurance accounts, and Medicare account numbers aren’t as easy to replace as credit card numbers. This type of information is a gold mine for identity theft and insurance fraud. The black market values these credentials at well over 10 times the price of stolen card numbers.
Email and social media:
A cybercriminal can use a hacked email or social media account to distribute spam, run scams against the person’s contacts and connections, and try to leverage the stolen account to break into other online accounts used by the same person.
Uber
By hijacking your Uber account, most likely through a phishing email, they can set up fake drivers and bill you for “ghost rides.”
Airline Miles
All hackers have to do is get access to your frequent flyer account, and they can steal your airline miles, sell them to other criminals, or put the whole account up for sale.
Webcam
Hackers infect your computer using a remote administration tool (RAT), which lets them remotely control and access your webcam. Known as “ratters,” these individuals have many communities and forums on the dark web where they share information, videos and photos of their webcam “slaves,” sell or trade them to other hackers, and rent out access. One BBC report claimed hackers charge $1 per hacked webcam for female victims, and $0.01 for men.
source:http://www.huffingtonpost.com/jason-glassberg/6-ways-hackers-can-moneti_b_9078224.html
-
I read an article about the Islamic State seeking the ability to launch cyberattacks against U.S. government and civilian targets, in a potentially dangerous expansion of the terror group’s internet campaign. Flight communication systems could be targets as well, so it is necessary to make sure these systems are safe.
Source: http://www.politico.com/story/2015/12/isil-terrorism-cyber-attacks-217179#ixzz4NQNSiHdO
-
Customer trust is often damaged after a data breach. Following Yahoo’s recent disclosure of a data breach that affected more than 500 million accounts, Verizon may demand to renegotiate its $4.8 billion deal for Yahoo Inc. It is Yahoo’s responsibility to prove the full impact, and Verizon could be allowed to change the terms of the takeover.
The breach occurred two years ago but was discovered after the merger deal was signed in July. Verizon doesn’t want to call off the deal; however, it wants to make changes to the terms. Looking to renegotiate the deal could bring risks for Verizon as well. It’s not unusual for data breaches to affect acquisition deals; acquirers can ask for discounts or pull out of deals entirely because they don’t want to inherit the target’s problems.
http://www.latimes.com/business/technology/la-fi-tn-verizon-yahoo-deal-20161013-snap-story.html
-
How to recover from a disaster:
This article talks about the importance of a recovery plan. Disaster Recovery (DR) is part of a business continuity plan and can mean the difference between the success and failure of an organization. As per the 2014 Disaster Recovery Preparedness Benchmarking survey, 60% of companies didn’t have a documented DR strategy, and 40% felt their DRP didn’t help at the time of a crisis.
It goes on to explain how DR cloud solutions, or disaster recovery as a service (DRaaS), are a cost-effective and agile way to handle disasters, since no extra hardware is required. They offer faster recovery, better flexibility, off-site data backup, real-time replication of data, excellent scalability, and use of secure infrastructure.
Having a DR strategy and testing it continually is not enough. It should also be updated regularly and adapted in line with changes in the business environment and market shifts. According to the same survey, 6.7% of organizations tested weekly, 19.2% tested annually, and 23.3% never test at all.
The implementation has challenges like budget issues, buy in from CIO and type of solution.
The three steps to a successful DRP are:
1. Identify and define your needs
2. Creating the DR plan
3. Test, assess, test, assess.
Having an effective DR strategy in place will help an organization mitigate risks and recover quickly in the event of a disaster without negative impact.
http://www.cloudcomputing-news.net/news/2016/oct/17/recovering-disaster-develop-test-and-assess/
-
Euro Bank Robbers Blow up 492 ATMs by Phil Muncaster-UK/EMEA News Reporter, Infosecurity Magazine
492 ATMs across Europe were blown up by thieves in the first half of 2016. Criminals are increasingly using diverse tactics, blending physical and online methods, to steal from banks. The physical attacks cost over 16,000 euros per attack, not including damage to equipment and buildings. In total, the 1,604 incidents in the first six months of the year drove losses to 27m euros. Thieves also use transaction messages to siphon off cash.
In my opinion, banks should use their BCPs to cover this area before these kinds of thefts happen, because physical, message-based and online thefts have been occurring for many years, yet banks still don’t seem to care much. I’d say they should find another way to solve these kinds of problems, even if it makes things less convenient for customers.
http://www.infosecurity-magazine.com/news/euro-bank-robbers-blow-up-492-atms/
-
Popular Android App Vulnerable to Microsoft Exchange User Credential Leak
A popular Android app, Nine, used to access corporate email, calendar and contacts via Microsoft Exchange servers, is vulnerable to leaking user credentials to attackers. The application could allow an attacker to launch a man-in-the-middle attack, stealing victims’ corporate usernames and passwords. The Nine app lacked certificate validation when connecting to a Microsoft Exchange server, regardless of SSL/TLS trust settings, so attackers can pluck names and passwords out of the traffic or snag confidential emails as they pass by. An attacker could use a rogue Wi-Fi wireless access point (WAP) configured to capture Nine application traffic to Microsoft Exchange servers. Then, when an unsuspecting Nine user connected to that malicious access point, the attacker could intercept the traffic and obtain the target’s Active Directory login credentials.
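The missing control here was certificate validation. As an illustration (not the Nine app's actual code), Python's ssl module shows the client-side defaults that protect against this kind of man-in-the-middle attack:

```python
import ssl

# Default client context: the certificate chain and the hostname are both
# verified. These are exactly the checks the Nine app reportedly skipped.
context = ssl.create_default_context()
assert context.check_hostname is True
assert context.verify_mode == ssl.CERT_REQUIRED

# What a vulnerable client effectively does (never do this in production):
# accepting any certificate lets a rogue access point impersonate the server.
# (_create_unverified_context is a private helper, used here for contrast.)
insecure = ssl._create_unverified_context()
assert insecure.check_hostname is False
```

With verification disabled, a rogue Wi-Fi access point can present its own certificate for the Exchange server and read the credentials in transit, which is the attack described above.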
Popular Android App Leaks Microsoft Exchange User Credentials
-
Article: Back to School Security for iPads in the Classroom
As a provider of Apple-centric security solutions, SecureMac has outlined five of the top challenges faced when deploying iPads in a school setting, along with solutions that leverage the benefits of this powerful new technology in a safe and secure manner.
Problem faced by schools: Securely deploying and managing devices
Schools need a centralized system to efficiently handle tasks including app installation, software updates and locating missing devices. Additionally, steps must be taken to ensure that proper access control and security configurations are in place on any network that will be used by student devices.
Problem faced by schools: Maintaining student privacy in a digital environment
Student privacy needs to be a top priority for any educational institution looking to harness the power of technology in the classroom. Apple does not collect information or track students, but data collection might be present in third-party apps used as part of the education curriculum. Not only do schools need to maintain the privacy of student addresses, birthdates and other personal information, they also need to ensure the compartmentalization of student-generated data when it comes to things like school assignments, essays and projects.
Problem faced by students: Cyberbullying and online harassment
No longer relegated to the schoolyard, cyberbullying and anonymous online harassment can take place 24 hours a day, seven days a week, and as such can be much harder to identify and address. It is important to provide guidance and student outreach on the risks associated with these new forms of bullying, as well as to educate students on the danger of sharing personal and private information over the internet.
Problem faced by teachers: Limiting student access to inappropriate content
Problem faced by schools: App security and malware concerns
Resource: http://www.securitymagazine.com/articles/87469-back-to-school-security-for-ipads-in-the-classroom
-
Firms urged to automate security certificate backup after Globalsign blackout
The article I read this week is about how online firms are being urged to reduce their dependency on GlobalSign, a security certificate authority (CA), after an error made customer sites inaccessible. (A cross-certificate allows a certificate to chain to an alternate root.) An unknown number of sites became inaccessible after a cross-certificate was revoked in error during a planned maintenance exercise to clean up some of the CA's root certificate links.
Education software developer Edsby said its website was affected, along with other sites such as the Financial Times, Guardian, Wikipedia, Logmein and Dropbox.
Globalsign responded by removing the affected cross-certificate and clearing its caches, but the CA’s customers still had to replace their SSL certificates to restore access to their sites.
What we should learn from this news is that businesses must have an automated backup plan. Firms need to be able to take control and mitigate risks immediately. -
Yahoo Confirms 500 Million Accounts Were Hacked by State-Sponsored Hackers
Yahoo finally confirmed that it was hacked two years ago and responded slowly to the serious breach affecting 500 million Yahoo mail users. Over a month ago, a hacker was found to be selling login information for 200 million Yahoo accounts on the Dark Web, and Yahoo has now acknowledged that the breach was much worse than initially expected.
Yahoo is investigating the breach with law enforcement agencies. The company stated that only users' names, email addresses, dates of birth, phone numbers, passwords, and in some cases encrypted and unencrypted security questions and answers were stolen from millions of Yahoo users; it does not believe credit card information was taken by the hackers. Companies need to take immediate action to inform users once they confirm a hack.
Similar cases happen every day, and companies don't know how to respond to a hack because they lack experience and don't have Business Continuity and Disaster Recovery Planning in place.
-
Nice post Andres,
I liked how you provided the components of BCP. All companies should develop their BCP based on those components. I think the plan for transition in the event is extremely important because it can help the company locate where it is and how it can react and transition during the event.
-
“Cashing Out: ATMs Try to Stop Wave of Cyberattacks”
The article discusses the sharp rise in ATM fraud in 2015 and the slow implementation of EMV debit cards. Most financial institutions focused on credit cards and are only now starting to upgrade existing debit cards. Traditional debit cards are vulnerable to an attack known as skimming at ATMs and gas stations: criminals attach a device to capture the magnetic information from a card at an ATM and then make counterfeit cards or transactions. Unlike credit cards, debit cards are tied directly to bank accounts, offer less security, and have a more cumbersome process for recouping losses. By next year, ATM locations without chip-enabled machines will be liable for fraud; however, there is currently a backlog of upgrade orders, with many rushing to complete the transition in time.
http://www.wsj.com/articles/cashing-out-atms-try-to-stop-wave-of-cyberattacks-1476529201
-
They are definitely being public, which can be good or bad depending on what their goal is. First, it's possible that officials are split on the decision and the articles are a reflection of that. Or it may be a form of psychological operations: they might be trying to warn the Russian government without actually conducting an attack.
-
Reminds me of the episode in the second season of Homeland where one of the characters is assassinated by someone hacking into his pacemaker. These types of examples seem closer and closer to reality every day.
-
“Three Steps for Disaster Planning Toward a Smooth Recovery”
According to the Federal Emergency Management Agency (FEMA), 40% of companies that experience a disaster never re-open. The primary goal in disaster recovery is to limit business disruption and restore critical services as soon after a disaster as possible.
When creating or reviewing a recovery plan an organization should consider the following:
Have a written document that includes step-by-step instructions, emergency phone numbers, and back-up protocols.
Include communication procedures so employees, vendors, clients, and renters know how and when to reach management.
Consider establishing an alternative method for phone service, such as forwarding incoming calls to a cell phone or remote number/call center.
Seek out reputable disaster recovery companies, and set up prearranged agreements that outline the priority of service and the assessment of emergency equipment needed.
Then, the organization should review its vulnerable areas, document all office processes, and develop a contingency plan for each.
Plan to communicate with employees, customers, and vendors — who, what, when, and how.
Develop the appropriate protocols to ensure your data is safe and can be accessed.
Keep copies of insurance policies and other critical documents in a safe and accessible location (e.g., fireproof safe or backed-up computer system)
Develop a training program for your staff on what needs to happen before, during, and after a disaster.
Address protocols for different types of disasters and prioritize based on the likelihood of these events.
Next, the organization should understand and address the three elements of disaster recovery planning: prevention, detection, and correction.
Finally, the organization should test the disaster recovery plan. It should test the plan at least once per year to ensure the disaster plan as written still reflects the current operations.
-
The effectiveness of skimmers should only last as long as they remain relatively unknown. The advantage against skimmers is that you can confiscate the attacker's equipment whenever they attempt this; with over-the-internet attacks, you would need law enforcement's help to do anything about their physical machines. Education is still the best defense in this case: if consumers know that this device exists, they won't fall victim to it. The tokenization of transactions will also help prevent man-in-the-middle attacks like skimming, or anything else attackers can figure out.
-
Ecuador admits it has ‘temporarily restricted’ Assange’s Internet access
The article I selected this week is about how the country of Ecuador decided to cut internet access to the founder of the WikiLeaks website, Julian Assange.
There were reports that Secretary of State John Kerry asked Ecuador's foreign ministry to stop Julian Assange from releasing information that might jeopardize the election. The reports were denied, but internet access has been cut for Mr. Assange.
Ecuador has harbored Mr. Assange to prevent prosecution for illegally penetrating U.S. and other private- and government-sector organizations and releasing the hacked information to the public.
It makes us question why they are cutting Mr. Assange's internet access while at the same time keeping him safe during his attacking efforts over the internet.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
Exam 1: with Answers
Presentation: PDF format
Presentation: PowerPoint format
Quiz: Quiz
Quiz w/solutions: Quiz w/solutions
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What physical security risks are created by an organization's implementation of an integrated PHYSBITS solution? What mitigations are needed to lessen the risks?
-
PHYSBITS is the Physical Security Bridge to Information Security: a collaboration between physical security and information security in which linked information systems are used to control physical access to facilities, information infrastructure and resources. PHYSBITS focuses on the human aspect of physical security by integrating information security to provide authorized access to facilities and activity monitoring of personnel.
An organization implementing a PHYSBITS solution may face physical security risks associated with the loss of credentials/badges or other identifying information. Depending on what types of authentication the organization uses, a badge that provides access to restricted areas, if stolen or otherwise compromised, can give attackers access to the area to steal, vandalize, or destroy information system hardware or facilities. Another threat is harm to personnel within the facility, as in the Fort Knox shooting.
To mitigate the risk of unauthorized access, the organization may add security controls such as biometrics at restricted sites or keypads that require the person to enter a PIN along with the badge. To prevent intentional or unintentional harm to personnel, mitigations could include strict weapons policies or a screening process like the TSA's.
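The badge-plus-PIN mitigation can be sketched as a simple two-factor check. Everything here (the registry contents, the salt, the badge IDs) is hypothetical; it is a minimal illustration of requiring both something you have and something you know:

```python
import hashlib
import hmac

# Hypothetical registry: badge ID -> salted hash of that badge holder's PIN.
# A real PHYSBITS deployment would back this with the central directory.
_REGISTRY = {
    "badge-1001": hashlib.sha256(b"salt:4321").hexdigest(),
}

def grant_access(badge_id: str, pin: str) -> bool:
    """Grant entry only when the badge is known AND the PIN matches."""
    expected = _REGISTRY.get(badge_id)
    if expected is None:
        return False  # unknown, revoked, or deactivated badge
    candidate = hashlib.sha256(("salt:" + pin).encode()).hexdigest()
    return hmac.compare_digest(candidate, expected)  # constant-time compare
```

A stolen badge alone fails the PIN check, and a badge reported stolen can simply be dropped from the registry, which is the de-provisioning step the directory makes cheap.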
-
The main motive of PHYSBITS is to enable collaboration between physical and IT security to support overall enterprise risk management needs. Converging these security environments addresses security gaps that fall between these two different security disciplines and helps protect organizations against multifaceted security threats.
PHYSBITS includes some common solutions:
1) Employee provisioning and access management: setting up new hires with system and facility access, self-serve password management, access card management, and employee de-provisioning
2) Card management: issuance and life-cycle management of access cards
3) Directory management: infrastructure that enables distributed, scalable access to user information and attributes
There are some risks associated with PHYSBITS:
1) If an access card or PIN gets stolen, it can be misused by a miscreant to gain unauthorized access
2) Verification of card or PIN access occurs through the directory, which holds the list of credible users; the directory can be hacked to insert the desired credentials
3) A failure of the IT infrastructure providing the access, such as swipe machines, could disrupt any form of authorized access
In order to mitigate such risks:
- Integrating biometric access along with card or PIN access will reduce unauthorized access in case the card or credentials are lost or stolen
- The central directory or employee database server should have a proper firewall and IDS in place to prevent hacks
- Back up the IT infrastructure in case any failure occurs, with an alternate method, which may involve a human approach, to check physical security during the failure window
https://www.oasis-open.org/committees/download.php/7778/OSE_white_paper.pdf
-
To add:
1. Tailgating can pose a risk.
Control: It can be mitigated by having a security person in place to watch who comes in and who goes out.
2. Sometimes vendors or contractors are allowed inside the building without a badge, accompanied by an employee.
Control: Keep a physical registry to track the people coming inside the building.
-
Physical Security Bridge to IT Security (PHYSBITS) is a standard approach for enabling integration of physical and IT security. PHYSBITS provides an architecture for managing and monitoring physical and IT security systems by bridging the two. There are risks in using PHYSBITS:
1. Implementation will be complex and time consuming. It would be difficult to establish the complex structure again at merged or acquired companies.
2. It will be difficult to maintain a single authentication factor for a person who needs different physical accesses and moves between locations.
3. If one system fails, the other will be impacted. Suppose there is a restriction that only a person who is physically inside the company premises may logically access the database. If the access card reader is not working, then even if the person has entered the facility with a visitor, he will not be able to log in to the database server.
4. Attackers will be able to target one system and attack both. When IT and physical security are integrated, a person who manages to manipulate access by attacking the database can gain physical access to the organization.
5. Dependency of systems: let's consider that the HR software module needs to be updated and will be closed for maintenance for one day; it would be impossible to issue access cards that day. In this case access management would be done manually with a paper registry, but the system would not be able to record any transactions. This is a major risk considering there would be no monitoring of how access is granted.
6. Complexity in the system will rise for tasks like patch management and installation of new software.
7. In certain cases it could be challenging to prevent authentication in a local setup when global policies have authorized it, e.g., is it possible to stop a user from logging in at a workstation in Chicago when we know that he "badged" into the Los Angeles office?
8. IT security and physical security have different reporting systems. It will be difficult to maintain the same level of reporting to track the performance and tasks of a security guard and a software developer, e.g., security guards may come in shifts and be responsible for securing the same area. -
Adding mitigation for risks in PHYSBITS:
- Dependencies between the maintenance activities of the two systems must be kept to a minimum.
- As far as possible, the reporting structures of physical and IT security must be considered so that avoidable dependencies are resolved.
- Segregate authorization levels by location; assigned rights must also consider location criteria, and there should be a 1:M relation between person and location.
- Ensure vulnerability assessments are done for applications handling both physical and IT security. A vulnerability in one interface of the physical security system must also be verified in the IT security system.
- A good visitor management and escort system must be maintained.
- In case of system failure, hard copies can be maintained, with proper authorization and monitoring in such cases. The paper entries must be added to the system once it is up and running. -
The main purpose of PHYSBITS is to allow collaboration between physical and IT security to ensure support for enterprise risk management. Conjoining these two security environments bridges the security gaps between the two realms and strengthens the organization against security threats.
However, some risks may arise around authentication, accessibility and hacking. The basis of these risks is the intermingling of the systems:
- System failure (natural disaster, power outage, etc.)
- The access between the two systems may not be integrated correctly, which could give rise to risks and weaknesses.
- If the system is hacked, attackers are able to get physical access to the company as well as the databases.
I'd mitigate these risks by enabling biometrics, escorting visitors throughout the facility, encrypting data and using firewalls. -
The Physical Security Bridge to IT Security (PHYSBITS) focuses on the integration of physical and IT security technologies. It is a vendor-neutral approach for enabling collaboration between physical and IT security to support overall enterprise risk management needs. The technical portion of the document presents a data model for exchanging information between physical security and IT security systems.
Risks associated with implementing PHYSBITS are:
- Very complicated to implement.
- Creating a central database that ensures integrity should include all identification records, which are usually stored in the HR database. Here lies a big risk, because the IT Active Directory should be kept separate from the HR database.
- Because of the integration, another risk is that if one system fails, the other system would fail too.
This system has three main goals: data auditing, strong authentication and user rights management.
In order to mitigate these risks:
- Consider an additional security level such as biometric authentication.
- Consider a business continuity plan.
- Consider good monitoring services, including cameras or a physical person to check visitors. -
Great post Vaibhav. I strongly agree with the third risk you mentioned. In case of failure of IT systems, what can be done? Do you think a temporary manual system can work? You mentioned access being blocked because card readers are not functioning. In that case, should the company be prepared to open the doors without the access readers? Is it another risk to have a backup plan like this? I think companies might have to proactively think about infrastructure failure to support mainly two things: one, in case of IT failure, how will the process work, and two, in case of natural disaster, a BCP-like situation, how will the physical access systems work?
-
What physical security risks are created by an organization's implementation of a PHYSBITS solution? What mitigations would you recommend to lessen them?
The two biggest risks I see in implementing a PHYSBITS solution are:
1. An ex-employee taking a current employee's badge
This could cause several physical security risks. The ex-employee may be able to access the physical equipment and compromise the integrity of the system. One example is accessing an electrical grid site. Imagine if an ex-employee had access to the electrical grid, secretly accessed an electrical storage area, and shut down the electricity for the city of Philadelphia. Yikes!
The mitigations you could put into place would be to update badges on a quarterly basis for employees with "admin"-type access to the sites. You could also assign a PIN to enter after you swipe your card. This provides multi-factor authentication.
2. An authorized person allowing a non-authorized person into a restricted area
As you mentioned in class, some people think they are being nice by holding the door open for someone else. Unfortunately, that "someone else" may not have access to the room on the other side of the door. I have seen this happen at my children's daycare. I did bring it up to the director.
The mitigation she put in place was an email specific to security measures. The facility also includes real-time cameras throughout the entire facility. Parents can access the cameras through the website at any time.
For a larger organization, you could put in mantrap doors. This means one door closes before the other door opens. This makes the environment more secure and may make an authorized person think twice about being nice.
-
Binu,
Great point about sub-contractors allowed inside buildings. One of my clients, a pharmaceutical company in the surrounding suburbs, has a high level of security measures in place. All vendors and contractors must attend a security class on the physical grounds and authorized areas. We were only allowed to use the entrance and exit assigned to vendors and only had access to certain areas, but the biggest thing I noticed is that security was second nature. Tailgating didn't happen because the employees knew not to do it. We would simply wait for the other person to swipe their card. It was frustrating, but it was the culture. Security is a top priority, and the entire company culture practices security like a daily habit.
The best way to get a secure environment is to have the employees participate in the policies and understand why they are in place.
-
Fred,
As you mentioned, PHYSBITS focuses on the human aspect of physical security by integrating information security. I agree with the ex-employee risk, and the control you considered for it, adding an additional level of access control, is smart.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
For an organization choosing among Denver Colorado, Miami Florida, Redlands California and Tulsa Oklahoma, from a physical security perspective – where would be the best place to locate their data center? Why is […]
-
From a physical security perspective, I’d say Tulsa, OK or Denver, CO.
Why Tulsa?
First, access to the city is excellent. It is conveniently located in the direct path of the central fiber corridor that connects Houston to Chicago. The risk of seismic activity is also low. Second, for data centers that rely on water for chilling, the state has many aquifers, including the Great Plains Aquifer, and numerous rivers that crosscut the state.
Why Denver?
It is located in a state with low-risk climate and geographical features. CO offers an optimal area for disaster recovery. Like Tulsa, it is also a low seismic zone.
Why not:
Miami? Too close to the water and vulnerable to hurricanes. A hurricane can quickly destroy a data center in a matter of seconds and put companies in trouble.
Redlands? High seismic risk. Prone to earthquakes. -
When considering natural disasters in physical security, the organization should choose Denver, CO.
According to the Disaster Hot Zones of the World (http://io9.gizmodo.com/5698758/a-map-of-the-world-that-shows-natural-disaster-hot-zones), the other choices seem riskier than Denver.
Miami, FL: Located in a hurricane path, which could mean flooding, destruction of property, and residual effects of debris left by a hurricane. Business can be disrupted during hurricane season by closures for inclement weather and extended recovery times. It is also in a chemical accident zone; oil spills, for example, can affect the health of its employees. Utilities and power may also be disrupted, and roadways may be inaccessible in the event of a hurricane.
Redlands, CA: Prone to seismic activity that can have long-lasting effects on the data center. Earthquakes can damage computer hardware and equipment and, worse, compromise a facility's structural integrity. They can also affect utilities and road networks, preventing movement of critical supplies, like fuel for generators, if utility networks (pipes and power lines) are also down.
Tulsa, OK: Located in Tornado Alley (https://www.ncdc.noaa.gov/file/1536), which can mean structural damage to facilities and loss of outside equipment. It is also located near the New Madrid fault line, which according to the USGS is as likely as CA to experience seismic activity (http://www.dailymail.co.uk/news/article-1366603/Earthquake-map-America-make-think-again.html).
Denver, CO: The risks from natural disasters affecting physical security are lower in Denver than in the other choices. It has low seismic activity and is away from storms.
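The city-by-city comparison above can be made explicit with a simple risk score. The numbers below are illustrative guesses chosen only to mirror the qualitative rankings in this thread, not measured data:

```python
# Hypothetical hazard scores per city (1 = low risk, 5 = high risk),
# loosely following the arguments made in the posts above.
RISKS = {
    "Denver, CO":   {"earthquake": 1, "hurricane": 1, "tornado": 2, "flood": 2},
    "Miami, FL":    {"earthquake": 1, "hurricane": 5, "tornado": 2, "flood": 4},
    "Redlands, CA": {"earthquake": 5, "hurricane": 1, "tornado": 1, "flood": 2},
    "Tulsa, OK":    {"earthquake": 2, "hurricane": 1, "tornado": 5, "flood": 3},
}

def total_risk(city: str) -> int:
    """Sum the hazard scores; a weighted sum could stress what matters most."""
    return sum(RISKS[city].values())

best_site = min(RISKS, key=total_risk)  # lowest total risk wins
```

An organization could extend this by weighting each hazard by its expected business impact rather than treating all four equally.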
-
For an organization choosing among Denver Colorado, Miami Florida, Redlands California and Tulsa Oklahoma, from a physical security perspective – where would be the best place to locate their data center? Why is this place better and the other places worse?
From a physical security perspective, it would be best for an organization to locate their data center in Denver, Colorado. In fact, among the four proposed cities, Denver is the least risky.
According to Bankrate, Florida, Oklahoma, and California are among the 10 states most at risk for major disasters:
“Florida has been roughed up by dozens of tropical storm systems since the 1950s, none worse than Hurricane Andrew in 1992. The Category 5 hurricane with gusts over 200 mph held the title as the most expensive natural disaster in U.S. history until Hurricane Katrina in 2005. Severe freezes have been disastrous for Florida farmers on multiple occasions.”
“The monster tornado that blasted through the Oklahoma City suburbs in May 2013 is only the latest devastating storm to hit a state that has recorded an average of more than 55 twisters per year since 1950. The worst in recent history struck near Oklahoma City in May 1999 with winds over 300 mph and killed 36 people. Other disaster declarations have involved severe winter storms, wildfires, floods…”
“California has weathered wildfires, landslides, flooding, winter storms, severe freezes and tsunami waves. But earthquakes are the disaster perhaps most closely associated with the nation’s most populous state. The worst quakes in recent years have included a magnitude 6.9 quake near San Francisco in 1989 that killed 63 and a magnitude 6.7 quake in Southern California in 1994 that killed 61”
The worst place to locate a data center would be Redlands, California.
-
The spots not ideal for data centers would be Miami, FL; Tulsa, OK; and Redlands, CA.
Miami, FL: Hurricane season can wreak havoc in the area and damage many parts of the city. The heat can also be a factor because it interferes with network signals and causes lag. I remember during my time at Comcast doing tech support, Florida would have outages or weak-signal issues because either the heat caused signal interference or the rain did, and when the lines were wet, businesses experienced downtime or outages in their internet connections.
Tulsa, OK: Tulsa has problems with tornadoes, which are its major climate concern. Tornadoes have their seasons, with the prime season between March and August (source: http://okc.about.com/od/forthehome/ht/oktornadotips.htm), but one can hit anytime. A data center there wouldn't be a wise choice, seeing as it could get knocked offline.
Redlands, CA: Redlands is a prime spot for earthquakes, making it possible for data centers, utility facilities and other sites to get severely damaged. The downtime can make it hard for businesses to run effectively, because if important data is unavailable, certain functions can't do their jobs, slowing the overall process. Another threat is wildfires, as California is sometimes affected by them due to droughts and very high temperatures. They can last for days, and if a data center gets hit by a wildfire, recovery time is very long while the organization tries to plan how to get back on its feet.
The spot for the data center would be Denver, CO. Denver's climate makes it an ideal choice because the seasons are well spread out and it gets more sunshine. The issue with Denver is snow, as March is the month with the heaviest snowfall, but it does get better (source: http://www.denver.com/weather). But Denver isn't near any major fault lines, so it won't get hit by earthquakes like California, nor will it encounter tornadoes and hurricanes. The heavy snowfall is a concern, but with Denver being used to it, the city is well equipped to handle it. This allows organizations to quickly get back to work, sometimes without skipping a beat.
-
Hi Brou,
I agree that Denver can be a good choice. As far as Tulsa is concerned, the reason I think it's risky is that it falls in Tornado Alley, which basically means that more than 15 tornadoes can be expected there on average each year.
This should help: https://en.wikipedia.org/wiki/Tornado_Alley#/media/File:Tornado_Alley.gif
-
I agree with you Alexandra that Miami should not be considered as an option.
Miami is the second most humid city in the US. Servers need the relative humidity of the air to be around 45-55%. If humidity levels rise, water condensation occurs, which results in hardware corrosion and component failure. In Miami, additional cost would be required to control humidity and maintain it at expected levels. -
Well explained, Niel. I agree with you that Denver could be the best choice. To add to your points, I would say that Denver has a good temperature balance for hosting a data center. Experts say that cooling management is the most difficult and costly aspect to handle. Average yearly temperatures in Denver range from a high of 64 F to a low of 36 F.
-
For an organization choosing among Denver Colorado, Miami Florida, Redlands California and Tulsa Oklahoma, from a physical security perspective – where would be the best place to locate their data center? Why is this place better and the other places worse?
From a geographical perspective, both Miami, Florida and Redlands, California are very close to the coastline, while Denver, Colorado and Tulsa, Oklahoma are located away from the coastline. The place chosen for the data center should account for physical security and ensure the location is less affected by natural disasters like floods, hurricanes or earthquakes.
Both Denver and Tulsa are reasonable places to locate the data center, because they are both located in low geographical-risk areas. The worst place to locate the data center, I think, is Miami, Florida, because the city is near the ocean, which raises the risk that the core servers are physically damaged by natural disasters like hurricanes and tsunamis.
-
Good post Loi, and I totally agree with you that Miami is not a good choice to locate the data center. Actually, to mitigate the risk that natural disasters may damage the data center, I think transferring the risk to a third party, such as by purchasing insurance, might also work.
-
I totally agree with you, Brou. Miami is near the coastline, and the risk that natural disasters occurred is higher than other three cities. I also agree with you that Tulsa and Denver may be the better choice to locate the data center, but I was thinking which one of them is the best choice?
-
For an organization choosing among Denver Colorado, Miami Florida, Redlands California and Tulsa Oklahoma, from a physical security perspective – where would be the best place to locate their data center? Why is this place better and the other places worse?
As others have said, Denver, CO would be the best location for the data center to be located.
The other locations listed in this question all have significant physical/environment security risks. For example, with rising seas, Miami is at real risk of losing significant amounts of land; Redlands’ proximity to major faults means that strong earthquakes are a risk and Tulsa is located in tornado alley.
Denver, on the other hand, is exposed to considerably fewer environmental risks and has the added benefit of a cooler climate, which should reduce cooling costs.
-
For an organization having to choose between Denver- Colorado, Miami – Florida, Redlands – California and Tulsa Oklahoma, from a physical security perspective, In my view, the best place to have the data center set up would be at Denver, Colorado. The pros and cons for each of the places are as below :
Miami, Florida – is located in the Hurricane zone thereby increasing the probability of disruption and destruction during and due to the annual tropical storm activities. Almost every year, the area has witnessed severe storms, rain and resulting flooding. This makes it a poor choice for setting up data centers.
Redlands, California – is close to the Pacific ring of Fire and in an area of high seismic activity. This too would be a high risk area. Apart from the seismic activity, California is also undergoing drought for the 6th year and is susceptible to wildfires. Areas close to these forest fires which often last for days and weeks are in danger of buildings and systems being destroyed due to fire. Because of this, Redlands would not be a good choice to set up a data centre.
Tulsa, Oklahoma – Tulsa, Oklahoma is situated in an area that is prone to Tornadoes and Flooding. In 1999, a total of 74 tornadoes swept across Oklahoma and nearby states in less than 21 hours. In 2015, many areas of Oklahoma suffered their worst floods. In 2013, a 1-2 mile wide tornado swept the ground for 39 minutes and caused damages estimated at over 2 billion dollars. Due to this, even Tulsa would not be a good fit for setting up a data centre.
Denver, Colorado – the city and its surrounding areas have a very low probability of natural disasters. The much cooler climate is also favorable, as organizations can take advantage of free or cheaper cooling in their data centers. In fact, companies like IBM have data centers located in Boulder, Colorado, which is close to Denver.
-
For an organization choosing among Denver Colorado, Miami Florida, Redlands California and Tulsa Oklahoma, from a physical security perspective – where would be the best place to locate their data center? Why is this place better and the other places worse?
Out of the options provided, I think Denver, Colorado would be the best choice for locating the data center.
Denver, CO: According to the For-Trust Data Center, Colorado is in a low-risk position for natural disasters due to the low incidence of earthquakes, floods, hurricanes, and tornadoes. According to For-Trust, Colorado is located in seismic zone 1, the lowest-risk zone for earthquakes.
As far as flooding is concerned, according to FEMA it can occur anywhere in the United States. In Colorado, flooding occurs during the spring snowmelt, when rivers swell as they flow from Colorado's mountain ranges. Colorado has sophisticated infrastructure to prevent snowmelt and rain from flooding these areas.
As for tornadoes, the National Oceanic and Atmospheric Administration has ranked Colorado 9th in the nation for the number of tornadoes occurring in a year, but Colorado falls outside "Tornado Alley."
Colorado also escapes the major effects of hurricanes, which are mostly experienced by areas in close proximity to the coast. According to NOAA, severe hurricanes have historically struck the Gulf Coast and the East Coast.
Snowfall is commonly associated with Colorado, but the snow accumulation is typically to the west of Denver, which itself has a semi-arid climate. Denver is also located at the foot of the Rocky Mountains, so the climate is mild.
As far as wildfires are concerned, none of major magnitude have impacted Denver. Of the 60 devastating wildfires listed by the National Interagency Fire Center, only three occurred in Colorado, so the risk is very low.
As for other resources, Denver sits between the western and eastern halves of the country and has access to major communication networks.
Why not Miami?
According to NOAA, the southeastern United States is significantly susceptible to yearly hurricane activity.
Why not Tulsa?
It falls within Tornado Alley, with more than 15 tornadoes a year on average.
Why not Redlands?
It falls within Seismic Zone 3, so the probability of earthquakes is higher.
-
When deciding on a location in the country, it is important to consider the environmental factors that could affect the data center, including earthquakes, floods, hurricanes, tornadoes, and even volcanoes. The danger posed by each hazard varies across the country, so to determine the safest location we can consider whether each candidate is at high risk of a disaster.
Miami, Florida is a poor location since there is a risk of hurricanes and the floods they can cause. Miami is also listed as one of the most vulnerable cities to hurricanes.
Redlands, California is at risk of earthquakes. It is very difficult to protect a datacenter from an earthquake.
Tulsa, Oklahoma is prone to tornadoes. There is a vertical strip in the middle of the country referred to as tornado alley because that is where most tornadoes occur in the US.
Denver, Colorado, aside from being snowy, is not known for being prone to natural disasters. A blizzard can disrupt service but will leave far less damage than the previously mentioned disasters, and Denver is very capable of dealing with heavy snow conditions. I would choose Denver as the city in which to build the datacenter.
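The kind of site-by-site comparison made in these posts can be turned into a simple weighted scoring model. Below is a toy sketch; the hazard list, weights, and 1–5 ratings are illustrative assumptions, not real actuarial data:

```python
# Toy weighted risk score for comparing candidate data center sites.
# Ratings (1 = low risk, 5 = high risk) and weights are invented for illustration.
WEIGHTS = {"hurricane": 0.3, "earthquake": 0.3, "tornado": 0.25, "flood": 0.15}

SITES = {
    "Denver, CO":   {"hurricane": 1, "earthquake": 1, "tornado": 2, "flood": 2},
    "Miami, FL":    {"hurricane": 5, "earthquake": 1, "tornado": 2, "flood": 4},
    "Redlands, CA": {"hurricane": 1, "earthquake": 5, "tornado": 1, "flood": 2},
    "Tulsa, OK":    {"hurricane": 1, "earthquake": 2, "tornado": 5, "flood": 3},
}

def risk_score(ratings):
    """Weighted sum of hazard ratings; lower is safer."""
    return sum(WEIGHTS[h] * r for h, r in ratings.items())

# Rank sites from safest to riskiest.
for site, ratings in sorted(SITES.items(), key=lambda kv: risk_score(kv[1])):
    print(f"{site:14s} {risk_score(ratings):.2f}")
```

With these made-up numbers Denver comes out safest, matching the consensus in the thread; the value of the exercise is that changing a weight (say, caring more about flooding) makes its effect on the ranking explicit.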
-
I think you rule out Miami, Florida and Redlands, California due to the risk of natural disasters and forest fires. The backbone of a data center is power and network connectivity, so I would choose a geographic area with access to a reliable power grid. I bet Tulsa and Denver both have that, so it then comes down to your company's current location. If building from scratch weren't an issue, I would look for a location with room for expansion that is accessible by multiple roadways and/or near an airport. Denver is a bigger city than Tulsa, and I bet Tulsa has much more room for expansion while still having an airport and multiple roads that keep the location easily accessible. So although I know being close to customers provides advantages, and my customers will probably not be in JUST Tulsa, I think the growth of the virtual world and virtual customers leads me to pick Tulsa.
Source: https://www.expedient.com/blog/the-where-and-why-of-choosing-data-center-location/
-
Hi, Ian
You had a very good analysis! I like how you take the location into consideration.
However, I think Tulsa, OK has higher environmental risk than Denver, CO. Homefacts records Tulsa, OK as a very high-risk area for tornadoes. According to those records, the largest tornado in the Tulsa area was an F5 in 1960 that caused 81 injuries and 5 deaths, and the yearly average is three tornadoes. Therefore, I believe Denver, CO is the better option for locating the data center. Source: http://www.homefacts.com/tornadoes/Oklahoma/Tulsa-County/Tulsa.html
-
Ian,
I don't think Tulsa is a good idea because of tornadoes and seismic activity. It's true that you want a data center with easy access; however, I do not think it is wise to locate it in a place where the risk of natural disaster is high.
-
Abhay,
You did a great analysis of why Denver is the best place for a data center. The only thing I would mention to everyone is…
Two is better than one, three is better than two, and so on…
Redundancy is the key. Denver is the best place, but finding another location in the world that matches Denver might be a good idea too.
-
Abhay – that is good insight. I did not think about Tulsa being in Tornado Alley, but you are definitely right. With that said, I may change my choice from Tulsa to Denver. Denver has the mountains to block it from tornadoes, and it is not near the coast, so you do not have to worry about hurricanes. Denver also has a major airport and is not an overly huge or crowded city. There is plenty of land right outside the city to build the data center, yet major roads keep it easy to get to.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What physical security risks are created by an organization's implementation of a PHYSBITS solution? What mitigations would you recommend to lessen them?
For an organization choosing among Denver C […]
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
-
The sources of EMP are typically chemical-based explosions, but most significantly large nuclear detonations. An EMP occurs when a nuclear device is detonated high in the atmosphere.
An EMP is a physical security threat because of its nature. In fact, it is a super-energetic radio wave that can destroy, damage, or cause the malfunction of electronic systems by overloading their circuits. An EMP attack on the U.S. would leave the country with no electricity, no communications, no transportation, no fuel, no food, and no running water. That’s huge.
Also, today's world depends on advanced electronic systems, which makes it even more vulnerable to EMP. In order to defend itself against EMP, an organization needs integrated catastrophic planning, including equipment protection such as installing surge protectors or storing important electronics in a Faraday cage or behind an electromagnetic shield.
-
An electromagnetic pulse (EMP), also sometimes called a transient electromagnetic disturbance, is a short burst of electromagnetic energy. Such a pulse may originate from a natural occurrence or be man-made, and it can occur as a radiated electric or magnetic field or as a conducted electric current, depending on the source. When caused by the high-altitude detonation of a nuclear weapon, an electromagnetic pulse can inflict widespread damage on electric systems across a wide area.
All electronic equipment and apparatus could be destroyed. Every device that relies on integrated circuits for operation could be immediately disabled or destroyed.
Unlike a cyber-attack where “fingerprints” can often be found for forensic analysis, an IEMI attacker will not leave any information behind.
An EMP shutdown of electronics is so rapid that the log files in computers will not record the event.
To discourage an EMP attack, the highest priority is to prevent it: pursue intelligence, interdiction, and deterrence; shape the global environment to reduce incentives to create EMP weapons; and make attempting an attack difficult and dangerous.
What's more, protect critical components of key infrastructures, especially "long lead" replacement components. Resource: http://www.heritage.org/research/reports/2010/11/emp-attacks-what-the-us-must-do-now
-
EMP stands for electromagnetic pulse, which is essentially a short burst of electromagnetic energy. This pulse, if strong enough, can damage or destroy electronic computers, posing a significant threat to businesses. The article I found in the New York Times on EMPs (see below) revolves around how an EMP can be used as a weapon of mass destruction and is a threat against the United States. However, not all EMPs are that drastic. Thunderstorms are a common source of EMP: if the storm is strong enough and lightning strikes are close, the lightning can fry your electronic devices. Not only that, but the sun can create solar flares that emit an EMP which can reach the earth and cause damage. Therefore, from a risk analysis standpoint, it is more likely that a thunderstorm or solar flare takes out one's equipment than a full-fledged EMP attack on the United States.
Since EMPs can damage or destroy computer electronics, businesses would consider them a physical threat, much like a natural disaster. To protect itself, a business can follow what the government has done, which the Wall Street Journal identifies as using surge arrestors, Faraday cages, micro-grids, and underground data centers. Businesses wanting to address the risk of an EMP will therefore likely need to set up a data center that specifically implements these protections. Likewise, businesses need to include in their Business Continuity Plans the ability to resume operations in the event of a loss of technology and electricity. While a large-scale EMP, such as a weapon of mass destruction or a solar flare, is a threat that could destroy a business, each business needs to decide whether it is a risk worth addressing.
Articles: http://www.wsj.com/articles/james-woolsey-and-peter-vincent-pry-the-growing-threat-from-an-emp-attack-1407885281
http://www.computerworld.com/article/2606378/new-data-center-protects-against-solar-storms-and-nuclear-emps.html -
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
EMP, or electromagnetic pulse, is a short burst of electromagnetic energy. It can be due to a natural occurrence, like lightning, or man-made.
EMP radiation can be caused by the detonation of a nuclear bomb, a solar flare, a device intended to cause an EMP, a close lightning strike, or a massive power-line short circuit. An EMP source is a device that intentionally produces an electromagnetic pulse; it can be a small device used by police to disable a fleeing vehicle, a source used to test equipment for EMP resistance, or a weapon intended to disable enemy equipment. Computer equipment, appliances containing microprocessors, semiconductor electronics, cellular phones, power grids, generators, transmission lines, computer disks, and UPSs are all susceptible to EMP. EMP interference can damage electronic equipment or disrupt its performance: the EMP induces large currents in the conductors connected to the equipment, damaging it. At higher energy levels, such as lightning, an EMP can damage an entire building.
An organization can protect itself from EMP threats by:
• Enclosing wiring in metal conduits and shielding the wiring connected to sensitive equipment, making sure to ground the shields
• Making sure persons entering the organization are not carrying EMP devices that could damage equipment
• Ensuring that purchased equipment is not faulty and does not produce EMP
• Fuse long wires and cables and use large ferrite beads on power wiring.
• Increase the current carrying capacity of the building ground.
• Avoiding or minimizing the use of semiconductors where possible; if used, making sure they are rated at maximum voltages and currents at least 10 times the values actually in use.
• Bypass suitable electronics to ground with capacitors rated for several thousand volts and heavy currents.
• Design circuitry to be resistant to high voltages and currents.
• Provide battery backup power for essential equipment.
• Provide the above protections to essential equipment, such as emergency communications and traffic signals.
Corrective measurements to be taken to restore operations after an EMP has occurred
• Stock replacement equipment and parts in metal containers or rooms.
• Wrap replacement parts in aluminum foil.
• Keep a stock of batteries, and rotate the stock so that the oldest ones are used first.
• Stock up on light bulbs that are not CFL or LED.
It is always better to take the necessary steps to prevent the risk and also have corrective measures in place. EMP can cause extensive damage to an organization; though the probability is very low, the organization needs to be prepared and cannot neglect it.
-
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
An EMP is a high-intensity burst of electromagnetic energy caused by the rapid acceleration of charged particles. The causes can include the detonation of a nuclear bomb, a solar flare, a device intended to cause an EMP, a close lightning strike, or a massive powerline short circuit. EMP sources include a small device used by police to disable a fleeing vehicle, a source used to test equipment for resistance to EMP, and a weapon intended to disable enemy equipment. It is a physical security threat because of its power to destroy computers, computer equipment, and all electrical and technological infrastructure, effectively sending the U.S. back to the 19th century. Organizations can defend themselves against EMP by storing equipment inside metal cases with no openings, using old-style electric telephone equipment, shielding wiring connected to sensitive equipment or enclosing it in metal conduits, or using large ferrite beads on power wiring.
Sources: http://midimagic.sgc-hosting.com/emp.htm
http://www.heritage.org/issues/missile-defense/electromagnetic-pulse-attack -
An electromagnetic pulse is an extremely powerful burst of electromagnetic energy capable of causing damage and/or disruption to electrical and electronic equipment.
What can cause an EMP? 1. Detonation of a nuclear bomb 2. A solar flare 3. A device intended to cause an EMP 4. A close lightning strike 5. A massive powerline short circuit.
What is an EMP source? An EMP source is a device that intentionally produces a small EMP. There are three kinds: 1. A small device used by police to disable a fleeing vehicle (the source shown in the table) 2. A source used to test equipment for resistance to EMP (classified strength) 3. A weapon intended to disable enemy equipment (classified strength)
Why is it a physical security threat? For example, a lightning strike close enough to your data center can induce an EMP. If no protections were set up beforehand, there is a high chance that your data center is ruined, with all the servers and monitors destroyed as well.
How can an organization defend itself against EMP? 1.) Shield wiring connected to sensitive equipment, or enclose wiring in metal conduits, and ground the shields 2.) Fuse long wires and cables 3.) Increase the current-carrying capacity of the building ground 4.) Avoid the use of semiconductors where possible 5.) If semiconductors are used, make sure they are rated at maximum voltages and currents at least 10 times the values actually in use 6.) Use large ferrite beads on power wiring 7.) Bypass suitable electronics to ground with capacitors rated for several thousand volts and heavy currents 8.) Design circuitry to be resistant to high voltages and currents 9.) Provide battery backup power for essential equipment 10.) Provide the above protections to essential equipment, such as emergency communications and traffic signals.
-
Although EMP is not as common as the cyber threats posed to an organization's information infrastructure, it should not be a threat that is taken lightly. A massive EMP, natural or man-made, can have devastating consequences for any organization. When deciding how a company should protect itself from this risk, I believe it comes down to the criticality of each system in use. What if they can't protect all of their computers and data centers? The organization needs to decide which systems (backups, critical servers, or generators) are critical for the business to continue and recover after an event.
In this case a massive EMP would likely take out power grids, utility services, generators, and pretty much any electronic equipment. It is also important to consider protecting the physical assets required to keep the information systems running, like generators, UPSs, and water pumps for cooling.
-
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
An electromagnetic pulse (EMP) is a short burst of electromagnetic energy caused by an abrupt, rapid acceleration of charged particles, usually electrons. The sources of EMP can be a natural occurrence or man-made and can occur as a radiated, electric or magnetic field or a conducted electric current, depending on the source.
Natural EMP events can be:
• Lightning
• Electrostatic discharge
Man made EMP events can be:
• Electric motors
• Gasoline engine ignition systems
• Nuclear electromagnetic pulse due to a nuclear explosion
It can be a physical security threat:
For example, at a high voltage level an EMP can induce a spark; an electrostatic discharge while fueling a gasoline-engine vehicle, for instance, can cause a fuel-air explosion. Such a large, energetic EMP can induce high currents and voltages.
A very large EMP event, such as a lightning strike, can damage data centers, IT infrastructure, and electricity grids (the failure of which can cripple a country's economy, including its IT sector), and can destroy electronic networks, buildings, and aircraft directly, either through heating effects or through the disruptive effects of the very large magnetic field generated by the current. An indirect effect can be electrical fires caused by heating.
Organizations can defend themselves against EMP in the following ways:
Building their own power sources as part of business continuity planning, such as solar panels or hydroelectric power systems, which can help them continue functioning during an EMP event.
Organizations also need protection from EMP weapons and should have proper controls in place to prevent any outsider or employee from bringing in or using any such device within the premises.
-
What are the sources of Electromagnetic Pulse (EMP)?
EMP is a short burst of electromagnetic energy. Its sources can be man-made, such as directed-energy weapons or nuclear blasts, or natural, such as solar flares.
Why is it a physical security threat?
The role of physical security is to protect the physical assets that support the storage and processing of information. An EMP damages those physical assets directly by exploiting their vulnerabilities, as with a solar flare's damage to bulk power system assets (e.g., transformers).
How can an organization defend itself against EMP?
1. Grounding
Proper grounding, and a proper relationship between the neutral and the ground, is not only essential to meet National Electric Code (NEC) requirements but imperative to achieve optimum performance of microprocessor-based equipment such as computers, programmable logic controllers, communications systems, and telemetry systems.
2. Shielding
Surge events cause a magnetic field to be induced in conductors within a given radius, depending on the magnitude.
3. Filtering
Passive filter networks block out induced surge currents and voltages on data and power circuits, hardening electronics against lightning and EMP surge energy.
4. Surge Protection
A voltage induced between conductors can drive a surge current into an electronic circuit, or conversely, a current induced onto a conductor can create a voltage across a series impedance as the current propagates into a circuit.
Source: Vacca, Physical Security Essentials, Chapter 54
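Behind the shielding advice above is a standard quantity, the "skin depth": how far an electromagnetic field penetrates into a conductor before it decays by a factor of 1/e. A shield several skin depths thick strongly attenuates an incident field. A small sketch of the textbook formula (the material constants are standard values; the example frequencies are arbitrary choices):

```python
import math

# Skin depth: delta = sqrt(2 / (omega * mu * sigma)) for a good conductor.
# Each skin depth of shield thickness gives roughly 8.7 dB of attenuation.
MU0 = 4e-7 * math.pi      # permeability of free space (H/m)
SIGMA_COPPER = 5.8e7      # conductivity of copper (S/m), standard value

def skin_depth(freq_hz, sigma=SIGMA_COPPER, mu_r=1.0):
    """Skin depth in meters for a conductor at the given frequency."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu_r * MU0 * sigma))

# Low-frequency surge energy penetrates far deeper than high-frequency
# EMP components, so a shield must be sized for the lowest frequency of concern.
for f in (1e3, 1e6, 1e9):
    print(f"{f:10.0e} Hz -> skin depth {skin_depth(f) * 1e6:8.1f} um")
```

For copper this gives roughly 2 mm at 1 kHz but only about 66 µm at 1 MHz, which is why thin metal enclosures handle fast EMP transients well yet need good grounding and filtering for slower surge energy.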
-
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
An electromagnetic pulse (EMP) is a high-intensity burst of electromagnetic energy caused by the rapid acceleration of charged particles. A catastrophic EMP would cause the collapse of critical civilian infrastructures such as the power grid, telecommunications, transportation, banking, finance, and food and water systems across the entire continental United States, infrastructures that are vital to the sustenance of our modern society and the survival of its citizens. EMP can be used as a weapon of mass destruction, and Boeing has announced that it successfully tested an electromagnetic pulse weapon.
The sources of EMP are:
1) A deliberate electromagnetic weapon attack
Without causing any harm to humans, the effects from an IEMI weapon could disable regional electronic devices.
2) A nuclear device detonated in space, high above the U.S.
A High-Altitude Electromagnetic Pulse (HEMP) detonated 30 miles or higher above the Earth's surface would destroy electronic devices within a targeted area without creating blast damage, radiation damage, or injuring anyone.
An EMP can damage electronic equipment within an organization or degrade its performance; a severe one could permanently destroy all electronic equipment, including hardware, software, and data.
An organization can protect and defend itself against EMP by:
1. Having a business recovery plan in place to resume operations after a loss
2. Provide battery backup power for essential equipment.
3. Provide the above protections to essential equipment, such as emergency communications and traffic signals.
Sources:
http://empactamerica.org/our-work/what-is-electromagnetic-pulse-emp/ -
An electromagnetic pulse (EMP) is a short burst of electromagnetic energy. It may originate from a natural or man-made occurrence, and it appears as a radiated electric or magnetic field or as a conducted electric current, depending on the source.
Natural occurrences that cause EMP include:
-Lightning
-Electrostatic Discharge (two charged objects coming into close proximity with each other)
-Coronal Mass Ejection (a massive burst of gas and magnetic field arising from the solar corona and released into the solar wind, sometimes referred to as a solar EMP)
Man-made occurrences that cause EMP include:
-Switching action of electrical circuitry, whether isolated or repetitive (as a pulse train).
-Electric motors can create a train of pulses as the internal electrical contacts make and break connections as the armature rotates.
-Gasoline engine ignition systems can create a train of pulses as the spark plugs are energized or fired.
-Continual switching actions of digital electronic circuitry.
-Power line surges. These can be up to several kilovolts, enough to damage electronic equipment that is insufficiently protected.
-Nuclear electromagnetic pulse (NEMP), as a result of a nuclear explosion.
EMP is a physical threat because it is generally disruptive or damaging to electronic equipment, and at higher energy levels a powerful EMP event such as a lightning strike can damage physical objects such as buildings.
In order to protect itself against EMP, the organization can utilize:
– Faraday Cage – surround important electronic equipment completely with metal, which conducts the electromagnetic radiation around the contents.
– Electrical Grid – acts as a huge antenna, capturing electromagnetic radiation and conducting it into the earth.
– Surge protectors
In addition, a business should always have a backup and data recovery plan in the event that the above protections fail.
-
What are the sources of Electromagnetic Pulse (EMP)? Why is it a physical security threat? How can an organization defend itself against EMP?
An electromagnetic pulse is a sudden burst of electromagnetic radiation large enough to cause wide-scale disruption (Wikipedia). The sources of EMP include, but are not limited to, the detonation of a nuclear bomb, a solar flare, a device intended to cause an EMP, a close lightning strike, and a massive powerline short circuit.
EMP is capable of causing damage and/or disruption to electrical and electronic equipment, so it is a physical security threat to most organizations.
Firstly, a company can transfer the risk to an insurance company. Then, in order to defend against EMP, an organization can do the following:
Shielding: First, the equipment or rooms that require protection (e.g., communications console, utility room, electrical service room, entertainment room, or even the entire shelter) are covered with an overall shield. This is the first line of defense and provides excellent, although not perfect, protection. The shield must be very carefully designed and constructed: improper material selection may not achieve enough shielding, incompatible materials may result in corrosion, and incorrect seams or bonding may greatly reduce or even destroy the shield's effectiveness.
Alternative power source: Even if the devices survive an EMP, they will need a usable and sustainable source of power. This can be arranged by setting up alternative energy sources in advance.
-
Loved the detailed explanation, Binu. In addition to the causes you mentioned, EMP can also be caused by geomagnetic storms. To elaborate further on why electromagnetic pulses are a physical threat, I'd like to explain the Compton effect: an intense release of electromagnetic energy causes photons to knock loose electrons in the atmosphere. These electrons, guided by the Earth's magnetic field, essentially become a giant and powerful circuit. The current flowing in this circuit generates intense electromagnetic fields that propagate to the surface of the earth. When these fields cross conductive materials, they release energy into the material. As we know, electronic devices are full of conductive material, so given a sufficient density, the energy absorbed can fry the device.
Source: Robert Frost, NASA
-
Great explanation, Paul. As you mentioned, companies should consider the business continuity aspect, especially since protection against EMP can be achieved with only a little additional cost. The concept of underground data centers and of using shields or cabinets made of special EMP-resistant materials is great.
Experts also mention the risk is not high, as the expected frequency of such an event is currently low. However, with the rise of terrorism, protection against an EMP from a terrorist-sponsored nuclear blast has become a topic of much discussion for data center professionals. Even if we consider the likelihood as low right now, the impact is high.
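This "low likelihood, high impact" tradeoff is often quantified as Annualized Loss Expectancy (ALE = SLE × ARO, i.e., single loss expectancy times annual rate of occurrence). A minimal sketch; the dollar figures and rates below are invented for illustration:

```python
def ale(sle, aro):
    """Annualized Loss Expectancy = Single Loss Expectancy * Annual Rate of Occurrence."""
    return sle * aro

# Invented figures: a rare, catastrophic EMP event vs. a routine power surge.
emp_ale = ale(sle=50_000_000, aro=0.001)  # $50M loss, ~once per 1000 years
surge_ale = ale(sle=20_000, aro=2.0)      # $20k loss, ~twice per year

# Despite the tiny probability, the EMP's expected annual loss is comparable
# to the routine event's, which is why high-impact threats can still justify
# mitigation spending.
print(f"EMP ALE:   ${emp_ale:,.0f}/yr")    # $50,000/yr
print(f"Surge ALE: ${surge_ale:,.0f}/yr")  # $40,000/yr
```

Comparing each threat's ALE to the annual cost of its mitigation (e.g., shielding or an underground facility) is the usual way to decide whether the risk is worth addressing.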
Iron Mountain's National Data Center in Western Pennsylvania has been built to be EMP-resistant. It is located 220 feet below ground in an underground facility covering more than 450 acres. The facility naturally absorbs 90 percent of EMP pulses, which greatly reduces the cost and impact of any minimal residual shielding required in a customer's individual space to ensure that electrical, mechanical, and power infrastructure and subsystems are thoroughly shielded and tested to be EMP-resistant. -
What would the cost be for an underground data center? It seems like it would be more expensive than building one above ground. Would the risk/impact justify the expense? Setting cost aside, it would be an effective strategy.
-
I definitely agree with all of your risk strategies. I was thinking, though, that if an organization is affected by an EMP, then most likely others will be as well. So even if that data center is protected and survives the EMP, how will the surrounding damage affect its ongoing operation? If the infrastructure and nearby businesses are severely affected, there might be a long-term effect. For example, how long can the backup generator last? How does it recharge, with fuel or electricity?
-
I first learned about EMP from the movie Ocean's Eleven, where the crew used the device to take out all the electronic devices in Las Vegas to break into a casino. While this is fiction, as the device they used is probably too small to take out an area that large, I don't doubt that an EMP device can be shrunk to damage key technology inside a company. Here is a video of someone who claims to have built a small EMP that can damage a cellphone.
-
-
David Lanter wrote a new post on the site ITACS 5206 8 years ago
-
The article I read was called "Remote switch-on enlists Mac webcams as spies," which is very concerning, taking voyeurism to a whole new level via technology. The article explains new malware that has enabled attacks via webcam. Some of these attacks have led to theft of personal information as well as surveillance used as a means for blackmail.
Graham Cluley, a security researcher, points out "recent malware detections that showed Eleanor and Mokes arrive ready to record video and audio content from infected computers."
This article sheds light on the ever-increasing threat posed by technology. It definitely makes me wonder if my webcam is on at any given moment. I will make it a point to close my computer when it's not in use, that is for sure.
http://www.scmagazine.com/remote-switch-on-enlists-mac-webcams-as-spies/article/530381/
-
‘Security Fatigue’ Can Cause Computer Users to Feel Hopeless and Act Recklessly, New Study Suggests
NIST conducted a study on the weariness users express when they are forced to adhere to certain types of security policies. Our program makes it clear that the largest vulnerability in an organization is its people. However, I think it's important that we, as security professionals, continue to place value on the usability of our policies. We know that security and ease of use are often on opposite ends of the same scale, and a control that is overly cumbersome is likely to be tossed aside by end-users. This ultimately weakens the organization's security stance, even though "on paper" we may think we're doing the right things.
The three “takeaways” from the article on not fatiguing your end-users:
1. Limit the number of security decisions users need to make;
2. Make it simple for users to choose the right security action; and
3. Design for consistent decision making whenever possible. -
Police Bust Multi-Million Dollar Indian Vishing Ring
Mumbai police have smashed an international vishing operation which could have netted its ringleaders as much as $7.5 million from US victims who thought the callers were from the IRS. Police detained over 700 staff at several call centers in Thane and seized hundreds of servers, hard disks, laptops, and other equipment. Call center staff pretended to be calling from the IRS and claimed the victim had outstanding taxes or fines, which victims were ordered to pay through online pre-paid cash cards. The callers used VoIP via proxy servers to anonymize their location. Staff said they had been heavily coached to speak with an American accent and handed a six-page script to use. The operation may also extend to the UK and Australia.
Vishing is the act of using the telephone in an attempt to scam a user into surrendering private information that will be used for fraud. It was rated the most popular type of cyber fraud tactic according to Get Safe Online. The organization behind this vishing scheme is still out there. People need to be more careful about their personal information and this kind of fraud.
Link: http://www.infosecurity-magazine.com/news/police-bust-multi-million-dollar/
-
The article I read is about Yahoo using a secret tool to scan users' email content for a US spy agency.
Yahoo recently suffered a major data breach and is now sharing users' personal data, just like Apple with iMessage (referring to my article from last week).
Yahoo has custom software that scans emails without the user's knowledge, usually looking for certain information needed by agencies like the FBI.
The funny thing is that it looks like Yahoo's security team was not even aware of it. That's how secretive this software is.
What happened is that the US intelligence agency approached the company last year with a court order which, I guess, gave the company no choice but to comply with the directive. However, I do not understand why Yahoo (the CEO and the general counsel) decided to go behind the security team's back and ask the company's engineers to build the secret software program. This is an example of a lack of communication in a company, which led to the resignation of the chief information security officer, who disapproved of being left out of a decision that hurt users' security. -
The article I read is titled "High Cybersecurity Staff Turnover is an Existential Threat". According to the article, nearly 65% of cybersecurity professionals struggle to define their career paths, leading to a high turnover rate that opens up big security holes within organizations. Of course, most people want a better job with a higher salary or more opportunities for promotion, but this also brings the risk that a departing cybersecurity staffer may threaten his or her former company's information assets, since he or she understands the company's IT systems well. Even worse, if that person was an initial member who built the company's IT governance frameworks and was involved in the company's core decision making, then such a former senior staffer knows the security loopholes of the former company. In some cases, these former cybersecurity staffers may go to work for their former employer's competitors; in that scenario, with a good understanding of the former company's existing weaknesses and loopholes, they may use those loopholes against it.
On the other hand, some people strongly agree that they are happy as cybersecurity professionals, and many of them believe that professional ethics will keep them from revealing their former employers' loopholes.
Source: http://www.infosecurity-magazine.com/news/high-cybersecurity-staff-turnover/
-
PwC: Security is No Longer an IT Cost Center
Many organizations no longer view cybersecurity as a barrier to change, nor as an IT cost. That's the word from the Global State of Information Security Survey 2017 from PwC US, which found that there is a distinct shift in how organizations view cybersecurity, with forward-thinking organizations understanding that an investment in cybersecurity and privacy solutions can facilitate business growth and foster innovation. According to the survey, 59% of respondents said they have increased cybersecurity spending as a result of digitization of their business ecosystem. Survey results also found that as trust in cloud models deepens, organizations are running more sensitive business functions on the cloud. Additionally, approximately one-third of organizations were found to entrust finance and operations to cloud providers, reflecting the growing trust in cloud models.
resource: http://www.infosecurity-magazine.com/news/pwc-security-is-no-longer-an-it/
-
Hi Andres,
I agree. I like to hope that users will see the "Secret" conversations and ask, "How come this isn't the standard?". However, I think the majority of users won't even use this function, while the rest will likely just think of it as a way to delete a message after a certain time without understanding the real premise. While I may be pessimistic, I really hope this is a step in the right direction.
-
Synopsis of "2016 Emerging Cyber Threats Report" from the Georgia Tech Institute for Information Security and Privacy.
This report came out of the security summit in 2015. It speaks of cyber threats in broader terms and addresses these four areas:
Consumers continue to lose their privacy as companies seek to collect more data:
As consumers become more mobile and dependent on technology in their everyday lives, companies are taking advantage of big data collection to improve operations and lead generation, posing a significant risk to privacy. There are limited options for technologies that do not collect data, and unfortunately, consumers are giving up a lot of their privacy for convenience.
Growth of internet-connected devices creating a larger attack surface:
As more devices get connected to the internet, hackers are looking for vulnerabilities to exploit. Devices, sensors, cars, industrial control systems, and devices from just about every industry are being added to the Internet of Things, which is also adding more entry points for attacks. The challenge and still-growing concern is that these devices do not have security built in, and there is no single solution for securing all devices in the IoT.
Growth of the digital economy and the lack of security professionals:
The influx of technology creates a high demand for security professionals to help protect organizations from attacks. According to research conducted by Frost & Sullivan and the International Information Systems Security Certification Consortium (ISC)2, the worldwide shortfall of security professionals will be 1.5 million workers by 2020.
Information theft and espionage show no signs of abating:
Cyber-criminals that are not just financially motivated have become commonplace. Attacks are becoming more sophisticated, and nations along with private organizations are at risk from cyber attacks.
To read the report: http://www.iisp.gatech.edu/sites/default/files/documents/2016_georgiatech_cyberthreatsreport_onlinescroll.pdf
-
The article I read is about how many of the recent major breaches have something in common… In all of the major cyber security breaches, the path of attack has been the common password, because hackers know that the password is the weakest link in cyber security today. There are a number of reasons passwords are failing, including the reuse of passwords across accounts (e.g., Facebook and work email). The article stated the need to make our password problem a national priority and to come up with something better. We apparently need to leverage and develop the next generation of authentication technologies to authenticate identities in a way that is stronger than passwords yet not too inconvenient for users.
“This innovation is being spurred by the near-ubiquity of mobile devices that contain biometric sensors and embedded security hardware, creating new ways to deliver strong authentication – in many ways, with models that are both more secure and easier for the end-user, relative to “first generation” authentication technologies.”
-
Tech Support Scams Put UK Users at Risk
A warning has been issued about tech support scams aimed at UK users. Security firm ESET revealed data claiming that the UK's share of HTML/FakeAlert malware rose to over 10% over the past month.
HTML/FakeAlert refers to the malware typically used in tech support scams. It flashes up fake alert messages relating to supposed malware infection or other technical issues with the victim’s machine. The victim is then typically urged to contact a fake tech support phone line which could be a premium rate number, or else download and install a fake security tool which is actually additional malware.
It is recommended that the users mitigate the risk of support scams like this by keeping machines patched, up-to-date and protected with reputable security. Users should remain vigilant and should not trust unsolicited calls purporting to come from major IT companies like Microsoft. Users must get in touch with tech support via the official channels—a phone number or email contact on a vendor’s website, the firm added.
Microsoft claimed last year that such scams had cost more than three million victims over $1.5 billion. The company says that it has received more than 175,000 complaints about these scams over an 18-month period.
http://www.infosecurity-magazine.com/news/tech-support-scams-put-uk-users-at/
-
Turkey blocks Google, Microsoft and Dropbox to control the data leaks.
As a result of the release of 17GB worth of leaked government emails, Turkey blocked access to Google, Microsoft and Dropbox services to suppress mass email leaks. The nation-wide censorship attempt was launched on 8th October.
Analysis revealed that Google Drive and Dropbox services were issuing SSL errors, indicating that traffic was being intercepted at a national or ISP level. Around 57,623 emails from the Turkish government, dating as far back as 2000, were leaked. The hackers had threatened to leak the stolen data if the Turkish government failed to set free a number of leftist dissidents. Instead of complying with these demands, the government chose to ban news outlets and forced Twitter to suspend accounts circulating the leak.
Blocking sites this way has been a common approach for the Turkish government. Since this is not the first attack, I think the government should start working on preventive controls to avoid such circumstances.
-
Insurer Warns of Drone Hacking Threat
The increasing number of drones, so-called unmanned aircraft systems (UAS), being used in the military and in business could present a major physical cybersecurity threat, potentially even resulting in loss of life.
There are attendant risks, notably the prospect of hackers taking remote control of a drone, "causing a crash in the air or on the ground resulting in material damage and loss of life." The hacking term "spoofing" refers to taking over a UAS by hacking the radio signal and sending commands to the aircraft from another control station. There is also a risk of data loss from the UAS if a hacker manages to intercept the signal, or hacks the company gathering the data. Even though drone companies claim that drone owners can be identified online, it is still a threat.
Source:
http://www.infosecurity-magazine.com/news/insurer-warns-of-drone-hacking/ -
iOS 10’s Safari Doesn’t Keep Private Browsing Private
The Safari browser in iOS 10 no longer offers the same level of privacy as before. Previously, the Suspend State was stored in a manner that would prevent information recovery, but iOS 10 changes that. In iOS 10, the Suspend State is designed to create a list within the web browser to allow easy switching back and forward between recently accessed pages in the currently opened tabs. It is stored in a database, thus allowing for the recovery of deleted records, and researchers have already proved this experimentally.
This change makes web browsing much faster when the user decides to go backwards or forwards to recently accessed pages; it seems that Apple chose user experience over user privacy.
Source: http://www.securityweek.com/ios-10s-safari-doesn%E2%80%99t-keep-private-browsing-private
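The recoverability point above can be demonstrated in miniature: rows deleted from a SQLite database (the kind of store the Suspend State reportedly uses) are not necessarily erased from the underlying file. A minimal sketch, using an invented table and URL rather than Safari's actual schema:

```python
# Sketch only: invented table/column names, not Safari's real Suspend State schema.
# Demonstrates why "deleted" rows in a SQLite file can remain forensically recoverable.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "suspend_state.db")
con = sqlite3.connect(path)
con.execute("PRAGMA secure_delete = OFF")   # common default: freed space keeps old bytes
con.execute("CREATE TABLE tabs (url TEXT)")
con.execute("INSERT INTO tabs VALUES ('https://private.example/secret-page')")
con.commit()

con.execute("DELETE FROM tabs")             # the record is gone from query results...
con.commit()
con.close()

with open(path, "rb") as f:
    raw = f.read()
# ...but its bytes typically survive in the file's free space until overwritten
print(b"secret-page" in raw)
```

With `secure_delete` off, the DELETE only marks the row's space in the page as free, so a raw scan of the file still finds the URL text; this is essentially what forensic tools exploit when recovering "deleted" browsing records.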
-
The article I found this week involved the possibility of someone hacking a diabetic patient's insulin pump.
Ethical hackers have found that J&J's Animas OneTouch Ping insulin pump, which allows patients to push a button to inject the proper dose of insulin, can be hacked because the communication from the remote to the device isn't encrypted. The flaw would allow a hacker to inject insulin into the patient multiple times. Scary.
J&J has warned customers and offered a fix for the problem. The group also said the risk of the system being hacked is extremely low, but this is a vulnerability that must be fixed. The science behind it is great, especially for the elderly who may have problems with a syringe, but you would think simple encryption would be a no-brainer. Crazy how J&J didn't think about this during development.
http://www.reuters.com/article/us-johnson-johnson-cyber-insulin-pumps-e-idUSKCN12411L
-
Card Data Stolen from eCommerce Sites Using Web Malware.
RiskIQ, a cloud-based security solutions provider, has been monitoring a campaign in which cybercriminals compromise many ecommerce websites in an effort to steal payment card data and other sensitive information provided by their customers. The method of attack, called "Magecart", has threat actors injecting keyloggers and URLs directly into a website. RiskIQ identified more than 100 online shops from around the world hacked as part of the Magecart campaign.
JavaScript code injected by the hackers into these websites captures information entered by users into purchase forms by acting as a man-in-the-middle (MitM) between the victim and the checkout page. In some cases, the malware adds bogus form fields to the page in an effort to trick victims into handing over even more information. The harvested data is exfiltrated over HTTPS to a server controlled by the attacker.
By loading the keylogger from an external source instead of injecting it directly into the compromised website, attackers can easily update the malware without the need to re-infect the site.
http://www.securityweek.com/card-data-stolen-ecommerce-sites-using-web-malware
-
UK BANS APPLE WATCHES IN CABINET MEETINGS
The news I read talks about how Apple Watches have been banned from government cabinet meetings in the UK. There is a concern that Russian spies could utilize the Apple Watch as a listening tool.
Russia has chosen hacking as a way to gather intelligence and play a role in government activity.
Prime Minister Theresa May imposed the new rules following several high-profile hacks that have been blamed on Russia. The fact that several cabinet ministers previously wore the Apple Watch raised concerns because "the Russians are trying to hack everything." Mobile phones have already been banned due to similar concerns.
I believe this is a good preventive control to mitigate the risk. The reason I think it makes sense to ban Apple Watches during cabinet meetings is that these meetings are confidential, and nobody wants sensitive information leaked. An Apple Watch is like a mini computer. Once it is hacked, it can be programmed to do whatever the attacker wants. It could record all the audio offline, and the next time it connects to the internet, the audio would be uploaded to the attacker's server.
Source: http://www.infosecurity-magazine.com/news/uk-bans-apple-watches-in-cabinet/
-
Paul,
Thanks for sharing, I didn't know about the "Secret Conversation" feature. However, I don't think social media is a safe platform for sharing important information.
-
Attacks on iCloud accounts, especially those of celebrities, have been on the rise. Hackers confess it is an easy hack that starts with finding the email address behind the iCloud account. Hackers pick a target to exploit, track down their likely email accounts, and then use Apple's account creation page to test which address is in use: when creating a new account, if the email entered is already taken, a message confirms that it is unavailable. If they get a message saying the email listed cannot be used because it is already in use, they are one step closer to hacking it. After this they attempt to crack the password or guess the user's details to answer security questions. The first step is to enter the victim's birth date, which is commonly available on social networking sites. Answering security questions like "What is my pet's name?", "Where were you on Jan 1st, 2010?", or "Who was your favorite teacher?" is a matter of social engineering.
To counter this, Apple must modify the sign-up process and forgot-password mechanism to detect hackers while they are attempting to guess iCloud accounts.
http://www.businessinsider.com/how-hackers-get-into-your-apple-iCloud-account-2014-9
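The weakness described above boils down to the sign-up page acting as an "account existence oracle". A hypothetical sketch (invented function names and responses, not Apple's actual code) contrasting a leaky check with the usual fix of a uniform response:

```python
# Hypothetical illustration, not Apple's code: why a sign-up page that says
# "already in use" doubles as an account-enumeration oracle, and the usual fix.
REGISTERED = {"victim@example.com"}   # stand-in for the real account database

def leaky_signup(email: str) -> str:
    # Distinct responses let an attacker test which addresses have accounts.
    if email in REGISTERED:
        return "This email address is already in use."
    return "Account created. Check your inbox."

def uniform_signup(email: str) -> str:
    # Identical outward response either way; the real outcome is delivered
    # out-of-band, by mailing the address itself.
    return "Check your inbox to continue registration."

# The leaky version's responses differ, so account existence leaks:
assert leaky_signup("victim@example.com") != leaky_signup("new@example.com")
# The uniform version reveals nothing to the person filling in the form:
assert uniform_signup("victim@example.com") == uniform_signup("new@example.com")
```

The same idea applies to the forgot-password flow: respond identically whether or not the account exists, and rate-limit repeated guesses.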
-
The bad news for Mac users!
Malware targeting webcams and microphones is now targeting Mac laptops. This Mac malware taps into the live feeds from the Mac's built-in webcam and microphone to record you locally, without detection.
Attackers use a malicious app that monitors the system for any outgoing feed from an existing webcam session, such as a Skype or FaceTime call.
The malware then piggybacks on the victim's webcam or microphone to secretly record both the audio and video session, without detection.
You should physically cover your webcam! -
“UK Bans Apple Watches in Cabinet Meetings” by Tara Seals, Infosecurity Magazine
The news I read talked about how, in the UK, Apple Watches have been banned from government cabinet meetings because of concerns that they could be used as listening tools by Russian spies. Many sources claimed that smart watches have become a major hacking concern, and one said "the Russians are trying to hack everything." People said the intelligence community had more conviction in the presence of "Weapons of Mass Destruction" in pre-invasion Iraq than they have in the clear attribution of who is really behind these cyber-attacks.
I think Apple has to respond to this, because it could influence not only the UK market but the entire global market, and Apple should update the Apple Watch's system to a secured one. However, iPhones are also portable devices and could likewise be used as listening tools by hackers. Maybe Apple should build the same security system for both iPhones and Apple Watches.
http://www.infosecurity-magazine.com/news/uk-bans-apple-watches-in-cabinet/
-
What Makes a Good Security Awareness Officer?
Sharing an article I found interesting about how communication skills matter alongside technical skills.
Communication is one of the most important soft skills that a security awareness officer will need. Time and time again, it's been seen that people with the strongest communication skills develop outstanding awareness programs. The best awareness officers often have little to no security background, having instead worked in communications, marketing, public relations, or sales.
In contrast, the 2016 Security Awareness Report identified that over 80 percent of people involved in security awareness have technical backgrounds.
http://er.educause.edu/blogs/2016/10/what-makes-a-good-security-awareness-officer
-
Military Cyber Command of South Korea Suffers Embarrassing Hack
South Korea's military cyber command center was hacked last month, when officials discovered malicious code in its system. Officials are not clear how the malicious code entered the system, but think the target was a "vaccine routing server" used by the country's cyber command.
Kim Jin-pyo, a member of the parliament's national defense committee, stated that the probability of sensitive data being leaked or stolen is low because the targeted server was not connected to the military intranet.
North Korea is suspected of the attack, but investigators are focused on establishing the facts and will not officially blame anybody until the investigation has been completed.
Fortunately, the attackers didn't steal any data from the server, which has been secluded from the rest of the network, and the military's internet network did not experience any downtime due to the breach.
The server's task is to secure the computers the military uses for internet connections; approximately 20,000 military computers are believed to be connected to it. Officials are trying to find out how the malicious code entered the system.
-
“Government lawyers don’t understand the Internet. That’s a problem”
The article discusses the dearth of lawyers with a science or technical background, and the effect this is having on prosecutions and the legal profession. It first chronicles a physics professor who was arrested for espionage and accused of working for China. Eventually the charges were dropped, after it was revealed that prosecutors did not understand the actual contents of the material in question. He was simply collaborating with a colleague in China, but the Justice Department assumed it concerned sensitive research when it didn't. Very few lawyers have an understanding of cybersecurity or any science, which makes prosecuting cases more difficult and leads to mistakes. More and more prosecutions, as well as civil lawsuits, involve technical information central to the issues of the case. As technology and science progress at faster rates, lawyers will have more trouble properly litigating and prosecuting cases.
-
White House Vows ‘Proportional’ Response for Russian DNC Hack
The precursor to this story is that the Democratic National Committee's emails, as well as those of other organizations, were hacked and leaked by unknown sources. The files were posted by WikiLeaks, DCLeaks.com, and Guccifer 2.0, who may also have been one of the hackers. The U.S. intelligence community stated that it was highly certain the hacks were orchestrated by high-level Russian officials. White House press secretary Josh Earnest told the press that Obama will take a proportional response to the hacking. "Proportional" isn't very well defined in this case (the DNC doesn't have a Russian wing to hack back against). Obama still has several options at his disposal. More economic sanctions could be imposed, but they may hurt other countries that trade with Russia. There could be a diplomatic approach, but that jeopardizes the situation in Syria, where the two sides still aren't on the same page. Obama could try to prosecute the hackers themselves, but as seen with Snowden, we cannot extradite from Russia to try them. The response could be to have our own hackers go after Russian officials or elections. As with anything proportional, any move could cause continuing escalation, as two sides rarely see attacks as equal.
http://www.wsj.com/articles/white-house-vows-proportional-response-for-russian-dnc-hack-1476220192
-
Physically covering the webcam doesn't stop the microphone recording, which often will have juicier details. Even if you have a Mac, you need to run antivirus and frequent scans. The article also mentions a third-party tool that monitors which programs try to access the webcam or mic. If you suspect you have an issue, don't start FaceTime or any other VoIP calls; since the malware piggybacks on existing sessions, it can't access the camera or mic unless you're also using those features.
-
Laly – this is very concerning. I work in a "closed area" and am able to bring my laptop into the area (most times). With that said, my work computer has a webcam. You will see many employees put a sticky note or some kind of covering over their webcam. In fact, I have done that as well. I would guess that most people are doing this because of the recent news that you reported. Pretty crazy, but definitely not a surprise.
-
Brou – I bet with a situation like this, the US agency came in and took control of the monitoring. The security team has their own role and they need to continue to improve their work. Another high volume task like this would not help with the roles that are already assigned to the security team. Considering the team had a terrible breach in recent history, I think it is probably wise to let the agency that is forcing this, monitor the emails themselves. Also, if the security team is monitoring the emails, many employees would have to get a government clearance that many of the employees probably do not already have.
-