-
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 6 years, 10 months ago
D14.1: Discussion Topic 1:
With regard to laws and regulations… Complying with the law is obviously important, but in my industry (healthcare), sometimes this is a gray area. In my professional field, HIPAA re […] -
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 6 years, 10 months ago
Discuss one of the following topics:
In this unit, we looked at the categories of network security software and devices. However, in the market, many of these have converged… the line between a firewall […]
-
This is a good explanation Donald. I liked your example of separating spam traffic from genuine traffic. Another example of where you may want to deny traffic without a response is when traffic is suspected to be generated by a botnet attack (e.g. denial of service or brute-force credential attack). In those instances, I would want to discard the traffic and give no indication of whether the packet was received. Sending a response to these connections could result in additional network traffic and could give the attacker useful information about the environment.
-
In this unit, we looked at the categories of network security software and devices. However, in the market, many of these have converged… the line between a firewall and a router is much less defined, especially in low to mid-range devices. Is this a good thing or a bad thing? What are the consequences of this convergence?
I think it is ultimately a good thing. The majority of consumers do not even know the difference between a router and a firewall. If the two were completely separate, a firewall might repeatedly never get installed when an unknowing person ended up purchasing a low or mid-range device.
The biggest issue with this, however, might be the inability to configure the firewall to one's liking. The firewall that exists in one of these devices might be pre-configured with existing vulnerabilities. A bad actor might figure this out and exploit a number of devices owned by people or companies they personally know.
-
There are two ways to prevent a packet from reaching a destination address: the packet can either be dropped or rejected. The difference lies in the response the sender gets back. In the case of a rejection, the sender receives a response from where the packet was stopped stating "Destination Unreachable". This is basically a friendly response that lets users know the host they are trying to reach is there, but it is not taking packets from them for whatever reason. When an application receives a rejection response, it ends its connection attempts. In the case of dropping a packet, the receiver intends to completely forbid any communication with that sender. Here, the application sending packets does not get a response back ("Host Unreachable"), so it keeps trying to reestablish the connection. Dropping packets is beneficial for preventing malicious users from getting access to any information about a system.
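To make the difference concrete from the sender's side, here is a minimal Python sketch (the target address and port are hypothetical, using a TEST-NET address): a rejected connection fails fast with a refusal, while a dropped one simply times out.

```python
import socket

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    """Classify how a TCP connection attempt ends."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open: connection accepted"
    except ConnectionRefusedError:
        # A reply came back (TCP RST / ICMP unreachable): the Reject case.
        return "rejected: host answered but refused the connection"
    except socket.timeout:
        # No reply at all: the Drop case, so the sender keeps waiting.
        return "dropped: no response before the timeout"
    finally:
        s.close()

print(probe("192.0.2.10", 8080))  # hypothetical firewalled host
```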
-
Traffic on a system can be managed by either rejecting or denying. Both options are used widely to filter traffic from clients to a system or server. Reject is used when the target host wants to refuse packets received from a source by sending an ICMP Unreachable message. The purpose of Reject is to tell the source that the system is active and that a firewall is being used to refuse the packets it sent. Deny, on the other hand, is used when one wants to completely discard a client's traffic to a target system. Such packets are dropped or discarded, and no reply is sent to the source host. One might use it to separate spam traffic from legitimate traffic. An organization might not want to block traffic outright, but because the system needs to prevent a security threat, it is configured to reject certain traffic.
-
In the presentation, we see that there are two actions when not passing traffic… We can reject or deny. What is the difference between these? When might you use one or the other?
Reject and deny both result in a closed connection or a gap in connectivity. Reject may mean that the data is corrupt, or that a server is up but a connection to a port cannot be made; reject can also mean the message header or message type is wrong and the message should be dropped. Deny, by contrast, means that the connection is not trusted and not allowed, and is therefore denied.
As stated, reject is based upon policy, rule, and configuration. Denial occurs because there may be a security breach, so connectivity is denied.
-
-
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 6 years, 10 months ago
Discuss one of the following 3 topics:
What is buffer bloat, and what does it have to do with TCP?
We learn in this unit that TCP has a lot of features that allow reliable communication on unreliable […]-
2. We learn in this unit that TCP has a lot of features that allow reliable communication on unreliable networks (like the Internet). However, UDP does not have these features… why do you suppose we need a protocol like UDP, and what are some uses for UDP where reliability may not be as important? What do we gain when we sacrifice TCP’s reliability for UDP?
UDP is used for connections that do not need the reliability of TCP, where every packet is confirmed to have been received. Applications that need speed and don't mind if packets are dropped here and there without being resent, such as VoIP, would use UDP, as it is a lot faster than TCP. For connections requiring an acknowledgement that every packet arrived, TCP would be the protocol of choice.
-
According to Wikipedia, buffer bloat is high latency in packet-switched networks caused by excess buffering of packets; it causes jitter and reduces overall throughput. TCP plays a role in buffer bloat because TCP will usually adjust itself to match the available bandwidth, but once the buffer is full, traffic backs up and packets start to drop. Buffer bloat also interferes with other network protocols such as UDP, making things like VoIP and gaming slow due to the latency.
-
Hi Neil,
Your answer to the question was quite interesting. I see TCP more as a regulator of traffic than a security-prevention protocol. It ultimately ensures that bandwidth requirements are met and that the speed of the network is utilized to its full potential. However, I feel that a security layer at TCP is essential so that a buffer bloat situation can be dealt with carefully. In organizations where traffic comes from different networks and locations, it is important to know which dropped packets were crucial and which were spam. -
I see your point Donald; I overlooked the fact that TCP is a regulator of traffic and drops packets. It is essential for an organization that monitors network traffic to check whether packets that came through were supposed to, whether they were reported as spam, and vice versa.
-
-
-
The biggest difference between TCP and UDP is that UDP does not wait for a response from the receiver confirming that packets were received, or that they were received in the correct order. TCP segments the packets so that the receiver has a short window to verify that it received all packets in the proper order. UDP would not be the protocol of choice for sending bank account information to someone or, generally, in any area where the completeness and order of the data exchange are highly important. UDP is, though, good for less sensitive applications such as ping or streaming music over the internet. Applications that use UDP gain speed over TCP applications: an application that uses UDP does not have the overhead of checking for correctness. It just gets the data and sends it up the application stack.
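A minimal sketch of that trade-off in Python (the address and port are hypothetical): the UDP datagram goes out with no handshake, acknowledgement, or retransmission, which is exactly where the speed gain comes from, while TCP must connect first and tracks every segment.

```python
import socket

# UDP: fire and forget -- no handshake, no ACKs, no ordering guarantees.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"voice-frame-0042", ("198.51.100.7", 5004))  # returns immediately
udp.close()

# TCP: a three-way handshake happens before any data moves, and every
# segment is acknowledged and reordered behind the scenes.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(3.0)
try:
    tcp.connect(("198.51.100.7", 5004))
    tcp.sendall(b"must-arrive-complete-and-in-order")
except OSError:
    pass  # the hypothetical host is unreachable; the UDP send never noticed
finally:
    tcp.close()
```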
-
Buffer bloat is high latency that occurs when there is heavy traffic on your network. The best way to test whether your network has buffer bloat is by using the DSLReports Speed Test or other tests for buffer bloat. If one of these tests shows that your router is letting bulk traffic (such as gaming, Skype, FaceTime, etc.) fill its queues, twiddling with QoS might help. In many cases, a faster internet connection probably won't help at all.
TCP gets involved because buffer bloat causes long delays during any kind of network congestion. Another reason TCP is tied up with buffer bloat is that it has so many features to manage reliable communications over unreliable networks.
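As a back-of-the-envelope illustration (the numbers here are made up), the extra delay an oversized, full buffer adds is simply its size divided by the link rate:

```python
# Rough bufferbloat arithmetic with hypothetical numbers.
buffer_bytes = 4 * 1024 * 1024        # a 4 MB queue in the router
link_bytes_per_s = 10_000_000 / 8     # a 10 Mbit/s uplink, in bytes/second

added_delay = buffer_bytes / link_bytes_per_s
print(f"A full buffer adds about {added_delay:.2f} s of latency")  # ~3.36 s
```

That is why a faster connection often barely helps: TCP will fill whatever buffer it is given, and only smaller buffers or active queue management (QoS) reduce the delay.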
-
3. I think if a message comes in a known format, the firewall can be adjusted to only look for that segment header. For instance, I work with HL7 messages, a healthcare standard, and patient information always starts with an "MSH" segment, which is the header. These are allowed to pass; otherwise, the firewall restricts the data. I know that in our security department the firewalls also look at the source IP/port and destination IP/port headers, and at whether the length is too long or the data is junk/too short.
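A minimal sketch of that kind of header check in Python (the size limit and sample messages are hypothetical; a real firewall or interface engine does far more):

```python
MAX_LEN = 64 * 1024  # hypothetical cap to reject junk or oversized data

def looks_like_hl7(message: bytes) -> bool:
    """Allow only messages that begin with an HL7 v2 MSH header segment."""
    if not message or len(message) > MAX_LEN:
        return False  # empty, junk, or too-long data is restricted
    # MSH is immediately followed by the field separator, conventionally '|'.
    return message.startswith(b"MSH|")

print(looks_like_hl7(b"MSH|^~\\&|SENDER|FAC|RECEIVER|FAC|20171101||ADT^A01|"))  # True
print(looks_like_hl7(b"GET / HTTP/1.1"))  # False: not an HL7 message
```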
-
-
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 6 years, 11 months ago
In this unit, we begin to discuss some security tools, such as password crackers, disassemblers, packet sniffers, etc. We will discuss many of these tools in the next section of the course, which covers […]
-
It really depends on the job of the individual. I don't see why an Application Developer or a Database Administrator would require access to these tools, so they should be banned for them. However, if you are part of the pen testing or cyber security team, you would most likely need access to one of these tools to do your job. In general, these tools should be banned unless clearance has been provided and there is a business objective that requires them.
Non-IT employees should never have access to these tools, at all. There is simply no reason why they should. What does an Accountant or Business Analyst need these tools for? While certain jobs within IT might need the ability to use some of the features the tools provide, non-IT professionals wouldn't and shouldn't ever need them.
-
You're absolutely right Ahmed – those in the company handling the cyber security roles should have access to the tools. I can only see a software developer having access if the software was made in-house or is a combo of off-the-shelf and in-house development. But if they had nothing to do with it, they shouldn't touch it. Non-IT employees shouldn't be exposed to the tools.
-
Ahmed, I agree with your comment that this should be based on job function. Another key requirement that we learn about in Ethical Hacking and Penetration Testing is the requirement for written permission. Even those that are trained and experienced with using these tools (e.g. packet sniffers and password crackers) should formally obtain written permission on a clearly defined scope and objective before using these tools, especially in a production environment. If used inappropriately, these tools could result in disruption of IT systems or inappropriate disclosure of information.
-
I agree with that. By doing this, we know who uses what, and we have records. When bad things happen, we know what's going on.
-
-
Ahmed,
These tools are tested and certified, so using them won't be a big issue. However, not every IT employee should be able to use these tools, nor should employees who do not belong to the IT department. In other words, restrictions on using these tools should be applied depending on people's roles, after a discussion.
-
-
The decision to allow certain penetration and vulnerability scan tools should be properly discussed prior to deployment, and each tool should be assigned a utility owner. The utility owner would be the only authorized administrator, who would assign other users.
I believe the decision to allow these tools is based on the job description of the individual. In my experience, a technology professional will pitch the business case for an application to the C-level executives. If the business calls for specific tools to mitigate the risks from a high-level threat, it may be a good idea to have these tools available to those who are authorized to use them.
-
Hi Fred,
I absolutely agree with you that security tool implementation or withdrawal needs to be discussed with employees and those at the user level before making a decision. Most organizations try to force decisions or changes on employees without consent. Nevertheless, as you mentioned, as long as the situation demands it and specific tools are required to mitigate certain risks, both IT and non-IT employees should be required to have adequate knowledge about their use and implementation.
-
-
Personally, I would find it rather worrisome should non-IT employees have access to these security tools in a workplace. Security tools such as password crackers, disassemblers, packet sniffers, etc., should be part of the arsenal of a cyber-security professional, and their use should be approved by management prior to adoption in the workplace. Overall, organization policies should trump any business reasons for these tools to be used in the environment, to avoid any subsequent abuse even in the hands of trusted IT professionals whose job descriptions approve the use of such tools.
-
Security Tools:
I do believe that in many cases companies define the tools that are supposed to be used for monitoring and maintaining the security level of IT resources. It has a lot to do with whether the job role demands the use of these tools. In my opinion, organizations should use anything that protects IT resources, including such testing tools. The biggest question at this point is: how well do these tools reflect the organization's security?
In many cases, especially if these tools are tested and certified, using them won't be a big issue. However, not every IT employee should be able to use these tools, nor should employees who do not belong to the IT department. In other words, restrictions on using these tools should be applied depending on people's roles, after a discussion.
-
Donald,
Organizations should define the tools that are supposed to be used for monitoring and maintaining the security level of IT resources. It has a lot to do with whether the job role demands the use of these tools. -
Personally, I would not be comfortable with my organization using password crackers in the environment. I think a good motto for anyone in IT security is, “Don’t trust anyone”. Even if there are policies in place that restrict their use to only certain areas or somehow require admins to avoid cracking employee passwords, I would still not be comfortable. For packet sniffers, I would not be as concerned as I would be with password crackers. I’m sure somewhere in my employer’s network, packet sniffers are employed. The justification here is that encrypted data within the packet is still encrypted upon inspection.
-
Non-IT employees should have no access to these tools. I am a developer, not in security, but I have actually worked with vendors to address connectivity issues. I was able to download Wireshark and packet sniffer tools. I work with vendors who have VPN accounts, and we send data via HTTPS, TCP/IP, and FTP. Having these tools at my disposal lets us know which side is rejecting a connection. I do not feel everyone in IT should have use of these tools, but certain members could benefit from them.
-
-
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 6 years, 11 months ago
In this unit, we discussed the growing trend of BYOD (Bring your own device) and some of the challenges associated with this. There has been some talk in the news in the past concerning users, their own de […]
-
Fraser,
Here is some information that may help you regarding "HIPAA… doesn't have a standard".
This link will show you NIST guidelines for the hardware:
http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-124r1.pdf
This link will show you HIPAA regulations for mobile applications:
https://csrc.nist.gov/Presentations/2013/HIPAA-2013-HIPAA-Requirements-and-Mobile-Apps
They are recommended policy, but this is what HIPAA regulators will look for if the organization were to be breached, or during a random audit check. The results could lead to fines, penalties and/or other punishments.
-
BYOD is not necessarily nightmarish with regard to HIPAA policy compliance for protecting patients' data. With the use of encryption and sandboxing, I believe covered entities and healthcare providers should be able to enjoy the same level of risk protection if the same data procedures are applied on mobile devices.
-
Fred,
Bring Your Own Device (BYOD) can be a real conflict with security. In many cases, people don't realize that using their personal devices for work can lead to many security issues. Most companies require their employees to connect to IT resources using work devices, since most of them use a VPN as a secured channel. -
Fraser,
In many cases, people use their own personal computers and devices to access websites, which puts those devices at risk of becoming infected with viruses and worms. On the other hand, personal devices often have very weak security, which can leave them full of all kinds of malware. Personal, and often sensitive, information can also be at risk, especially if the employer wants to have access to the device. -
Bring your own device is challenging in a workplace. One of the hard parts is if you have Wi-Fi: how secure is it? Is there a standard account for employees and a guest account/password? I know companies block apps from being used on their Wi-Fi; my company bans the use of many dating or personal sites. There are device management tools, as stated earlier, that can wipe a device if it is lost. I feel that in healthcare, a clinician can NOT use a personal device and must use a work-issued device. If I use my work Wi-Fi with my personal device, my activities can be traced, but I don't access patient data. Clinicians like to use their own devices, yet IT needs them encrypted and kept away from patient data. There are always security threats from viruses on personal devices getting onto the network.
-
-
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 6 years, 11 months ago
Take a look at this document from the Centers for Disease Control, which provides a plan for business to prepare for an influenza pandemic: […]
-
A pandemic such as the flu, chicken pox, or smallpox is a threat to IT security because if a few personnel get sick and a few key people are out, information systems can be left unprotected if they are not monitored or maintained for a few days or weeks. A smallpox outbreak is very rare, but flu has a season and can strike at any time, so being able to back up systems and personnel is key, so that IT security isn't put at risk by pandemics. Always be prepared for the flu and be less concerned with smallpox, since it is not a major concern at the moment, but expect the unexpected.
-
Hi Neil,
I agree with what you said: pandemics such as flu and smallpox are definitely a threat that organizations need to be concerned about. In fact, smallpox and flu pandemics rarely happen; I was reading the article and it says the flu rate in the United States has dropped drastically. All organizations need is backup plans for their key personnel so that in times of a pandemic of any nature, their business operations don't suffer and they have buffers. The level of concern for such pandemics also depends on the nature of the business. For example, a mission-critical business such as support can take a bit of an impact, but again, that is easily solvable.
-
-
I do think it is important to be prepared to respond to a pandemic event; however, with any incident, it is important to consider the likelihood of occurrence and the impact to the organization and to the IT department. While it seems unlikely, it is important to prepare for an event that could make your entire staff unavailable in a specific geographic region. This could include a pandemic event, a terrorist attack, a natural disaster, etc.
Some organizations have multiple geographic locations and headquarters and may plan to recover at alternative facilities with existing personnel. Others may need to plan to quickly recover at a vacant location and train all-new resources. These events are important to prepare for; however, you should always take a risk-based approach to ensure the cost of your contingency program does not have a significant impact on the productivity and profitability of primary operations.
-
Well put Jason. It really depends on the organization and where the risks are. If you have an organization that has many different sites throughout the globe, maybe departments will create a contingency plan at a local level for disease in their geographic areas. If, for example, the company only has 1 or 2 sites that are located in a high-risk pandemic area, they would have a full-blown contingency plan in case many people get sick. In the end, the company will need to conduct a cost-benefit analysis to determine whether or not the risk is high enough to invest in a contingency.
-
Ahmed,
It is important to prepare for such events by creating a good plan to replace those important people, or by creating a system that keeps the organization from depending on a few employees to process tasks.
-
-
-
Basically, as others stated here, as more and more people get sick, there is a greater risk that people who have key roles in the company will fall ill, which will impact others because no one else will know how to perform those roles. As more and more people get sick, it can potentially grind IT operations to a complete halt. To help prepare for cases like this, it is better for people in primary roles to train a backup, just in case.
Other threats to be more concerned about are any potential threats to current power grids.
As those who remember the Die Hard movie will recall, where hackers broke into the national power grid, that could be a very real threat and concern, as doing just that could bring our entire country to a halt.
That is just one of the major things that could be impacted, but any other major resource grid could be hacked as well, causing major damage. -
This question is kind of unorganized; I mean, what category does a pandemic belong to? Typically, we make several big categories, and under those there are several subcategories. If a company is big enough, it will have dedicated people calculating how likely each threat is to happen, and it will put more resources on those. A pandemic is less likely to happen compared to other, more direct threats.
-
Fred,
I think one way to prevent pandemic events is by not choosing locations with a high risk of pandemic events when you establish the organization. Documents should be distributed to employees that explain, step by step, what should be done in case of such an event. This can be a type of education for employees on how to deal with these events. -
I think what you described is the main concern from a business perspective, and the primary measure to combat this threat is to have documented procedures in place for key functions. This, along with cross-training, can help ensure business continuity when disastrous external events take place. At the same time, however, cross-training can be a double-edged sword when it comes to SoD (segregation of duties). The potential issue of one individual knowing all parts of a system is less likely in large organizations, but it is something to be considered and balanced in smaller ones.
-
Despite the low chances of a pandemic spreading to the staff, any company, not just the IT staff, needs to be somewhat proactive. Most companies require a flu shot now; at my job, upon hiring, they verified that I had gotten shots for chicken pox and smallpox. Many companies, when they hire people from overseas, make sure they have certain shots like polio, hepatitis, TB, etc. I work at a place where all information is confidential. If you want to have a secure IT staff and handle information through proper standards, then I am all for having a proactive approach to pandemic sicknesses.
-
-
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 7 years ago
Research how quantum computing is being used in both enhancing cryptography, and weakening existing cryptography standards, and discuss these with the class. Based on your research, how do you think quantum comp […]
-
Jason, you make some good points, and with quantum computing, algorithms can be calculated far faster than on today's binary computers. There's an algorithm called Shor's algorithm that is deemed able to break RSA because of how it can break a modulus down into its prime factors, and it's meant for quantum computing. But if an RSA value is very long, say a trillion bits (terabytes), it may take even a quantum computer a long time to break, since the rule is: the longer the value, the harder it is to break. So security has to take this into account and figure out how, as you said, to have standards in place so that quantum computing cannot calculate RSA algorithms so quickly.
-
What is understood about quantum cryptography is that it can dramatically reduce the time needed to break a symmetric key algorithm such as AES-128. When this becomes mainstream, a lot of the known standards will become obsolete due to how quickly they can be broken.
In terms of how long it will take for this to become mainstream, I think we will have a long time to prepare, as people are just now breaking into this technology and trying to develop quantum computers for the first time. Based on what I have found, there are some companies currently using this technology for their CA-related activities; however, since it takes a while to gradually progress in this technology, it may take a new generation of quantum computing every 10-20 years or so until the next faster advance. It will be in the news once these machines become faster and more widespread.
-
Fraser,
I do agree with you; these types of computers are already on the market, which means that changes in IT security will be noticed soon. For example, in the field of encryption, as Professor Green said in class, algorithms are based on mathematical calculations that require so many operations that they take a long time in some cases. -
Fred,
Thank you for the explanation. Quantum computing is a new area of computer technology based on the principles of quantum theory, which explains the nature and behavior of energy and matter at the quantum (atomic and subatomic) level. This new generation of computers would increase capability beyond modern supercomputers; they follow the laws of quantum physics to execute multiple tasks using all possible permutations simultaneously. -
Based on my research, quantum cryptography will definitely change the way computers work. Though much of its use is in a theoretical state, much progress has been made lately. Take, for instance, Bitcoin: some Bitcoin miners are already taking advantage of quantum computing to mine coins at a faster pace. As quantum cryptography grows and matures, I envisage a chaotic situation whereby the race for speed might break algorithms. The IT security community must brace up, for this change is yet to come; just as we could not have imagined the common technological possibilities of today, quantum cryptography is upon us and will surely disrupt the way we secure information today.
-
Quantum computing is still some time away from being used extensively, and by extensively I mean used by well-funded organizations and even governments. In this sense, modern cryptography is still a valid means of security. When quantum computing becomes more prevalent, however, what will become of these modern protocols? According to an article from Quanta Magazine, our fears shouldn't be so large. The article cites a paper co-authored by Penn professor Nadia Heninger, which reasons that RSA operations can still be performed faster than the attacks against them, whether run on a classical computer or a quantum computer (running Shor's algorithm). "RSA is not entirely dead even if quantum computers are practical," Heninger said.
https://www.quantamagazine.org/why-quantum-computers-might-not-break-cryptography-20170515/
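A toy illustration of why factoring speed is the whole game (the numbers are absurdly small; real RSA moduli are 2048+ bits): once the factors of n are known, which is exactly what Shor's algorithm would deliver, the private key falls out immediately.

```python
# Toy RSA: in practice p and q are secret primes hundreds of digits long.
p, q = 61, 53
n, e = p * q, 17            # (n, e) is the public key

# Shor's algorithm would hand an attacker p and q. From there:
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)         # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)     # anyone can encrypt with the public key
assert pow(cipher, d, n) == msg  # the attacker now decrypts everything
print(f"n = {n}, recovered private exponent d = {d}")
```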
-
Great post and explanation, Fred. Quantum computing is an offshoot of quantum mechanics. Standardization, organization, and implementation may be decades away. Basically, it is enhancing cryptography in the short term but will be overtaking it in the long term.
-
-
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 7 years ago
Research Kerckhoffs’ Principle, and read the segment in the text titled “Never Trust Proprietary Algorithms”. I think we can all agree that having open protocols is considered critical in cryptography. But wha […]
-
Jason, I like what you said about Kerckhoffs' method. The other side of that is Kerckhoffs' principle that when keeping things secret, you should make sure the secret has low value or can be replaced. Secrets can be exposed, because with cryptography, if someone evil gets one, they may not know what it's for, since keys can be randomized and changed with the correct algorithm(s).
-
As you stated Younes, it goes both ways. If you open up protocols, this gives developers the opportunity to increase security, but by opening them up, you are also inviting more attackers. However, if a company is diligent and tightens things down when developing a proprietary protocol, it will be able to block many attacks. Personally, I believe everything is hackable; this is pretty much a fact. I prefer the open-source method, as it allows the collaboration of all individuals to increase security. This also encourages white hat hackers, who have the potential to discover additional security flaws before the bad actors do.
-
Basically, what Kerckhoffs's principle says is that anything secured via cryptography should remain secure even while everything else about the product is public knowledge. Personally, I don't think proprietary algorithms are a good idea, period, as they are not fully trusted as an industry standard and there is no guarantee that they are hack-proof. I wouldn't trust proprietary cryptography in any part of my business, as there is no proof that it is 100% secure.
-
Kerckhoffs' principle is an ideal that works in theory; however, as we have found in this class and others, cryptographic systems regularly have vulnerabilities. Some of these aren't found for a considerable length of time (see the recent WPA2 break). The issue is philosophical: humankind isn't flawless, so we can never design a perfect cryptographic system.
The debate comes down to this: open source means everybody can see how it works and everybody can find vulnerabilities; you essentially crowd-source vulnerability and bug discovery when the source is open. Proprietary is closed source; nobody knows whether the system is really secure except the vendor who made it. This can be useful, as security through obscurity can be effective (black box versus white box), but it has a few problems, notably: 1) no protection from states/NGOs that have access to source code and vulnerabilities (see NSA, FSB); 2) patching isn't necessarily a priority.
-
It's more about philosophy; it's a lasting fight between open source and closed source. In my opinion, nothing is unbreakable. What matters is how much time it takes to break it, so we expect something to be secure for a certain amount of time. Under my assumption that everything is breakable, it doesn't matter if it is open source or closed source: people will find it and break it. Even for quantum communication, scientists state it's theoretically unbreakable. Who knows?
-
Jason,
Your answer explains it all. Proprietary protocols help protect systems and make them harder to attack; however, they make it hard for developers to add their own touches to increase security. On the other hand, open protocols help develop and increase the security level because they give people the chance to develop better-secured code, but the code can be attacked more easily since everyone knows the source. -
Donald,
I believe that we should demand some room in certain IT protocols to add our own touches to secure our IT architectures. For example, Apple should give us the ability to customize our security depending on our needs; that would not only provide better security, but would also help Apple increase its sales numbers, because more organizations would purchase Apple products. -
In general, I think this principle translates into most areas of technology. With a proprietary protocol or software, only the developers have reviewed the code, and they may be limited when it comes to recognizing weaknesses, as they are “too close” to it. Open protocols tend to be more secure over time as more people can examine them in depth, and more eyes make it much more likely that flaws and vulnerabilities will be discovered and addressed. In short, a wider variety of knowledge and experience (provided by vast communities of developers) applied to protocol and software development makes for not only more secure, but also more innovative and dynamic systems, protocols, and applications in almost any context.
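The principle is easy to demonstrate with an open, heavily reviewed library. A minimal sketch using the `cryptography` package (assuming it is installed via pip): the algorithm is completely public, and all of the security rests in the key, which is cheap to replace if it leaks.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Kerckhoffs in practice: Fernet's design is fully public and widely
# audited; the ONLY secret in this whole exchange is the key below.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"the algorithm is public; only the key is secret")
print(f.decrypt(token))  # recovers the original plaintext bytes

# If the key is ever exposed, rotate it -- a cheap, replaceable secret,
# unlike a proprietary algorithm, which cannot be swapped once leaked.
key = Fernet.generate_key()
```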
-
I think open-source protocols would be ideal in some situations, as they would allow companies, governments, etc. to make modifications as needed to meet potential security needs as they see fit. If protocols are not open, then users are basically left with whatever the standard is and stuck with any risks that may impact anyone else using those non-open-source protocols.
-
Certainly, as the text states, using proprietary algorithms presents security concerns. If a company chooses to use its own proprietary algorithm, that piece of technology has not been tested in the “wild”; open-source crypto-algorithms have. For a piece of security technology to actually be considered secure, it needs to have held up against attacks of all different types. Even so, I think proprietary crypto-algorithms, and proprietary security technology in general, can be a good thing. The trade-off has to be a measure of time, however. What I mean is, if a new proprietary piece of technology comes out, it will (in theory) never have been seen before by anyone, including hackers. For a time, then, this technology will be secure; the difficult thing is to say for how long.
-
In my line of work in healthcare, it is too dangerous to go open source: there are HIPAA violations and patient care issues. I am at a big hospital and we have a mix of open source and proprietary. Our emergency medical record is proprietary and created by a large-scale vendor. It is rough, as we need the vendor for fixes, updates, and changes in code, and to secure it in our environment. Yet we have open-source applications that hold less patient information and relate more to billing, demographics, and data analysis. This is good, as we can secure them ourselves and change code as needed.
-
-
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 7 years ago
One of the techniques for mitigating risk of application vulnerabilities is restricting what types of applications can be executed on your network. Windows Active Directory includes tools in group policy that […]
-
I think the right answer will change from organization to organization. For me, I might think about utilizing both policies, but instead of whitelisting the app, I would whitelist the type of traffic/connections the app wants to use. I would do this after verifying its signature, of course. I might say that any app can run on a system once its signature has been verified, but use my whitelist/blacklist to say it can only communicate via HTTPS ports. I might go further to say that, based on the subnet it's running on, it can only communicate with very specific IPs/subnets via firewall rules. This way, I believe, it is easier to let the app run and then open ports/firewall rules as needed. If I restrict which apps can run with application white/blacklisting, I feel this will restrict the availability of the data even more.
-
In a business, applications should be whitelisted only if absolutely required for the organization and, where possible, restricted so that only a few employees can use them based on role or departmental needs. There are also ways to restrict applications without whitelisting or blacklisting them: embed a credential manager. That way, if an organization has a tier 1-2-3 type setup, employees are only allowed access to applications based on what they do and when they need it. For example, new employees would need to request access even for the simple applications that all of tier 1 requires.
-
Good point Fraser. Blacklisting vs whitelisting depends on the type of department and the applications it uses. I think the departments using the core business applications can get away with whitelisting: generally, the core applications would be only a few, so they would be easier to whitelist.
Now, as you mentioned, the IT department might need access to multiple types of applications, both open-source and non-open-source. That type of department would most likely benefit from blacklisting applications; otherwise, it would be a hassle to keep managing the list of allowed applications if using whitelists.
-
I believe it depends on the organization. Whitelisting every website opens a lot of risk to the organization, while blacklisting may affect availability in the C-I-A triad. From a security perspective, it is much safer to blacklist everything and only whitelist specific websites. That still limits availability; however, it mitigates potential risks.
From my experience working in a visual effects studio, most sites were whitelisted in order to gather references or resources. The only blacklisted sites were the usual blocked websites in every office setting.
-
Blacklisting applications, websites, or other elements is very easy to do. However, this is not the most secure method. Blacklisting requires constant updating and vigilance to ensure new risk areas don't pop up. This impacts confidentiality and integrity most, as an attack vector that is not blacklisted may sneak in unnoticed.
Whitelisting is very effective at keeping unfriendly actors out of our network or system. By only allowing things we know and trust, new attacks can't rise up to circumvent our block. However, unless the organization is extremely mature and knows everything that is needed, it will inevitably run into issues with availability, as programs or applications may be unknowingly blocked and rendered useless. -
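The trade-off can be sketched in a few lines (the digest and file path here are hypothetical placeholders). A whitelist fails closed, which is the availability risk described above; a blacklist would invert the final test and fail open instead.

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of executables IT has approved.
ALLOWLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def may_execute(path: str) -> bool:
    """Whitelist model: deny by default, allow only known-good binaries."""
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    # Unknown program == blocked (fails closed). A blacklist model would
    # instead be `digest not in BLOCKLIST`, which fails open.
    return digest in ALLOWLIST
```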
Kevin,
I do agree with most parts of your post. However, blacklisting and whitelisting can be useful to block, or give permission to, certain applications as part of the IT structure, if the IT department keeps watching the traffic to determine which applications should be allowed or restricted.
-
-
In my opinion, blacklisting all applications can cause an availability problem, which makes it an issue to add an important application that would benefit the company. Instead, the company should blacklist only non-trusted applications and audit the communication traffic. It is true that blacklisting all applications is the easy way to secure a company's IT resources, but it's hard to do.
If the company uses blacklisting to block only the risky applications, the impact on availability will be smaller and the company will face less risk.
In conclusion, using blacklisting and whitelisting can be useful to block, or give permission to, certain applications as part of the IT structure, if the IT department keeps watching the traffic to determine which applications should be allowed or restricted.
-
In my experience, the option of classifying applications on a whitelist or blacklist will depend entirely on the IT environment. An organization that operates in a high-risk environment may opt to whitelist its applications, which means only trusted applications are allowed. On the other hand, an organization that has a low risk profile may prefer to adopt blacklisting. The blacklisting option allows the organization to be more flexible about which applications can be allowed or disallowed in its environment.
-
In my opinion, both whitelisting and blacklisting applications can help companies improve their security and secure their data. From my experience, I agree with classmates that it depends on the business environment which solution is best. For example, I work for the Temple University Admissions Office, and most sites are whitelisted in order to gather information and resources. Blacklisted sites are mostly the usual blocked websites, and the computer systems will warn you about them.
-
In terms of what should and should not be approved on a corporate network, I think the whitelist route should be implemented. In this case the corporation has control over what gets installed on its PCs, which will help prevent unwanted tools that could potentially have some kind of malware tied to them. If a company goes the blacklist route, then there is no control over everything that gets installed, which opens a Pandora's box of potential issues, as the company would have no idea what could be on a user's system.
-
It really depends on the whole enterprise's mission. If it's a safety-critical business, safety is the priority, bigger than anything else. For the sake of discussion, though, a combination approach is preferred. Each option has weaknesses and strengths; depending on the specific case, take whichever is appropriate.
-
Neil,
Thank you for the answer, which explains most of it: applications should be whitelisted if they are absolutely required for the organization and if that can be set. The IT department should decide what types of applications should be whitelisted or blacklisted. These decisions should depend on how important those applications are to the organization, or how much they could harm the organization's IT resources. -
Good example here. I feel that when companies that truly lack IT governance make policies like this, it in turn backfires and causes circumstances like this, where some groups try to act like they are “gods” with all control over everything, whereas other groups have no access at all. If the company had IT governance in place, it would prevent situations like this from happening.
-
I think it should be a blacklist model, and it's up to the IT staff to implement it. At my work there is a combination of whitelisting and blacklisting based on user credentials. For instance, non-IT staff usually cannot install anything outside the scope of their job. We also lock down computers from installing applications unless the user is an administrator on the PC. If a user needs an application downloaded, they request approval through their manager. I think for IT staff there needs to be control too, so staff do not install things for music, streaming, or personal use, etc. It takes time to hunt down these applications to blacklist, but that is part of the IT job.
-
-
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 7 years ago
Linux or Windows? Seems like nothing starts a war in the IT department like this topic… but which is better? Which is more secure? These two operating systems are very different, and regardless of your pref […]
-
I personally prefer Linux. I have used Ubuntu 14 the most, and I use Ubuntu just because I have a feeling for it. When I use a Windows system, I get no feeling; I feel Linux chose me. I use Windows only for entertainment purposes.
-
I am a big user of Windows because I've used it at all my jobs, at school, and at home; I basically grew up using Windows. I like it because each version has its share of ups and downs, but it's also something to learn: even though Windows is a target for hackers and other malicious activity, there are steps to prevent that. Also, many applications are compatible with Windows, and there are things you can do in Windows that sometimes you can't do in Linux. Linux seems more complex, but once you understand how it works and get more comfortable, it is a useful OS. I don't know Linux much, but I do want to delve into the possibilities it holds and see what challenges it presents. Maybe it will get me to use Linux on a future machine; I know Linux is a big player in the cyber security world. Right now, I prefer Windows because I understand how it works, but I would like to learn more about using Linux and getting comfortable with a command-based system.
-
I am a big user of Windows because I basically grew up using it and have used it throughout my jobs and school. I like it because it is simple, but there is also so much that can be done in Windows that I haven't explored yet. Many applications are compatible with the OS, and even though people target it, there are also many ways to mitigate the risks. Active Directory in Windows is a big plus, as it is useful for managing users and making permission changes a bit more secure. I also enjoy a GUI interface. As for Linux, I haven't gotten skilled or comfortable with it. I am still a rookie, but I do want to use it more and learn the security aspects of Linux over Windows. I like a challenge, so that's why I am excited about using Linux and want to get more comfortable and skilled in it.
-
Sorry, I posted more than once – the website was giving me fits.
-
I prefer Windows for now, as I don't know much about Linux or have much experience with it. To me, it seems like Linux is the preferred OS for security and hacker types. Linux, as an open-source OS, seems much more customizable and has more configuration options, whereas Windows will throw up some dialogs and do more hand-holding. I think of it like old car mechanics talk: older cars (Linux) can be worked on with a basic set of tools, everything is pretty much replaceable, and anyone can get started, whereas new cars (Windows) are more modular and require custom tools and diagnostics. I'm not sure that analogy is all that strong. I think both have a role to play in IT, with Linux being more centered on power users and Windows more of a GUI-based experience. That being said, I look forward to learning more about Linux.
-
My choice is Linux, purely on preference, as I use and enjoy Apple computers. I might be biased in that respect, but I also agree with open source code. Regardless of service, my choice is based upon the availability of resources and the community support that comes with Linux.
-
Personally, for my day-to-day use, I prefer Windows due to the types of applications I use (the Microsoft suite, games, etc.) and the GUI. One of the great benefits of Linux is the powerful terminal, something I do not need on a daily basis. Linux is more powerful than Windows from a penetration-testing standpoint: in my pentesting classes, using Kali Linux and pentest-related apps (Nmap, Metasploit, Nessus) made ethical hacking effortless, and almost everything was done within the terminal. It would be a lot more difficult to do all this on Windows. In the end, it comes down to what I am using the OS for. As previously stated, for my day-to-day activities I prefer Windows; to work on something technical like pentesting or running a server, I prefer Linux for its flexibility, powerful terminal, and open-source nature.
-
I have used windows pretty much my whole life. It wasn’t a conscious decision, but it’s just what’s always been in my house. In fact, my first real experience with Linux has been in this class. Apart from the OS itself and business/office applications, Microsoft apps (such as Solitaire Collection and other casual gaming ones) aren’t of very high quality. In my experience with Linux thus far, it seems like there is much more potential for customization, flexibility, and improvement, due to the nature of open-source. Windows, on the other hand, is much more rigid and cookie-cutter. I could see myself switching to Linux sometime in the future, once I gain more experience with it and more understanding of what I can do with it.
-
I personally prefer the Windows operating system, but that opinion is skewed by the fact that it has been my primary OS for as long as I can remember. I know that Linux is much more secure than Windows and offers a lot more freedom in the applications and platforms available. However, if I were choosing an OS for my IT infrastructure, I would choose Windows, as it is more widely used and will be easy for employees to use since they are already familiar with it. Support for Windows is also easily attainable and more available than that of Linux. And as systems and servers running Windows age when new versions come out, they can still be supported, although the costs will be higher for these legacy systems. That said, I feel that Linux could be a cheaper alternative to a Windows infrastructure, as a lot of Linux applications are free and open source, which can give you a lot of flexibility when setting up your infrastructure.
-
Since I have been using Windows for a long time, I would prefer to use Windows. However, when it comes to the question of which one is better, I would say people like Linux because Linux has a better security model and there are multiple different versions of Linux systems. Besides that, Linux is open source, and developers continue to build more on it. On the other hand, the Windows operating systems designed by Microsoft are not open source. Furthermore, when comparing applications, Linux has many more free applications, whereas Windows might require a purchase fee for applications, since it is not a free, open-source operating system.
-
Personally, I think Windows is a more secure operating system, as its main security service, Active Directory, is a thorough means to be as granular as possible in protecting the overall infrastructure. The way AD is laid out makes it really hard to break a system that uses it for its security. I understand Linux utilizes LDAP, but I feel that, done right, AD can be a very secure system.
-
I am an ardent Windows platform user and have had very little exposure to Linux. My experience so far with this course has shown me that Linux offers tremendous features over Windows. I choose Linux because it's clearly more powerful, and it certainly requires one to have adequate knowledge to work with Linux, unlike Windows. This is not to discredit Windows in any way, but I'm convinced that Linux would provide a more secure environment for most users.
-
I couldn't agree with you more. I have been having such a hard time trying to get by working with Linux, but I'm glad I took this course, because I have shied away from Linux in all my IT working experience, and for someone like me who wants to breathe and live IT security, the time for learning is now and I'm getting experience.
-
Richard,
Thank you for the interesting post. I believe the same thing: Linux is the operating system that will have a big future for people in the security field. It's an open-source OS, which gives these professionals the chance to be productive and to develop techniques to secure company architectures. -
Neil,
I do agree with you. So many people prefer using Windows simply because they grew up using it and it doesn't require any coding skills at all. This is the reason Microsoft has been doing well and will keep doing well. However, I believe Linux has people who prefer it; security is a big reason for that, as is the opportunity to modify the source, since it's open source. -
I am like Neil in that I grew up using Windows, but after learning Linux I understand its traits: Linux is more secure, its performance is better, and for large-scale systems it is the better operating system. I also feel lots of IT employees like myself should be better trained in this system; it is also better for people like us getting into security. From a user standpoint, everyone has grown up in a Windows or Apple landscape. Users such as business, finance, clinicians, and engineers do not have the time or patience to learn a command-based operating system; it would take time away from doing their actual jobs.
-
-
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 7 years, 1 month ago
This week we looked at Single Sign-On, and standards that can allow authentication even outside the organizational boundaries. We also familiarized ourselves with these technologies in our case study review. I […]
-
There are indeed security concerns with having authentication done outside of company boundaries. First of all, when doing authentication outside a company's boundary, there is reliance on a third party. If that organization's service is down for any reason, users will not be able to access their accounts when they try logging in. This can be mitigated if users are able to create user accounts and log in without using SSO. Another way to mitigate this is by allowing different SSO providers: for example, instead of relying primarily on Facebook login, also allow users to log in using Google or Yahoo. There also needs to be a resiliency plan in place that describes the options and the business-critical functions that can be restored in case the third party's authentication is down.
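A minimal sketch of that mitigation (the provider names and the `login_via` function are hypothetical stand-ins, not a real OAuth client): try each identity provider in turn, and fall back to local accounts if all of them are down.

```python
PROVIDERS = ["facebook", "google", "yahoo"]  # hypothetical SSO providers

def login_via(provider: str, token: str) -> bool:
    """Stand-in for a real OAuth/OpenID Connect round trip."""
    raise ConnectionError(f"{provider} is unreachable")  # simulate an outage

def authenticate(token: str) -> str:
    for provider in PROVIDERS:
        try:
            if login_via(provider, token):
                return f"authenticated via {provider}"
        except ConnectionError:
            continue  # this provider is down: try the next one
    # Last resort: the site's own username/password store still works.
    return "falling back to local account login"

print(authenticate("dummy-token"))  # -> falling back to local account login
```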
-
According to the Auth0 website, single sign-on is defined as a session and user authentication service that permits a user to use one set of login credentials to access multiple applications. For an organization, there are many advantages to using single sign-on. One significant advantage is that users can easily access multiple systems without being asked for a password each time, which can save tremendous time for both the organization and its users at work. However, there are also disadvantages to single sign-on, and one of the biggest problems is security: since users only need one password when they log in, a compromise of that password endangers the organization's systems and users' information.
-
While single sign-on seems like a great idea for saving money on hardware and software, I personally feel that anything of this kind should be managed in house. It could become a very risky ordeal for a major company to rely on another corporation to manage its security infrastructure: if one thing went wrong, it could bring the corporation using the single sign-on service to a screeching halt. Not something I would want to be a part of. I would prefer that all security-related tools remain in house.
-
Neil,
Single Sign On (SSO) is a good way for many companies to let an employee or an authorized third-party user access one or more of their applications. It's a very convenient way to access multiple applications. However, it is also risky, since it can put the organization's resources (data and hardware) in jeopardy. -
Donald,
I do agree that Single Sign On provides a convenient solution for many companies to help employees and third parties access resources remotely, but this technology raises serious security concerns. Third parties should only be given limited access to resources, and SSO on its own does not provide the access control management needed to enforce that. -
The major difficulties with single sign-on service should be apparent.
Sites will be giving away their user data to a third-party provider. For some sites that will not be an important consideration, but some may have a problem with handing over their user data to another company.
By choosing the right identity provider, a company can ensure that it covers a significant subset of its potential users, but that will by no means cover everyone. This leaves the option of implementing an additional authentication system, which is what the company was trying to avoid, or implementing as many SSO services as it feels necessary, which largely negates the simplicity benefits for users.
There is a single point of failure. If the SSO provider goes down, a site's users will be unable to authenticate. If the SSO provider is hacked or breached, data loss may occur. SSO providers are a very juicy target for hackers, although they are also likely to have much better security than the average site.
-
One major security concern with single sign-on is the single point of failure. The opportunity gain of reducing duplicated effort may turn into a risk, because if a user's password is exposed or obtained by an unauthorized party, a lot of damage can be done. Undoubtedly, single sign-on eases user access to multiple systems and applications in an organization, but a lot of thought should be applied to its implementation to avoid or reduce the vulnerabilities it may introduce. The risks can be mitigated if the implementation is deliberately thought out so that the appropriate single sign-on is applied wherever it is deployed.
-
The biggest benefits of Single Sign On (SSO) are: 1) decreased "friction" in the user experience when authenticating (one credential versus many); 2) decreased total cost of ownership (AWS just rolled out its own SSO offering, useful for smaller companies); and 3) potentially a single point of management for systems administrators.
The biggest risks and threats of SSO include the single point of failure: one credential can be used to access many systems, so an attacker only needs to compromise one credential. There is also potential loss of data if SSO is outsourced; you don't know what your vendor's security looks like (regardless of what the contract says). As my parents say, "You can delegate responsibility; you can't delegate accountability."
I think outsourced SSO makes sense in some deployments, but you have to be very careful that expectations are written into the contract (SLAs), as are security practices. I would definitely want my vendor to be ISO certified!
-
I work in a hospital, and single sign-on was requested by clinicians as a way to relieve users from remembering usernames/passwords for all systems plus an AD account. It is costly to implement. I think it is easier to start single sign-on from the ground up, or to have single sign-on for applications and a separate account for AD/VPN, etc. The other drawback is the single point of failure, which could impact users and several systems. The cautionary tale is to do research on which vendor to use, but it also has to be supportable in-house.
-
-
Heather D Makwinski wrote a new post on the site ITACS 5209 F17 7 years, 1 month ago
This week, let’s keep the discussion informal; we can get to know one another, and get acclimated to using the discussion forum for this course. Post a short bio about yourself, and your experience as it re […]
-
Hello Everyone!
This is Brian… I “teach” this class!
Welcome to class!
-
test
-
Does this work?
-
Hello class. My name is Sachin Shah, but everyone calls me Sach. I am hoping to graduate either this Summer or Fall 2018 with an MS in Cyber Security. I have been working at Penn Medicine for over 12 years in a variety of roles in the IS department. I have been a systems analyst, a project lead, and currently an interface developer. I do coding in what the industry refers to as the middle engine and in HL7 interfaces. I am looking to move into a security role, transitioning from software/application/programming roles into infrastructure and security. This program is great, as I now understand network components and strategy, and not only how that reflects on security but on the application structure as well.
-
Hi class,
I'm Jason Lindsley and I live in Voorhees, NJ with my wife and three kids (ages 1, 5, and 6). I work full time for a financial institution, and my expertise is primarily in softer skills (i.e. Technology Risk Management and Governance); however, I really enjoy technical exercises and I have expanded those skills while working through the ITACS program.
I’m primarily available evenings after 8 PM or Saturday mornings and look forward to working with you all this semester.
You can connect with me further on LinkedIn:
-
My name is Matt Roberts and I just graduated with a Bachelor's in MIS from IUP. I have not had previous employment in the field, so my knowledge in this area is mostly theoretical. My bachelor's program focused heavily on soft skills such as project management. I decided to get my Master's to gain more hard skills.
-
My name is Ryan Boyce. I am a Linux systems administrator, working predominantly with Red Hat Enterprise Linux 5/6/7. I am also a VMware administrator, working with ESX/vCenter 6. I have scripting/coding experience in PowerShell, Bash, and Java. I definitely fall on the technical side of the fence.
-
Hello everyone! My name is Donald. I graduated from Temple University in 2014 with a BA degree, majoring in Accounting & MIS. I currently work in the Tax department, where we focus on improving clients' processes and implementing TOM (Tax Operation Manager). TOM is a tool developed internally which allows clients to easily manage and collaborate within the Tax department.
-
Hi, my name is Neil Rushi, pursuing an MS degree in Cyber Security, hopefully graduating by Summer 2018. I currently work at Verizon as a Fiber Solutions Analyst, meaning I help customers troubleshoot their FiOS services; I have been working there for 5 months now. It's fun, as I talk to the seasoned agents and learn what they know. I did something similar with Comcast Business 5 years ago; business and residential support are the same in some ways and different in others, which is good. I have my bachelor's from Temple in Management Information Systems, graduating in September 2010. I hope to find a role within the security field as a cyber security analyst, in forensics, in hacking, or some combination of these. I like being hands-on and facing a challenge.
-
My name is Brent Hladik. Sorry I am a little late to this; it was a hectic week. I currently live in Kansas City, MO. I graduated from Wichita State University with a Bachelor's in Business with a focus in International Business and MIS. I am a Coast Guard veteran and have about 20 years of overall IT experience in various roles, from supporting government contracts to civilian positions. Currently I am a Sr. DBA and have been doing this for the last 7 years or so. I recently earned a Master's from DePaul in business web analysis and development, and I am looking to expand my knowledge in the cybersecurity realm, so I am hoping to complete a Master's in that area here at Temple as well.
-
Hello Everyone,
My name is Mohammed Syed. I graduated from Kakatiya University in India in 2007. I live in Philadelphia, PA with my family. At present, I am working as an independent network administrator in North Carolina. I completed my Cisco certification (CCNP) in 2017, and I have six years' experience as a technical engineer. I have worked on Cisco ASA firewalls, Fortinet, Nexus switches, and in data centers. -
Hello Mohammed:
You have quite an interesting set of skills that are admirable. I hope to learn some tips with regard to firewall rules and configuration from you.
-
Hello Everyone,
My name is Younes Khantouri. My undergraduate studies were in Mechanical Engineering, which I didn't like very much. Last Spring I decided to start the MIS program with the Fox School of Business, in the Cyber Security track.
I like the IT field a lot, and my goal is to use my technical background to work on building secure IT architectures. Currently, I am looking for an entry-level position in the IT security field. I should graduate in the summer of 2018. Younes Khantouri.
-
-
Heather D Makwinski wrote a new post on the site MIS5214 Security Architecture 7 years, 8 months ago
Compare and contrast (i.e. identify commonalities and differences) between what Sherwood et al. (in our textbook Enterprise Security Architecture) means by the term “information system architecture” and wha […]
-
Heather D Makwinski wrote a new post on the site MIS 5214-Security Architecture-001 7 years, 8 months ago
Compare and contrast (i.e. identify commonalities and differences) between what Sherwood et al. (in our textbook Enterprise Security Architecture) means by the term “information system architecture” and wha […]
-
Heather D Makwinski wrote a new post on the site MIS 5214-Security Architecture-001 7 years, 8 months ago
What does Swanson et al. (NIST Special Publication 800-18R1 “Guide for Developing Security Plans for Federal Information Systems”) mean by the term “information system boundary”? How does this concept help pro […]
-
Heather D Makwinski wrote a new post on the site MIS5214 Security Architecture 7 years, 8 months ago
What does Swanson et al. (NIST Special Publication 800-18R1 “Guide for Developing Security Plans for Federal Information Systems”) mean by the term “information system boundary”? How does this concept help pro […]
-
Heather D Makwinski wrote a new post on the site MIS5214 Security Architecture 7 years, 8 months ago
What is a database conceptual schema and what are the relationships (i.e. commonalities and differences) between the user view of data and the enterprise view of data in the database conceptual schema?
-
Heather D Makwinski wrote a new post on the site MIS 5214-Security Architecture-001 7 years, 8 months ago
What is a database conceptual schema and what are the relationships (i.e. commonalities and differences) between the user view of data and the enterprise view of data in the database conceptual schema?
-
Topic 3:
Assuming the investigator's background here: middle class, earning $100K/year, with two kids and a full-time housewife at home.
Assume he doesn't know whether other personnel in the company may be working with the VP. Assume also that the VP has enough resources to deal with the investigator; he has some powerful friends, and the investigator knows it. If he reports, the VP could use those resources for revenge. He could not only lose his job; his family could also be in danger. That's the toughest dilemma of his life.
If I were his trusted friend, I would recommend the following:
First, talk to his family at once about what he faces, making sure his wife understands every consequence, and move his family somewhere safe.
Second, find a trusted lawyer and plan everything ahead, including preparing an audio record, collecting every possible piece of evidence, and making sure his statement is supported by that evidence.
Third, comply with the law and ethics, doing what he's supposed to do.
That was the worst case. In a less severe case, steps two and three will be enough.
Discussion Topic 14.3 – I would report my findings as required, regardless of whether the new programmer is a relative of the VP. The VP can offer whatever he wants, but in the end my integrity is more important than more money. In the world of cyber security, we don't want to encourage people who break the law and fail to comply with IT regulations. Forensics requires careful analysis of the crime scene, just as in police work: a detective doing his job will not sacrifice his honesty for the sake of saving someone who broke the rules. This is an example of an employee intentionally performing malicious acts. I would report my findings as I see fit and keep a backup on another device just in case someone decides to modify my report.
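On keeping that backup tamper-evident: one simple way, sketched below in Python, is to record a cryptographic hash of the final report and store it (and the copy) on a separate device; the file name here is just an example.

    # Compute a SHA-256 digest of the forensic report. Store the digest
    # separately from the report; if anyone later modifies the file,
    # recomputing the digest will reveal the change.
    import hashlib

    def file_digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    print(file_digest("forensic_report.pdf"))  # example file name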
Exceptions happen a lot.
D14.1: Discussion Topic 1:
I totally agree with Jason's viewpoint. The need for security awareness training remains inherently important if healthcare providers are to handle patients' sensitive data as required by HIPAA regulations. Non-compliance is a big security issue, and it should never be justified, because security and privacy risk should be a shared responsibility of all healthcare providers.
Complying with the HIPAA requirement of protecting patients' sensitive data, in an environment where healthcare providers find themselves challenged by increasingly multifaceted requirements for effective management and processing of sensitive health data, can be supported by deploying a proven encryption technique to safeguard the data at rest and in transit. Prohibiting the use of personal email accounts on computers that would be used in emergency situations will also go a long way toward keeping sensitive data from being exposed.
D14.2: Discussion Topic 2:
I would consider RFC 1087, Ethics and the Internet, still relevant. In my opinion, it forms the building block of all things internet today. The five basic ethical principles form the pillar of most policies and rules governing the use of, and access to, data that traverses the internet. Most internet users will find the document easy to understand, but given that the internet and its use have grown well beyond what was originally intended, there is a need to expand RFC 1087 to cover the new areas that leverage the internet, e.g. IoT, artificial intelligence, use of data across international borders, blockchain, and a host of other technologies that carry inherent risks and problems which, if not addressed, expose users and adopters to a new set of security risks.
I believe the sharing of private information should be governed by an emergency policy. The emergency policy should outline why information needs to be shared, how the information should be shared, and who can request such information. HIPAA has an exception to the privacy rule: a health entity can exchange patient information if the situation is life-threatening. This exception is important because protecting the patient's life is more important than protecting the patient's PII. As for the encryption process, there are services available that provide email encryption to and from outside entities, and the hospital should use such a service for these high-level emergencies. As a lower-level option, one could use 7-Zip to compress, encrypt, and attach the files to an email. You can password-protect the archive as well, then use another means of communication, like the phone, to exchange the encryption password.
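A rough sketch of that lower-level 7-Zip option, assuming the 7z command-line tool is installed; the file names below are illustrative:

    # Create a password-protected 7z archive of the patient documents
    # before attaching it to an email. For the .7z format, 7-Zip uses
    # AES-256 encryption; -mhe=on also encrypts the file names.
    # Share the password over a separate channel (e.g., by phone).
    import getpass
    import subprocess

    password = getpass.getpass("Archive password: ")  # never hard-code it
    subprocess.run(
        ["7z", "a", "-mhe=on", f"-p{password}",
         "records.7z", "patient_records.pdf"],
        check=True,
    )

Note that passing the password on the command line makes it briefly visible in the local process list, which is a trade-off to weigh for anything beyond a quick manual run.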
Topic 1:
How to interpret law and regulation is subjective. The law offers definitions telling us what is right and wrong, but in an emergency, time is life. What are we really worried about? Is privacy the issue? It depends.
Topic 2:
Those rules still apply today, but they only offer a general framework. Today's internet is a lot more complicated, so a more comprehensive set of principles is needed, with the details spelled out on top of that foundation.
D14.3: Discussion Topic 3:
Every profession maintains a code of conduct for its members, and security consulting is no exception. Based on the scenario, any action short of presenting my findings as obtained would reek of impropriety. As a security consultant, I must strive to avoid any impropriety, or even the appearance of impropriety, nothing less. It is my responsibility to honor the code of conduct of security consultancy no matter who is involved. After reviewing the case, I must comply accordingly by reporting the perpetrator, as discovered, to the law firm.
D14.1: Discussion Topic 1:
It really depends on the circumstances that might result in non-compliance. If it is a life-or-death situation, I believe that non-compliance would be justified, similar to a Good Samaritan law. Possible ways to mitigate these types of issues include:
-Setting up rules and regulations on when this would be justified
-Requiring approvals before sending out documents
-Having a relative or the patient's guardian sign a waiver or provide consent before sending the documentation.
D14.3: Discussion Topic 3:
I would also report the findings, as everyone else has stated. It would be against the law for me to corrupt the findings. However, having said this, without perverting the facts, perhaps the wording can be modified to soften the blow. During audits, when we discover a finding, we discuss it with the clients and agree on the wording that will go into the final report. This way, not only are we appeasing the clients, we are getting them to agree to and acknowledge the issues so that they can work on remediating them.
What kind of wording is appropriate?
It can be wording that gets the message across but wouldn’t sound extremely harsh, maybe either a substitution of words or added context around the intended message.
For example, if during an audit it is found that users have access to an application they do not need, instead of just saying "users were found with unneeded access," more context can be added explaining that the users required access to the application for an earlier project but the access was not removed afterwards.
Good points on Topic 2, Jason. The spirit of this document is still relevant. However, since the document is from 1989, so much has changed that extends beyond the scope of what was intended back then. It could be improved by adding the different aspects of today's internet and what the responsibilities of its users should be, as well as the different types of devices that now use the internet, to expand a scope that was defined for the internet of that era.
Discussion Topic 14.3
I would report my findings as required, regardless of whether the new software engineer is a relative of the VP. The VP can offer whatever he wants, but in the end my honesty is more valuable than money. In the field of digital security, we should not encourage people who break the law or fail to comply with IT regulations. Forensics requires thorough investigation of the crime scene, and just as in police work, an investigator doing his job will not sacrifice his integrity to spare somebody who broke the rules. This is a case of an employee deliberately performing malicious acts. I would report my findings as I see fit and keep a backup on another device in case somebody tries to alter my report.
D14.1: Discussion Topic 1:
Jason,
I do agree with you: e-mailing a patient's personal information without proper security is unacceptable and reflects either a failure of business requirements or inadequate training. It can also make patients distrust the medical institution or hospital. I believe those doctors or medical employees not only need to justify why they have taken these types of actions, but also have to inform patients before and after exchanging their personal information over the internet. Asking for patients' permission would be more ethical.
D14.3: Discussion Topic 3:
Jason,
Great explanation. This is very much an ethics question, and I won't change my report. I will discuss what happened with my superiors, and if they decide to change the report, I will have to leave the company. In my opinion, a consultant who is not strong enough to do the right thing shouldn't be in that position.