Chapter 3.5 covers the traditional SDLC approach, and what I found most interesting is the figure on verification and validation: the layered matching between requirements and testing gave me a clear, structured view of how a system responds to internal and external requirements. The core concept of the figure is that the system is gradually refined by different requirements at different stages: the detailed design, the architecture design, the functional requirements and, ultimately, the user requirements. In response, the system undergoes unit testing, integration testing, system testing, and acceptance testing in order to improve itself and satisfy user feedback. This makes the whole process an active, two-way system rather than a passive one that simply waits for human modification. That is why it interested me so much: it reminds me that the SDLC ought to be a two-way process.
The thing I found most interesting in chapter 3.6 is the risk of virtualization and the introduction of cloud systems into management information systems. It reminds me that while the public enjoys the advantages of the virtualized internet, potential risks may be hiding inside. As mentioned, one of the most frequently seen risks of virtualization is the possibility that information such as files and documents in both the host OS and the guest OS can be attacked by hackers online. In addition, the fact that a snapshot is at higher risk than an image also interested me when ranking the risk level of items. The book says that a snapshot is more at risk because it captures the contents of random access memory (RAM), which was a new concept for me; I learned that data held in RAM can be more sensitive than conventionally stored data. This concept can also be applied in real life: the snapshots stored on a phone should be protected with a passcode in order to ensure their confidentiality.
In chapter 3.9, the list of risks of the current system is the most interesting thing as far as I am concerned. I find it interesting and useful because it reminds me that the analysis of eventual risk ought not to be limited to the internal flaws of the system but should also cover external influences. For example, the figure lists internal problems such as deficits in functionality and deficits in information supply, which are typical internal weaknesses of a system. Apart from these internal deficits, it also mentions external factors, such as insufficient project funding and business requirements that change in the future, which arise from the external environment. Thus, it reminds me that in order to ensure the functionality and success of a system, both the internal design and the external support and requirements ought to be taken into consideration. What's more, the developer needs a long-term perspective that anticipates eventual changes. It is in the interaction of these potential influencing factors that a sound system can be developed.
Hello,
This is by far one of the longest responses I've had to read in analyzing the CISA book sections, and I could sense that you found interesting nuggets of knowledge within every section that made it difficult to write about just one thing. Within chapter 3.5 alone, you talked about the intricacies of the various types of requirements and how each of them needs a testing method to verify that it has been met and that new changes are compliant. For chapter 3.6, you took your time analyzing some of the more common risks associated with virtualization and even gave an example of how RAM within servers or other devices can be vulnerable. And for chapter 3.9, which focuses on the risks of implemented systems and long-term planning, you really homed in on how important design and support are for the general success and long-term sustainability of information system projects.
The SDLC represents the process of developing software.
During the testing phase, testing starts once coding is complete and the modules are released for testing. In this phase, the developed software is tested thoroughly, and any defects found are assigned to developers to be fixed.
Retesting and regression testing continue until the software meets the customer's expectations. Testers refer to the SRS document to make sure that the software meets the customer's standards.
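The retesting loop described above is what a regression suite automates: every previously verified input/output pair is re-run after each fix. The function and the expected values below are hypothetical stand-ins for a real module and its SRS-agreed results.

```python
def discount(price, rate):
    """Apply a percentage discount; stands in for any module under test."""
    return round(price * (1 - rate), 2)

# Expected results agreed with the customer (e.g. taken from the SRS).
REGRESSION_CASES = [
    ((100.0, 0.10), 90.0),
    ((80.0, 0.25), 60.0),
    ((19.99, 0.0), 19.99),
]

def run_regression():
    """Re-run every known case; return the failing ones (empty = pass)."""
    failures = []
    for args, expected in REGRESSION_CASES:
        actual = discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures
```

A defect fix that changed older behavior would show up here as a non-empty failure list rather than slipping through to the customer.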
The goal of a training plan is to ensure that the end user becomes self-sufficient in the operation of the system. What I found interesting is the end-user training section. It tells us that one of the most important keys in end-user training is to ensure that training is considered, and a training project plan created, early in the development process. To develop a training strategy, the organization must name a training administrator who will identify the users who need to be trained with respect to their specific job functions. Consideration is given to the following formats and delivery mechanisms:
- Case studies
- Role-based training
- Lecture and breakout sessions
- Modules at different experience levels
- Practical sessions on how to use the system
- Remedial computer training (if needed)
- Online sessions on the web or on a CD-ROM
The training administrator should record student information and feedback in a database so the training course can be improved further.
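The administrator's record-keeping could be sketched with an in-memory SQLite table; the table and column names here are invented for illustration, not taken from the book.

```python
import sqlite3

def create_training_db():
    """Create a minimal in-memory database for training records."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        """CREATE TABLE training_records (
               student TEXT,
               job_function TEXT,
               course_format TEXT,
               feedback TEXT
           )"""
    )
    return conn

def record_training(conn, student, job_function, course_format, feedback):
    """Store one student's training session and feedback."""
    conn.execute(
        "INSERT INTO training_records VALUES (?, ?, ?, ?)",
        (student, job_function, course_format, feedback),
    )
    conn.commit()
```

Feedback rows collected this way can later be queried per course format to decide which delivery mechanisms to keep or rework.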
Before creating your training program, it is important for you as the trainer to do your homework and research your company's situation thoroughly. By gathering information in several key areas, you can better prepare yourself to create a relevant, customized training plan for your company.
Objective 1: Determine what training is needed.
Objective 2: Determine who needs to be trained.
Objective 3: Know how best to train adult learners.
Objective 4: Know who your audience is.
Objective 5: Draw up a detailed blueprint.
The software development life cycle (SDLC) is a framework that defines the steps involved in the development of software. It covers the detailed plan for building, deploying, and maintaining the software, defining the complete cycle of development, i.e., all the tasks involved from gathering requirements through maintaining the product.
SDLC is a process that defines the various stages involved in the development of software in order to deliver a high-quality product. Its stages cover the complete life cycle of software, from inception to retirement of the product. Adhering to the SDLC process leads to development in a systematic and disciplined manner.
The purpose of the SDLC is to deliver a high-quality product that meets the customer's requirements. The SDLC defines its phases as requirement gathering, designing, coding, testing, and maintenance, and it is important to adhere to these phases to deliver the product in a systematic manner. For example, suppose software has to be developed, a team is divided to work on a feature of the product, and everyone is allowed to work however they want: one developer decides to design first, another decides to code first, and a third starts on the documentation. Without an agreed sequence of phases, the pieces will not fit together, which is exactly the chaos the SDLC is meant to prevent.
A software life cycle model is a descriptive representation of the software development cycle. SDLC models may take different approaches, but the basic phases and activities remain the same across all the models.
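The point about adhering to the phases in order can be sketched as a simple check; the phase names follow the list above, and the check itself is only illustrative.

```python
# The SDLC phases named above, in the order they should occur.
PHASES = ["Requirement gathering", "Designing", "Coding", "Testing", "Maintenance"]

def in_sdlc_order(activity_log):
    """True if the logged activities never jump backwards through the phases."""
    indices = [PHASES.index(phase) for phase in activity_log]
    return all(a <= b for a, b in zip(indices, indices[1:]))
```

A team that codes before designing, as in the example, would fail this check, while a team that follows the phases in sequence would pass it.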
In CISA Chapter 3.5, "Business Application Development," the author mentions two major categories into which all developed business application systems fall: organization-centric and end-user-centric computing. The purpose of an organization-centric application is to collect, collate, store, archive, and share information with business users and the applicable support functions on a need-to-know basis. The objective of an end-user-centric application is to provide users with different views of the data to optimize their performance.
Organization-centric application projects usually use the SDLC or other more detailed software engineering approaches for development. End-user-centric applications are developed using alternative development approaches.
Hi, I really like your recognition of why certain organizations gravitate toward the traditional SDLC and why end-user computing is more geared toward alternative approaches.
I chose this section because it ties in the business applications of system implementation. When running a website or an application, e-commerce is often the primary profit driver; without a secure and efficient e-commerce system, a business cannot sell its products or services. It is crucial that these systems are set up securely so that none of the data is breached. If a customer's information is leaked, it could permanently damage the business's reputation both online and offline. Furthermore, a company with more employees might consider having its own email domain. Once again, this system has to be very secure, because any hacker can send a phishing email and gut an organization's systems from the inside.
I like your point. E-commerce has become an integral part of business in the modern world. With e-commerce web design, you have the opportunity to make your products and services available to customers 24 hours a day. An online store is open all day, every day, meaning your customers can visit your store at any time, no matter what their schedule might be. The best thing about it is that the buying options are quick, convenient, and user-friendly, with the ability to transfer funds online.
The interesting thing I found is the post-implementation review, which assesses the adequacy of the system, evaluates the project cost and ROI, develops recommendations that address the system's inadequacies and deficiencies, and assesses the development project process. More importantly, the information needed for the post-implementation review should be identified during the feasibility and design phases and collected during each stage of the project in order to make the review effective. I found this interesting because the planning stage is even more critical than I thought it would be: the people planning the project need to run the project through in their minds before actually running it, which is difficult and requires a lot of experience.
Agreed. The planning stage is very difficult because you have to envision and plan the entire project. Depending on the precedent, this can be extremely difficult.
Good explanations, Shuyue and Panayiotis. Adding to what you guys wrote, a post-implementation review is a process to evaluate whether the objectives of the project were met. Post-implementation review helps to identify project successes, deliverables and ways to improve the areas that did not meet expectations. The review can even be used as a blueprint for the next project.
Hi Shuyue, good points, but I think you may have mixed up the post-implementation review with the pre-implementation review. The points you mentioned about assessing system adequacy, project cost, and ROI would be part of a pre-implementation review.
As Raisa mentioned above, a post-implementation review is a process to evaluate whether the objectives of the project were met. It is conducted after completing a project to determine how effectively the project was run, to learn lessons for the future, and to ensure that the organization gets the greatest possible benefit from the project.
Through reading chapter 3.5 of the CISA book on "Business Application Development," I felt that I had reviewed, and enlightened myself on, some of the more intricate details of the software development life cycle. It was refreshing to look through the CISA book's descriptions of each of the basic phases of the software development life cycle: requirements gathering/definition, design, programming, testing, implementation, and post-implementation review. Of all the phases I revisited, the one I learned the most from had to be the post-implementation review, since I had not learned much about that part of the software development life cycle before reading this section. With the post-implementation review, organizations assess the success of a project's development and implementation from multiple perspectives. These include a financial perspective, through a measure of return on investment, and an operational perspective, by reviewing work productivity and employee feedback.
Hi, Jordan:
I also found the post-implementation review interesting, and it got me thinking that a knowledgeable and experienced project manager is a key factor in an IT project's success. Without someone like that, there would not be a good post-implementation review for the IT project, and the company would face issues like evaluating the project's ROI inaccurately.
The topic which I found very interesting is the “Data Migration” process under Chapter 3.5.
Data conversion is required if the source and target systems utilize different field formats, sizes, database structures, or coding techniques. For example, a drop-down menu might represent the colours Red, Blue, and Green as R, B, and G in the new system. The objective of data conversion is to convert existing data into the new required format, coding, and structure while preserving the meaning and integrity of the data.
The data conversion process must provide some means, such as audit trails and logs, of verifying the accuracy and completeness of the converted data. The following factors need to be considered during a data migration project: how long the migration will take, the amount of downtime required, and the risk to the business due to technical compatibility issues, data corruption, application performance issues, and missed or lost data. These factors should be evaluated before migration in order to avoid migration challenges.
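A minimal sketch of the drop-down colour conversion described above, with an audit log so the converted data can be verified for accuracy and completeness afterwards. The mapping and record layout are illustrative only.

```python
# Legacy colour names mapped to the new system's single-letter codes.
COLOUR_CODES = {"Red": "R", "Blue": "B", "Green": "G"}

def convert_records(records, audit_log):
    """Convert legacy colour names to new codes, logging every decision."""
    converted = []
    for rec in records:
        new_rec = dict(rec)
        if rec.get("colour") in COLOUR_CODES:
            new_rec["colour"] = COLOUR_CODES[rec["colour"]]
            audit_log.append(("converted", rec["colour"], new_rec["colour"]))
        else:
            # Unmapped values are flagged rather than silently dropped,
            # preserving completeness for later verification.
            audit_log.append(("unmapped", rec.get("colour"), None))
        converted.append(new_rec)
    return converted
```

Counting the "converted" and "unmapped" entries in the log against the source record count is one simple way to demonstrate completeness to an auditor.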
The one thing that I took away from the CISA reading is about planning the implementation of infrastructure. The chapter mentions that to ensure the quality of the results, it is necessary to use a phased approach to fit the entire puzzle together. There are four phases: the procurement phase, delivery time, the installation plan, and the installation test plan. In my opinion, the installation test plan is an especially important phase. A test plan is a document detailing the objectives, resources, and processes for a specific test of a software or hardware product. It is usually prepared by, or with significant input from, test engineers, and it typically reflects a detailed understanding of the eventual workflow. From this chapter, I learned how to plan the implementation of infrastructure.
"IT Audits of Cloud and SaaS" introduces the components of cloud computing. As the author mentions, some of the key factors for management when choosing an IaaS provider are flexible performance (including scalability) and availability while meeting physical and virtual security needs. Third-party service providers supply customers with hardware, operating systems and other software, servers, storage systems, and various other IT components in a highly automated delivery model. In some cases, they can also handle tasks such as ongoing system maintenance, data backup, and business continuity. This flexibility is why IaaS is so widely used among cloud computing services.
There are many considerations in choosing SaaS, too. Applications are installed by vendors or service providers and can be accessed over a network, usually the internet. This pattern is often referred to as on-demand software, and it is the most mature cloud computing model because of its high degree of flexibility, proven support services, and strong scalability, which can reduce customers' maintenance costs and investment.
CISA Chapter 3.5, "Business Application Development," explains the methods of changeover. Among the three methods, parallel changeover is the most widely used for switching users from the existing system to the replacement system. A parallel changeover provides an opportunity to compare the operating results of the new system with the old system, and it gives a fair evaluation of the time requirements, error frequency, and work efficiency of both systems. The risk is small, since the performance of the new and old systems can be compared side by side during the transition while the primary system operators and other concerned personnel are fully trained. Although this method is expensive, it is more secure and can preserve data that might otherwise be lost in the transition.
Any type of major change that impacts the productivity of a business, or the confidentiality, availability, or integrity of data, should be taken seriously, and it really shouldn't be an area where management looks to cut corners and save money. However, I think we can assume that it does happen from time to time due to budget constraints or other setbacks during a project that eat up resources. Change is never easy, and as we saw in the Mudra case study, a plan of action needs to be considered early on so as to avoid pitfalls or resistance during the changeover period.
I also think that parallel changeover is the most used approach; it saves time and is effective. It works much like parallel processing in computing: given a series of tasks to perform, divide those tasks into discrete elements, some or all of which can be processed at the same time on a set of computing resources.
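The side-by-side comparison that a parallel changeover enables can be sketched like this; the two functions stand in for the old and new systems and are purely illustrative.

```python
def parallel_run(legacy_fn, new_fn, inputs):
    """Run both systems over the same inputs and collect any discrepancies.

    During a parallel changeover, every input is processed by the legacy
    and the replacement system; mismatched results are flagged for review
    before the old system is retired.
    """
    discrepancies = []
    for item in inputs:
        old_result = legacy_fn(item)
        new_result = new_fn(item)
        if old_result != new_result:
            discrepancies.append((item, old_result, new_result))
    return discrepancies
```

An empty discrepancy list over a representative workload is the evidence that makes cutting over to the new system defensible.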
One interesting point is virtualization. The concept of virtualization is generally believed to have its origins in the late 1960s, when mainframes enabled robust time-sharing option (TSO) environments with several users accessing the OS concurrently. However, the high cost of resources at that time is the main reason these computing capabilities could not be deployed at the scale seen in today's virtualization. The same rationale behind the TSO model is being applied today via virtualization, because the majority of current server technologies are underutilized and capable of supporting far more than one server function.
Hi Haixin,
I also find it amazing how our databases went from physical to virtual in less than 50 years. Virtualization is now a very important part of our lives because it creates so much convenience, and it saves companies a lot of money, since a physical database is more expensive and harder to maintain. Virtualization does require great care, though, because hackers can always find a way in if you allow them to. Therefore, updating the system and understanding the system are key to keeping a virtual database safe.
Hi Haixin, thanks for pointing out the history of virtualization; I just have a few things to add. In the 1960s, mainframes were certainly not portable, so the quality and availability of dial-up and leased telephone lines improved rapidly, along with modem technology, which allowed a remote machine to act as a virtual terminal to the mainframe. In fact, thanks to advances in technology and the economics of microprocessors, this model of computing led directly to the creation of the personal computer in the 1980s. In addition to data transmitted over telephone lines, local networks eventually opened the possibility of continuous access to the internet.
A physical architecture analysis should focus on several project phases: review of the existing architecture, analysis and design, draft functional requirements, vendor and product selection, writing the functional requirements, and proof of concept. To start the process, the latest documents describing the existing architecture must be reviewed. After reviewing the existing architecture, the analysis and design of the actual physical architecture is undertaken, adhering to good practices and meeting business requirements. With the first physical architecture design in hand, the draft functional requirements are composed. While the draft functional requirements are being written, the vendor selection process proceeds in parallel. After the draft functional requirements are finished and feed the second part of the project, the functional requirements document is written, which is introduced at the second architecture workshop with staff from all affected parties.
Hi Yuan, I like your point about physical architecture analysis. Based on my reading, a physical architecture model is an arrangement of physical elements that provides the solution for a product, service, or enterprise. It is intended to satisfy the logical architecture elements and system requirements. Your point is clear and easy to understand.
CISA Chapter 3.5, "Business Application Development," guides us in evaluating the business case for proposed investments in information system acquisition, development, and maintenance, and in conducting reviews to determine whether a project is progressing in accordance with project plans. The "V" life cycle interested me most. It is a variant of the waterfall technique that emphasizes the relationship between development and testing. The V-model helps IS auditors review all relevant phases, report independently to management on adherence to planned objectives and company procedures, and identify selected parts of the system in which to become involved on the technical side on the basis of their skills.
I agree with you that the V-model helps auditors review all relevant phases, report independently to management on adherence to planned objectives and company procedures, and get involved in the technical aspects of selected parts of the system based on their skills. It is ideal for constrained projects that need to keep to a fairly tight schedule.
Chapter 3.6 of the CISA manual focuses on virtualization and cloud computing environments. Since my start in the MIS department I've had a particular interest in cloud computing and the overall existence of "the cloud" (ooo aahhhh)! The cloud, as we learned in our last class session, was developed by Amazon in the early years of what would become known as AWS. From the CISA chapter we know that there are two types of virtualization/cloud computing: 1) bare-metal/native virtualization, and 2) hosted virtualization. The AWS cloud environment started as a bare-metal environment, running directly on servers positioned all across the world. In today's world, however, operating systems such as Windows, Linux, and macOS can allow hosted cloud environments to run through an application, which makes for a more convenient experience for the user.
Great comments, and thank you for sharing the two types of cloud/virtualization computing. It is true that we have many different operating systems now, such as Windows and Linux, so users can choose the system they prefer for different operating experiences.
In CISA 3.5, the chapter breaks down the basic phases of the software development life cycle: requirements gathering/definition, design, programming, testing, implementation, and post-implementation review. Organization-centric computing usually relies on the traditional SDLC to meet business needs such as collecting, storing, archiving, and sharing data, and it leans more heavily on the traditional SDLC and rigorous software engineering approaches than other kinds of business development do.
The section on implementation planning and end-user training stood out to me, mainly because I could relate it to the training I experienced when I first started working at my current serving gig. At the time, new servers had to go through ten training shifts and shadow each front-of-house position, such as host, busser, runner, server, and bartender, so as to have a thorough understanding of what each job entails and be able to perform those duties if needed. This way, the company gets its money's worth (all $2.83/hr of it), while the restaurant runs efficiently, creating more revenue for the company. On top of that, servers must be certified in responsible alcohol management and complete a course on handling customer PII. So, in a way, hiring a new server is like implementing a new system that requires data transfer and security controls specific to that company.
ISACA's "IT Audits of Cloud and SaaS" and CISA 3.6, "Virtualization and Cloud Computing Environments," both stress that in order to develop a decent audit program, it is necessary to have a good understanding of virtualization and cloud service provider (CSP) environments. Why? Because these environments have security risks as well. A virtualized computing environment consists of a server, guest machines, and a hypervisor. A virtualization hypervisor comes in one of two forms: bare metal/native or hosted. A bare-metal hypervisor runs directly on the hardware; no host OS is needed, and the virtualized components function as if they had direct access to the hardware. A hosted hypervisor, on the other hand, requires you to install an OS first, because the hypervisor coordinates resource usage in the virtual environment through that OS. An example of hosted virtualization is VMware Workstation. Hosted virtualization allows the user to run more than one OS on top of a single host computer.
Hi Raisa, I also chose to write about chapter 3.6, and I like how you brought the security aspect into your post. Hosting a cloud platform on a third-party system poses an inherent risk to the customer. In many cases, bare-metal and hosted cloud platforms can carry the same level of security risk; in the end, the applications are hosted on servers somewhere across the world, so they are definitely still vulnerable.
Feng Gao says
Chapter 3.5 is about the traditional SDLC approach and what I found the most interesting is the figure named verification and validation in that the different layer of the match between the requirement and the testing offers me a clear and structural sight about how the system could response to the internal and external requirements. To explain it further, the core concept from this figure is that the system is gradually ameliorated by different requirements at different stages. The requirements are the internal detailed design, architecture design, functional requirement and ultimately, the user requirement. As a response, the system would do the unit test, integration test, system test and acceptance test in order to positively purify itself and satisfy the feedback of the users. This makes the entire system a positive and two-way system instead of a passive system which only waits for the human modification. Thus, this is especially interesting for me in that it reminds me the SDLC ought to be a two-way process.
The thing that I found most interesting in chapter 3.6 is the risk of virtualization and the initiation of the cloud system into the management information system. It reminds me that while the public are enjoying the advantages provided by the virtualized internet, there might be potential risk hind inside. As it has been mentioned, one of the most frequently seen risks of the virtualization is the possibility that the information such as files, documents and so forth in both the host OS or the guest OS to be attacked by the hackers online. In addition, the higher rick of the snapshot than the image also interests me in identify the risk level of items. It says that the snapshot is more at risk because it is a Random Access Memory (RAM) which is a new concept for me. I learnt that the Random Access Memory (RAM) would make the data more sensitive than the conventional ones. This concept could also be applied in the real life that the snapshot stored on the phone needs to be protected with a special code in order to endure their confidentiality.
In chapter 3.9, the list of risks of current system is the most interesting thing as far as I am concerned. The reason why I reckon it interesting and useful is that it reminds me the analysis of the eventual risk ought not to be limited to the internal flawed points of the system but also to the external influence. For example, in the figure, it has listed the internal problems like the deficits in functionality, the deficits in information supply and so forth which are typical the internal problems of the system. However, apart from the internal deficits, it also mentions the external factors such as the insufficiency of the cost in the project, the changed business requirement in the future which is generated from the external environment. Thus, it reminds me that in order to ensure the good functionality and success of a system, both the internal design and the external support and requirement ought to be taken into consideration. What’s more, the developer needs to have a long perspective which helps the eventual changes in the future. It is in the interaction of the potential influential factors that the preliminary system could be developed.
Imran Jordan Kharabsheh says
Hello,
By far one of the longest responses I’ve had to read in regards to analyzing the CISA book sections, I could sense that you found little interesting nuggets of knowledge within every section that made it difficult for you to decide to write about just one thing. Within chapter 3.5 alone, you talked about the intricacies of various types of requirements and how each of these requirements needs a method to test whether these requirements have been met and if the new changes are compliant. For chapter 3.6, you took your time analyzing some of the more common risks associated with virtualization and even took an example of how RAM within servers or other devices can be vulnerable. And for chapter 3.9, which focused on risks associated with implemented systems and long-term planning, you really homed in on how important design and support are for general success and long-term sustainability of information system projects.
Zhu Li says
SDLC Cycle represents the process of developing software.
During the testing phase, testing starts once the coding is complete and the modules are released for testing. In this phase, the developed software is tested thoroughly and any defects found are assigned to developers to get them fixed.
Retesting, regression testing is done until the point at which the software is as per the customer’s expectation. Testers refer SRS document to make sure that the software is as per the customer’s standard.
Yuchong Wang says
The goal of a training plan is to ensure that the end user can become self-sufficient in the operation of the system. What I found interesting is the end-user training section. It tells us one of the most important keys in end-user training is to ensure that training is considered and a training project plan is created early in the development process. To develop a training strategy, the organization must name a training administrator who will identify users who need to be trained with respect to their specific job functions. These are some consideration given to the following format and delivery mechanisms:
-Case studies
-Role-based training
-Lecture and breakout sessions
-Modules at different experience levels
-Practical sessions on how to use the system
-Remedial computer training (if needed)
-Online sessions on the web or on a CD-ROM
The training administrator needs to record student information in a database and their feedback for improving the training course further.
Yuan Liu says
Before creating your training program, it is important for you as the trainer to do your homework and research your company’s situation thoroughly. By gathering information in several key areas, you better prepare yourself to create a relevant and customized training plan for your company.
Objective 1: Determine what training is needed.
Objective 2: Determine who needs to be trained.
Objective 3: Know how best to train adult learners.
Objective 4: Know who your audience is.
Objective 5: Draw up a detailed blueprint.
Zhu Li says
The software development life cycle (SDLC) is a framework that defines the steps involved in the development of software. It covers the detailed plan for building, deploying and maintaining the software. The SDLC defines the complete cycle of development, i.e., all the tasks involved from gathering requirements through maintaining the product.
SDLC is a process which defines the various stages involved in the development of software for delivering a high-quality product. SDLC stages cover the complete life cycle of a software i.e. from inception to retirement of the product. Adhering to the SDLC process leads to the development of the software in a systematic and disciplined manner.
The purpose of the SDLC is to deliver a high-quality product that meets the customer’s requirements. The SDLC defines its phases as requirement gathering, designing, coding, testing, and maintenance. It is important to adhere to the phases to deliver the product in a systematic manner. For example, suppose software has to be developed and a team is divided to work on features of the product, with each member allowed to work however they want: one developer decides to design first, another decides to code first, and a third starts with the documentation. Without a shared process, the pieces will not fit together.
A software life cycle model is a descriptive representation of the software development cycle. SDLC models might have a different approach but the basic phases and activity remain the same for all the models.
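The "adhere to the phases" idea above can be shown with a toy Python sketch. The phase names follow the post; the `can_enter` helper is hypothetical and exists only to make the ordering rule concrete: a phase may start only once every earlier phase is complete.

```python
# Phase names as listed in the post, in their required order.
SDLC_PHASES = ["Requirement gathering", "Designing", "Coding",
               "Testing", "Maintenance"]

def can_enter(phase, completed):
    """A phase may start only when every earlier phase is in `completed`."""
    idx = SDLC_PHASES.index(phase)
    return all(p in completed for p in SDLC_PHASES[:idx])

# The developer who wants to code before design is finished is caught here:
assert can_enter("Coding", {"Requirement gathering", "Designing"})
assert not can_enter("Coding", {"Requirement gathering"})
```

Real life-cycle models (V-model, iterative, agile) relax or overlap these gates, but each still defines explicitly which activities may proceed in parallel.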
Penghui Ai says
In CISA Chapter 3.5, “Business Application Development,” the author mentions two major categories into which all developed business application systems fall: organization-centric and end-user-centric computing. The purpose of an organization-centric application is to collect, collate, store, archive and share information with business users and various applicable support functions on a need-to-know basis. The objective of an end-user-centric application is to provide different views of data for users’ performance optimization.
Organization-centric application projects usually use the SDLC or other more detailed software engineering approaches for development. End-user-centric applications are developed using alternative development approaches.
Haixin Sun says
Hi, thank you for discussing Organization-centric and End-user-centric computing.
Mei X Wang says
Hi, I really like your recognition of why certain organizations gear towards the traditional SDLC and why end-user computing is more geared towards different approaches.
Panayiotis Laskaridis says
Chapter 3.6
I chose this section because it ties in the business applications of system implementation. When running a website or an application, e-commerce is the primary profit driver. Without a secure and efficient e-commerce system, a business cannot sell its products or services. It is crucial to the business that these systems are set up securely so as not to have any of their data breached. If a customer’s information is leaked, this could permanently damage the business’s reputation both online and off. Furthermore, a company with more employees might consider having its own email domain. Once again, this system has to be very secure. Any hacker can send in a phishing email and gut an organization’s systems from the inside.
Feng Gao says
I like your point. E-commerce has become an integral part of business in the modern world. With the help of e-commerce web design, you get the opportunity to make your products and services available to customers 24 hours a day. An online store is open all day, every day, meaning your customers can visit your store at all times, no matter what their schedule might be. The best thing about it is buying options that are quick, convenient and user-friendly, with the ability to transfer funds online.
Shuyue Ding says
The interesting thing that I found is the post-implementation review, which assesses the adequacy of the system, evaluates the project cost or ROI, develops recommendations that address the system’s inadequacies and deficiencies, and assesses the development project process. More importantly, the information needed for the post-implementation review should be identified during the feasibility and design phases and collected during each stage of the project in order to make the review effective. I found this interesting because the planning stage is even more critical than I thought it would be: people who are planning the project need to run the project in their minds before actually running it, which is difficult and takes a lot of experience.
Panayiotis Laskaridis says
Agreed. The planning stage is very difficult because you have to envision and plan the entire project. Depending on the precedent, this can be extremely difficult.
Raisa Ahmed says
Good explanations, Shuyue and Panayiotis. Adding to what you guys wrote, a post-implementation review is a process to evaluate whether the objectives of the project were met. Post-implementation review helps to identify project successes, deliverables and ways to improve the areas that did not meet expectations. The review can even be used as a blueprint for the next project.
Deepa Kuppuswamy says
Hi Shuyue, good points, but I think you have confused the concept of a post-implementation review with a pre-implementation review. The points you mentioned about assessing system adequacy, project cost and ROI would be part of a pre-implementation review.
As Raisa mentioned above, a post-implementation review is a process to evaluate whether the objectives of the project were met and it is conducted after completing a project to determine how effectively the project was run, to learn lessons for the future, and to ensure that the organization gets the greatest possible benefit from the project.
Imran Jordan Kharabsheh says
Through reading chapter 3.5 of the CISA book on “Business Application Development,” I felt that I had reviewed and enlightened myself on some of the more intricate details regarding the software development life cycle. It was refreshing to look through the CISA book’s descriptions of each of the basic phases of the software development life cycle: Requirements Gathering/Definition, Design, Programming, Testing, Implementation, and Post-Implementation Review. Of all the phases I revised, the one that I learned the most from had to be the post-implementation review, because I had not learned much about that part of the software development life cycle prior to reading this section. With the post-implementation review, organizations tend to assess the success of a project’s development and implementation from multiple perspectives. These include the financial perspective, through a measure of return on investment, and the operational perspective, by reviewing work productivity and employee feedback.
Shuyue Ding says
Hi, Jordan:
I also found the post-implementation review interesting, and it got me thinking that a knowledgeable and experienced project manager would be a key factor in an IT project’s success. Without someone like that, there would not be a good post-implementation review for the IT project, and the company would face issues like evaluating the project’s ROI inaccurately.
Deepa Kuppuswamy says
The topic which I found very interesting is the “Data Migration” process under Chapter 3.5.
Data conversion is required if the source and target systems utilize different field formats, sizes, database structures or coding techniques. For example, a drop-down menu might represent the colours Red, Blue and Green as R, B and G in the new system. The objective of data conversion is to convert existing data into the new required format, coding and structure while preserving the meaning and integrity of the data.
The data conversion process must provide some means, such as audit trails and logs, that allows for verification of the accuracy and completeness of the converted data. The following factors need to be considered during a data migration project: how long the migration will take, the amount of downtime required, and the risk to the business due to technical compatibility issues, data corruption, application performance issues, and missed data or data loss. These factors should be evaluated before migration in order to avoid migration challenges.
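The colour-code example plus the audit-trail requirement can be sketched in Python. This is a toy illustration under assumed conventions: the record layout, the `id` and `colour` field names, and the `convert_records` helper are all hypothetical. The key design point, per the text, is that every record leaves a verifiable trace and that unconvertible records are flagged rather than silently dropped.

```python
# Mapping from the old system's values to the new system's codes
# (the Red/Blue/Green example from the post).
COLOUR_MAP = {"Red": "R", "Blue": "B", "Green": "G"}

def convert_records(records, audit_log):
    """Convert records to the new coding scheme, logging every outcome."""
    converted, rejected = [], []
    for rec in records:
        old = rec.get("colour")
        if old in COLOUR_MAP:
            converted.append(dict(rec, colour=COLOUR_MAP[old]))
            audit_log.append(("converted", rec["id"], old, COLOUR_MAP[old]))
        else:
            rejected.append(rec)  # flag for manual review, never drop silently
            audit_log.append(("rejected", rec["id"], old, None))
    return converted, rejected

log = []
ok, bad = convert_records(
    [{"id": 1, "colour": "Red"}, {"id": 2, "colour": "Purple"}], log)
```

Counting the log entries against the source record count is one simple completeness check an auditor could perform after the migration.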
Ryu Takatsuki says
The one thing that I took away from the CISA reading is about planning the implementation of infrastructure. The article mentions that, to ensure the quality of the results, it is necessary to use a phased approach to fit the entire puzzle together. There are four different phases: the procurement phase, delivery time, the installation plan and the installation test plan. In my opinion, the installation test plan is an especially important phase. A test plan is a document detailing the objectives, resources, and processes for a specific test of a software or hardware product. It is usually prepared by, or with significant input from, test engineers, and it typically contains a detailed understanding of the eventual workflow. From this chapter, I learned about plans for implementing infrastructure.
Yuqing Tang says
“IT Audits of Cloud and SaaS” introduces the components of cloud computing. As the author mentions, some of the key factors for management when choosing an IaaS provider are flexible performance (including scalability) and availability while meeting physical and virtual security needs. Third-party service providers provide customers with hardware, operating systems and other software, servers, storage systems and various other IT components in a highly automated delivery model. In some cases, they can also handle tasks such as continuous system maintenance, data backup, and business continuity. This flexibility is why IaaS is widely used as a cloud computing service.
There are many considerations for choosing SaaS too. Applications are installed by vendors or service providers and can be accessed over a network, usually the Internet. This pattern is often referred to as on demand software, which is the most mature cloud computing model because of its high degree of flexibility, proven support services, and strong scalability, which can reduce customer maintenance costs and investment.
Yuqing Tang says
Sorry, this is for question 3.
Yuqing Tang says
CISA Chapter 3.5, “Business Application Development,” explains the methods of changeover. Among the three methods, parallel changeover is the most widely used for switching users from the existing system to the replacing system. Parallel changeover provides an opportunity to compare the operational results of the new system with the old system, and it can give a fair evaluation of the time requirements, error frequency and work efficiency of the new and old systems. The risk is small, the performance of the new and old systems can be compared simultaneously during the transition, and the primary system operators and other concerned personnel can be fully trained. Although this method is expensive, it is more secure and can preserve data that might otherwise be lost in the transition.
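The comparison step at the heart of parallel changeover can be sketched as a small Python reconciliation routine. The two "systems" here are hypothetical stand-in functions, not anything from the CISA text; the point is that both systems process the same inputs during the parallel period and every discrepancy is collected for the go/no-go evaluation.

```python
def old_system(order):
    """Legacy pricing logic (hypothetical stand-in)."""
    return order["qty"] * order["unit_price"]

def new_system(order):
    """Replacement pricing logic (hypothetical stand-in)."""
    return order["qty"] * order["unit_price"]

def parallel_run(orders):
    """Feed identical inputs to both systems and return any mismatches."""
    mismatches = []
    for order in orders:
        legacy, replacement = old_system(order), new_system(order)
        if legacy != replacement:
            mismatches.append((order, legacy, replacement))
    return mismatches  # an empty list supports a fair evaluation of the new system
```

Running both systems like this is exactly why the method is expensive (double processing, double operations staff) and also why it is the safest of the changeover options.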
Sarah Puffen says
Any type of major change that impacts the productivity of a business, or the confidentiality/availability/integrity of data, should be taken seriously, and really shouldn’t be the area where management would look to cut corners and save money. However, I think we can assume that it does happen from time to time due to budget constraints or any other setbacks during a project that eat up resources. Change is never easy, and as we saw in the Mudra case study, a plan of action needs to be considered early on so to avoid pitfalls or resistance during the changeover period.
Xinye Yang says
Hey Yuqing
I also think that parallel is the most used approach; it saves time and is effective. The fundamental concept of a parallel architecture is this: given a series of tasks to perform, divide those tasks into discrete elements, some or all of which can be processed at the same time on a set of computing resources.
Haixin Sun says
One interesting point is virtualization. The concept of virtualization is generally believed to have its origins in the late 1960s, when mainframes enabled robust time-sharing option (TSO) environments with several users accessing the OS concurrently. However, the high cost of resources at that time was the main reason these computing capabilities could not be deployed at the scale seen in today’s virtualization model. The same rationale behind the TSO model is being applied today via virtualization, because the majority of current server technologies are underutilized and capable of supporting much more than one server function.
Yuchong Wang says
Hi Haixin,
I also find it amazing how our databases went from physical to virtual in less than 50 years. Now virtualization is a very important part of our lives because it creates so much convenience, and it also saves companies a lot of money because a physical database is more expensive and harder to maintain. However, virtualization requires great care, because hackers can always find a way in if you allow them to. Therefore, updating the system and understanding the system are key to staying safe with a virtual database.
Yuqing Tang says
Hi Haixin, thanks for pointing out the history of virtualization; I just have a few things to add. In the 1960s, mainframes were certainly not portable, so the quality and availability of dial-up and leased telephone lines improved rapidly, along with modem technology that enabled the mainframe to be accessed as a virtual terminal. In fact, because of advances in technology and the economics of microprocessors, this model of computing directly led to the creation of the personal computer in the 1980s. In addition to data transmitted over telephone lines, local networks eventually made continuous access to the Internet possible.
Yuan Liu says
There are several project phases of physical architecture analysis that should be focused on, including review of the existing architecture, analysis and design, drafting functional requirements, vendor and product selection, writing functional requirements, and proof of concept. To start the process, the latest documents about the existing architecture must be reviewed. After reviewing the existing architecture, the analysis and design of the actual physical architecture has to be undertaken, adhering to good practices and meeting business requirements. With the first physical architecture design in hand, the first draft of the functional requirements is composed. While the draft functional requirements are being written, the vendor selection process proceeds in parallel. After the draft functional requirements are finished and feed the second part of the project, the functional requirements document is written, which will be introduced at the second architecture workshop with staff from all affected parties.
Ryu Takatsuki says
Hi Yuan, I like your idea about physical architecture analysis. Based on my learning, a physical architecture model is an arrangement of physical elements that provides the solution for a product, service, or enterprise. It is intended to satisfy logical architecture elements and system requirements. Therefore, I think your point is clear and easy to understand.
Xinye Yang says
CISA Chapter 3.5, “Business Application Development,” guides us in evaluating the business case for proposed investments in information system acquisition, development and maintenance, and in conducting reviews to determine whether a project is progressing in accordance with project plans. The “V” life cycle interested me most; it is a variant of the waterfall technique that emphasizes the relationship between development and testing. The V-model helps IS auditors to review all relevant phases, to report independently to management on adherence to planned objectives and company procedures, and to identify selected parts of the system and become involved in technical aspects on the basis of their skills.
Haixin Sun says
I agree with you that the V-model helps auditors to review all relevant phases, report independently to management on adherence to planned objectives and company procedures, and identify selected parts of the system to become involved in on the basis of their skills. It is ideal for small, restricted projects that need to keep to a fairly tight schedule.
Alexander Reichart-Anderson says
Chapter 3.6 of the CISA Manual focuses on Virtualization and Cloud Computing Environments. Since my start in the MIS department I’ve had a particular interest in cloud computing and the overall existence of “the cloud” (ooo aahhhh)! The cloud, as we learned last class session, was popularized by Amazon in the early years of what would become known as AWS. From the CISA chapter we know that there are two different types of virtualization: 1) bare metal/native virtualization, and 2) hosted virtualization. The AWS cloud environment started as a bare metal environment, as it ran directly on servers positioned all across the world. However, in today’s world there are operating systems (OSs) such as Windows, Linux, and macOS that allow hosted virtualization and cloud environments to run through an application. This allows for a more convenient experience for the user.
Penghui Ai says
Hi Alex,
Great comments and thank you for sharing the knowledge of the 2 types of cloud/virtualization computing. It is true that we have many different operating systems now, such as Windows and Linux. Users can choose the system they prefer to have different operating experiences.
Mei X Wang says
In CISA 3.5, the chapter breaks down the basic phases of the software development life cycle: Requirements Gathering/Definition, Design, Programming, Testing, Implementation, and Post-Implementation Review. Organization-centric computing usually relies on the traditional SDLC to meet business needs such as collecting, storing, archiving and sharing data. These business applications rely more heavily on the traditional SDLC and more rigorous software engineering approaches than other kinds of business development.
Sarah Puffen says
The section on implementation planning and end-user training stood out to me, mainly because I was able to relate this to the training I experienced back when I first started working at my current serving gig. At the time, new servers had to go through 10 training shifts and shadow under each different front of the house position, such as host, busser, runner, server, and bartender, so to have a thorough understanding of what each job entails and be able to perform those duties if needed. This way, the company can get their money’s worth (all $2.83/hr of it), while the restaurant runs in an efficient manner, thus creating more revenue for the company. On top of that, servers must be certified in responsible alcohol management, and complete a course on handling customer PII. So, in a way, hiring a new server is like implementing a new system that requires data transfer and security controls that are specific to that company.
Raisa Ahmed says
CISA Chapter 3.6
ISACA’s “IT Audits of Cloud and SaaS” and CISA 3.6, “Virtualization and Cloud Computing Environments,” both stress the fact that in order to develop a decent audit program, it is necessary to have a good understanding of the virtualization/cloud service provider (CSP) environments. Why? Because these environments have security risks as well. The virtualized computing environment consists of a server, guest machine, and hypervisor. A virtualization hypervisor comes in one of two forms: bare metal/native or hosted. The bare metal hypervisor refers to direct access to the hardware; no host OS is needed. The bare metal hypervisor allows the virtualized components to function as if they have direct access to the underlying hardware. On the other hand, the hosted hypervisor requires you to install an OS first, because the hypervisor coordinates resource usage in the virtual environment through the OS. An example of hosted virtualization is VMware. Hosted virtualization allows the user to run more than one OS on top of a single host computer.
Alexander Reichart-Anderson says
Hi Raisa, I also chose to write about chapter 3.6, and I like how you brought the security aspect into your post. Hosting a cloud platform on a 3rd-party system presents an innate risk to the customer. In many cases, bare metal and hosted cloud platforms can have the same level of security risks. In the end, the applications are hosted on servers somewhere across the world, so they are definitely still vulnerable.