Sys & Infrast Lifecycle Mngt 1

Information Technology Audit and Cybersecurity, Temple University

MIS 5203.001 ■ Spring 2021 ■ Wade Mackey

Unit 8 Reading Discussion Post

March 11, 2021 · 9 Comments

An important concept from this unit’s reading that stood out to me was normalization. Normalization is one of the main techniques applied when designing the logical database: it is the process database designers use to create intuitive, well-structured databases. Normalization takes complex inputs, such as data structures, and converts them into simple, non-redundant, low-maintenance structures. It achieves this non-redundancy by relying on functional dependence, which identifies which attributes depend on which others. Normalization is such an important concept in logical database design because it promotes this non-redundancy. Redundancy in database design is not only bad practice; it is also costly and error-prone. If a database stores the same data in multiple places, those copies may come to contradict each other, and the integrity of the database fails because no one knows which value is correct. Redundancy is also costly because you are paying to store the same data more than once. Database storage is already expensive, so it is important to prevent redundancy as much as possible.
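
To make the redundancy point concrete, here is a minimal sketch, using hypothetical table and column names and Python’s built-in sqlite3 module, contrasting a redundant design with a normalized one:

import sqlite3

conn = sqlite3.connect(":memory:")

# Redundant design: the customer's address is repeated on every order row.
# If the customer moves, every copy must be updated or the copies will
# contradict each other, and no one knows which value is correct.
conn.execute("""CREATE TABLE orders_redundant (
    order_id      INTEGER PRIMARY KEY,
    customer_name TEXT,
    customer_addr TEXT,   -- depends on the customer, not on the order
    order_date    TEXT)""")

# Normalized design: customer_addr is functionally dependent on customer_id,
# so it is stored exactly once in customers, and orders only reference it.
conn.execute("""CREATE TABLE customers (
    customer_id   INTEGER PRIMARY KEY,
    customer_name TEXT,
    customer_addr TEXT)""")
conn.execute("""CREATE TABLE orders (
    order_id      INTEGER PRIMARY KEY,
    customer_id   INTEGER REFERENCES customers(customer_id),
    order_date    TEXT)""")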

-Taylor Trench

Comments

  1. Elias Harake says

    March 19, 2021 at 10:03 am

    An important takeaway from this week’s reading that I think is worth mentioning is the idea of a systems development methodology. According to the CISA textbook, a systems development methodology is a structure that an organization uses to plan and control the development of information systems, software, and new business applications. The textbook also mentions three different models that can be used for a systems development methodology:
    – Traditional waterfall
    – V-shaped
    – Iterative
    According to the book, the traditional SDLC (waterfall) model and variants of it normally involve a life-cycle verification approach that ensures potential mistakes are corrected early and not solely during final acceptance testing. V-shaped models focus on the relationship between development phases and testing levels; the most granular level of testing, the unit test, occurs immediately after programs have been written. The iterative model is a cyclical process in which business requirements are developed and tested until the entire application is designed, tested, and ready to launch. During each iteration, the development process goes through every phase, from requirements through testing, and each subsequent cycle incrementally improves the processes and efficiency of the system.

  2. Jonathan Mettus says

    March 20, 2021 at 11:28 am

    One thing about normalization I found interesting is that there are situations where you might actually want to denormalize. Normalization reduces redundancy and maintenance, but denormalization can be done for performance reasons, with the accompanying risk of increased errors and inconsistency. It seems like a time-space tradeoff that determines how much normalization you want.

  3. Haozhe Lin says

    March 23, 2021 at 7:32 pm

    This unit’s reading helped me understand an important concept: database normalization. The logical design of a database determines the overall performance of the database and its applications, as well as where tuning can be applied. If the logical design is poor, all of the tuning methods will have only a limited effect on improving database performance. To get the logical design right, the normalization theory of databases must be followed. Normalization theory provides theoretical guidance and tools for logical database design; it reduces data redundancy, saves storage space, and speeds up insert, delete, and update operations.

    Besides, when designing the logical database, we should also consider breaking the normalization rules where appropriate, that is, a denormalization design, to reduce the number of indexes, tables, and join operations and so speed up queries. Common denormalization techniques include adding redundant columns, adding derived columns, and reorganizing tables.

    Adding redundant columns: sometimes the columns to be queried are distributed across different tables. If this kind of join query is run frequently, you can add the column to the other tables as needed, so that the same column exists in multiple tables; this is often used to avoid a join operation at query time. Its disadvantage is that it needs more disk space and increases the table-maintenance workload because of the integrity problem it introduces.
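
    As a rough illustration of this “add redundant columns” technique, here is a small sketch with hypothetical tables, using Python’s sqlite3 (not code from the reading):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
CREATE TABLE employees   (emp_id  INTEGER PRIMARY KEY, emp_name TEXT,
                          dept_id INTEGER REFERENCES departments(dept_id),
                          -- redundant copy of dept_name, added because the
                          -- employee/department join is queried very often
                          dept_name TEXT);
""")

# The frequent query no longer needs a join ...
rows = conn.execute("SELECT emp_name, dept_name FROM employees").fetchall()

# ... but every department rename must now be written in two places, which
# is exactly the extra maintenance and integrity work mentioned above.
conn.execute("UPDATE departments SET dept_name = 'Audit' WHERE dept_id = 1")
conn.execute("UPDATE employees   SET dept_name = 'Audit' WHERE dept_id = 1")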

    In a word, when designing the database logic, we must choose the database schema sensibly, according to the application environment and the specific situation of the real world.

    • Priyanka Ranu says

      March 25, 2021 at 6:45 pm

      I think normalization is quite an effective database design technique, as it reduces data redundancy and has the properties of simplicity and minimal maintenance. If there is redundant data, it will cause maintenance problems and waste disk space. There are three anomalies that occur when a database is not normalized: insertion, update, and deletion anomalies.

  4. Priyanka Ranu says

    March 25, 2021 at 5:56 pm

    Normalization is a database design technique that reduces data redundancy and has the properties of simplicity and minimal maintenance. The result of normalization is that every nonprimary key attribute depends on the whole primary key. The two purposes normalization is used for are eliminating redundant data and ensuring that data dependencies make sense.
    There are certain rules under normalization, and the book mentions two frequently used ones (a small sketch follows below):
    1. Second normal form (2NF) – each nonprimary key attribute is identified by the whole key.
    2. Third normal form (3NF) – nonprimary key attributes do not depend on each other.
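
    A small sketch of what these two rules look like in practice, with hypothetical tables and Python’s sqlite3 (not an example from the book):

import sqlite3

conn = sqlite3.connect(":memory:")

# 2NF violation: product_name depends only on product_id, i.e. on part of the
# composite key (order_id, product_id) rather than on the whole key.
conn.execute("""CREATE TABLE order_lines_unnormalized (
    order_id     INTEGER,
    product_id   INTEGER,
    product_name TEXT,      -- identified by product_id alone
    quantity     INTEGER,
    PRIMARY KEY (order_id, product_id))""")

# 3NF: nonprimary key attributes must not depend on each other, so city
# (which depends on zip_code, not on customer_id) moves to its own table.
conn.executescript("""
CREATE TABLE products  (product_id  INTEGER PRIMARY KEY, product_name TEXT);
CREATE TABLE zip_codes (zip_code    TEXT    PRIMARY KEY, city TEXT);
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT,
                        zip_code    TEXT REFERENCES zip_codes(zip_code));
""")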

  5. Cami Chen says

    April 8, 2021 at 4:33 pm

    One of the key points in chapter 9 that I want to discuss is the benefit of assigning a default value to a field. First, a default value can reduce entry time and errors, and it makes it a lot easier to insert new rows into a table. When we create a new record, a field set up with a default value helps ensure data accuracy; for example, the user can create a new record and Access fills the default into that field automatically. Although default values can help in designing and developing the database, users should not rely on them only to avoid working with nulls, and they should consider the purpose of the values beforehand. However, since default values can still be overridden in the initial insert, we can use them in reasonable ways.
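
    As a rough sketch of the idea with a hypothetical table, here it is in Python’s sqlite3 rather than Access:

import sqlite3

conn = sqlite3.connect(":memory:")

# status and order_date get defaults, so a new row picks them up automatically,
# which saves entry time and avoids nulls when the user omits those fields.
conn.execute("""CREATE TABLE orders (
    order_id   INTEGER PRIMARY KEY,
    status     TEXT NOT NULL DEFAULT 'open',
    order_date TEXT NOT NULL DEFAULT CURRENT_DATE)""")

# Defaults apply here ...
conn.execute("INSERT INTO orders (order_id) VALUES (1)")
# ... and can still be overridden in the initial insert.
conn.execute("INSERT INTO orders (order_id, status) VALUES (2, 'closed')")

print(conn.execute("SELECT * FROM orders").fetchall())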

  6. Shaneil Wilson says

    April 21, 2021 at 5:43 pm

    In chapter 9 we continue in the design phase, which comprises two subphases, logical design and physical design. Once a clear and precise logical database model exists, the physical design is produced: the technical specifications for the computer files and databases in which the data will be stored. In logical database design, you use a process called normalization, which is a way to build a data model that has the properties of simplicity, nonredundancy, and minimal maintenance. The result of the logical design is then translated into the physical database design, consisting of data types, stored records/data structures, file organization, and media type. Auditors should have knowledge of how systems are designed. We should be able to identify and understand the controls designed to ensure the authorization, accuracy, and completeness of data input and output. It also requires that IS auditors have adequate knowledge of control techniques and of how each may be evidenced in reports, logs, and audit trails.

  7. Zibai Yang says

    April 22, 2021 at 8:08 pm

    Data normalization scales data into a small, specific interval. It is often used when processing comparison and evaluation indexes to remove the units attached to the data and convert it into dimensionless pure values, so that indexes with different units or magnitudes can be compared and weighted.
    At present there are many data standardization methods. They can be divided into linear methods (such as the extreme-value method and the standard-deviation method), broken-line methods (such as the three-fold-line method), and curved methods (such as the half-normal distribution). Different standardization methods will have different effects on the evaluation results of a system; unfortunately, there is no general rule to follow when selecting a data standardization method.
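
    A minimal sketch of two of the linear methods mentioned above, the extreme-value (min-max) method and the standard-deviation (z-score) method, in plain Python:

def min_max_scale(values):
    """Extreme-value method: map values onto the interval [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def z_score_scale(values):
    """Standard-deviation method: rescale to zero mean and unit deviation."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

# Indexes with different units or magnitudes become dimensionless and comparable.
scores = [70, 85, 90, 100]
print(min_max_scale(scores))
print(z_score_scale(scores))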

  8. Jason Burwell says

    May 5, 2021 at 8:58 am

    I find denormalization interesting, and it’s important to point out that you don’t need to use denormalization if there are no performance issues in the application; but if you notice the system slowing down, or you’re aware that this could happen, then you should think about applying the technique. Before going with it, you can always consider other options, like query optimization and proper indexing. You can also use denormalization when you’re already in production, but it is better to solve such issues in the development phase.
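
    For instance, before denormalizing, a slow lookup can often be fixed with proper indexing alone; here is a small sketch with a hypothetical table, using Python’s sqlite3:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)""")

query = "SELECT * FROM orders WHERE customer_id = 42"

# Without an index, SQLite has to scan the whole table for this filter.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# With an index on customer_id, the same query uses an index search instead,
# and the normalized schema does not have to change at all.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())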

