-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Here is the exercise.
And here is the spreadsheet to complete the exercise [In-Class Exercise 8.2 – OnTime Airline Stats [Jan 2014].xlsx].
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Leave your response as a comment on this post by the beginning of class on October 26, 2016. Remember, it only needs to be three or four sentences. For these weekly questions, I’m mainly interested in your o […]
-
Like most students, I use GPA, or my grade in a single course or semester, as one of my key performance metrics. The overall GPA or grade in a particular course is a reliable indicator of my performance. It is measurable, since we receive numeric scores on a 0–4.0 scale. It is achievable, since a low GPA indicates that I need to improve or work harder, while a high GPA shows that the hard work has paid off. It is relevant: for example, a high GPA gives you an edge over other candidates for job, internship, or even grad school applications. It is also time-bound, since it has fluctuated a lot over my four years.
-
A common KPI that I like to use is, “how many tasks am I completing within one hour?” This conforms to the SMART criteria by being specific, measurable, and time-bound. Normally I set a goal of getting three to five things done within the hour, and if I don’t reach it, I know I either need to stop procrastinating, grab another cup of coffee, or both. It helps me track progress, and I can determine whether what I am doing is relevant and whether I am achieving enough in a given time frame.
-
A typical KPI that I use is my computer’s frame rate, which I use to judge performance in games or when watching videos. Typically the ultimate goal is 60 frames per second (because my monitor’s refresh rate is 60 Hz); however, I consider 30 frames per second acceptable when a game is very graphically intensive. With respect to the SMART guidelines, 60 fps is specific because there is an exact number to aim for. It is measurable, especially because most games have a built-in frame rate counter. It is attainable in most cases because graphical options can be lowered to increase performance (albeit at reduced visual quality). It is realistic because my hardware generally meets the minimum or recommended specifications of my applications. Finally, it is time-based because it is checked and measured every second, although sometimes it is based on the average over the last few seconds.
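As a rough illustration of the “average over the last few seconds” approach to measuring frame rate, here is a minimal Python sketch of a rolling FPS counter (the 3-second window and the simulated frame times are arbitrary choices for illustration, not tied to any particular game or monitoring tool):

```python
from collections import deque
import time

frame_times = deque()  # timestamps of recently rendered frames

def record_frame(now=None, window=3.0):
    """Record one frame and return the average FPS over the last `window` seconds."""
    now = time.monotonic() if now is None else now
    frame_times.append(now)
    # Drop timestamps that fall outside the averaging window.
    while frame_times and now - frame_times[0] > window:
        frame_times.popleft()
    elapsed = now - frame_times[0] if len(frame_times) > 1 else window
    return (len(frame_times) - 1) / elapsed if elapsed > 0 else 0.0

# Simulate 60 frames arriving about 16.7 ms apart (roughly a 60 Hz display).
t = 0.0
for _ in range(60):
    t += 1 / 60
    fps = record_frame(t)
print(round(fps, 1))  # prints a value close to 60
```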
-
A KPI that I use on a regular basis is GPA. I use it regularly because all of my grades affect it, and keeping it up matters since it is an important number for all students. It conforms to the SMART criteria because it is specific, measured by grades, achievable by anyone who has grades, relevant to all students, and time-bound, since it is cumulative across all of your years in school.
-
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Here is the exercise.
-
– Number of meetings attended over the span of the project
– Number of ideas submitted during brainstorming
– Number of tasks completed per week
-
Number of references found
Team meetings attended over the course of the project
Average percentage of the workload taken on by each individual member
-
Cameron Markt, Dylan O’Neill, Wilbert Castillo
Student does all assigned work on time
# of Hours spent working on the project
Percentage of final work contributed per student
-
Group: Marek Chorobski, Aaryan Patel, Parth Desai, Haibum Chung
Number of notes written within the time frame of the project
Length of notes written within the time frame of the project
Time spent working or not working
-
Group 5
1) Amount of time spent working on group project
2) number of meetings attended
3) Number of resources provided pertaining to the project.
-
Jon Pezzner
John Nilsen
Patrick Baran
Ryan Eckrote
1. Number of questions answered per member.
2. Amount of time each member spent working on the assignment.
3. Number of KPIs proposed at member meetings.
4. Number of ideas proposed at member meetings.
-
Group: Maria Antoni, Sean Coogan, Hakeem Ellis, Mirelle Basha
Percentage of the workload completed over the duration of the project
Amount of individual, acceptable work turned in on time
Number of ideas contributed throughout the project
-
Vinny, Rachel, Cary, Sheila
hours spent
tasks completed
meetings attended
-
number of hours spent
number of tasks completed
number of meetings attended
-
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Some quick instructions:
You must complete the quiz by the start of class on October 24, 2016.
When you click on the link, you may see a Google sign in screen. Use your AccessNet ID and password to […] -
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Here is the exercise.
And here is the dataset you’ll need [Vandelay Orders by Zipcode.xlsx].
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Leave your response as a comment on this post by the beginning of class on October 19, 2016. Remember, it only needs to be three or four sentences. For these weekly questions, I’m mainly interested in your o […]
-
I have worked with Excel in a couple of my projects while interning. The most common mistake I have made is #4, sorting a spreadsheet without including all of the columns. When you are given large data sets, it’s very easy to overlook a few columns, and as a result the analysis is unlikely to be accurate. Moreover, when you are required to present graphs, or even perform basic Excel functions like VLOOKUP or MATCH, the data presented won’t be accurate either.
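To make mistake #4 concrete, here is a minimal pandas sketch (the column names and values are invented for illustration and are not from any course dataset) showing how sorting a single column on its own scrambles row alignment, while sorting the whole table keeps each row intact:

```python
import pandas as pd

# Invented order data, purely for illustration.
orders = pd.DataFrame({
    "order_id": [103, 101, 102],
    "customer": ["Acme", "Globex", "Initech"],
    "amount":   [250.0, 400.0, 150.0],
})

# The mistake: sorting one column in isolation detaches amounts from their rows.
broken = orders.copy()
broken["amount"] = broken["amount"].sort_values().values

# The fix: sort the entire table so whole rows move together.
correct = orders.sort_values("amount")

print(broken)   # amounts no longer match their order_id and customer
print(correct)  # rows are intact, just reordered
```

The same logic applies in Excel: selecting the full data range (or converting it to a Table) before sorting keeps the columns aligned, while sorting a single highlighted column silently pulls the rows apart.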
-
I have been guilty of a few of the issues listed in the article, but among the most embarrassing would be opening a CSV file directly in Excel. It’s an easy fix, as are the other issues listed, but it’s definitely something I want to avoid going forward. Along with a lot of the other points made in the comments so far, I agree that it’s always important to keep a backup when working with any materials online.
-
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Here are the instructions (in Word) (and as a PDF). Make sure you read them carefully! This is an assignment that should be done individually.
And here is the data file you’ll need: Vande […]
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Here is the exercise.
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Some quick instructions:
You must complete the quiz by the start of class on October 17, 2016.
When you click on the link, you may see a Google sign in screen. Use your AccessNet ID and password to […] -
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Leave your response as a comment on this post by the beginning of class on October 12, 2016. Remember, it only needs to be three or four sentences. For these weekly questions, I’m mainly interested in your o […]
-
http://projects.fivethirtyeight.com/2016-election-forecast/?ex_cid=rrpromo
This article will probably be cited the most on this discussion board, but that is because it’s the most prominently featured article on the FiveThirtyEight website, and it has phenomenal graphs and charts indicating each presidential candidate’s likelihood of winning the election. It is relevant to me because I’m interested in the forecast’s view of the probability of my favored candidate winning. At the time I viewed the site, it showed that Clinton had an 80% probability of winning compared to Trump’s 20%. The large disparity between the two candidates makes me suspect the source may be biased.
-
http://fivethirtyeight.com/features/election-update-are-trumps-polls-getting-worse/
This article discusses the Presidential Election, and it uses data to suggest that recent events have hurt Donald Trump’s stock in the election. This article is interesting to me because the Presidential Election is an intriguing current event for probably everyone living in the United States. Looking at data from current polls helps to give us a good gauge on what the result might be, especially using data visualizations of where the votes are going.
-
http://fivethirtyeight.com/features/first-debate-losers-arent-more-likely-to-rebound-in-the-second-debate/
Per usual, here’s another article about the 2016 election, along with infographics and data visualizations that explain why the “losing points” for both candidates are likely to be rough spots for them yet again, and vice versa. The hypothesis is supported by evidence showing how and why this happened to candidates in past elections. This is relevant to us because it ties together what we’ve been able to learn from Tableau as well as the class notes!
-
http://fivethirtyeight.com/features/the-year-of-the-cubs/
This article talks about the Cubs, who are the favorites to win the World Series this year. It is interesting to me because it gives you data on their past history as an organization. The Cubs have not won a World Series in 108 years. The article assesses the likelihood of that drought being broken, and it also shows a visual of the best regular-season team in every year and how those teams did in the postseason (playoffs). As a sports fan, it was cool to see how regular-season performance translates into the probability of winning in the postseason.
-
This article discusses the use of certain words during the past two presidential debates. The data shows how frequently each candidate used certain pronouns, which can tell a lot about the direction the debate took. For example, Donald Trump’s use of the word “she” dramatically increased from the first debate to the second because of how often he attacked Hillary Clinton, and Hillary Clinton’s use of the word “I” greatly increased because she had to defend her 30 years of work from Trump’s attacks. The data also shows which candidate used more varied speech and which one repeated similar sentences; Clinton outdid Trump in that data set by a considerable margin.
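A rough sketch of how such a pronoun count and a simple lexical-variety measure could be computed (the transcript snippets and speaker labels below are made up for illustration; this is not the article’s actual methodology):

```python
from collections import Counter
import re

# Invented snippets standing in for full debate transcripts.
transcripts = {
    "Trump, debate 2":   "She said it. She did it. I never said that, believe me.",
    "Clinton, debate 2": "I have spent thirty years working on this, and I know I can do it.",
}

pronouns = {"i", "she", "he", "we", "you"}

for speaker, text in transcripts.items():
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w in pronouns)
    variety = len(set(words)) / len(words)  # share of distinct words among all words spoken
    print(speaker, dict(counts), round(variety, 2))
```
-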
http://fivethirtyeight.com/features/the-year-of-the-cubs/
This article puts into perspective the season the Chicago Cubs are having and how it compares to other great MLB regular seasons from 1901–2016. As a sports fan, I like to see how the current era of sports compares with older eras, especially in this instance because the Cubs haven’t won a World Series since 1908, and they were far and away the best team in Major League Baseball this year. While the Cubs were the best regular-season team this year, the article explains that compared to the teams that won the most regular-season games throughout MLB history, this year’s Cubs don’t rank close to the top. However, the data seems to point to the Cubs ending their 108-year drought and winning the World Series this year.
-
http://fivethirtyeight.com/features/significant-digits-for-wednesday-oct-12-2016/
This article is the “daily digest” of data for October 12. It features an array of interesting data points related to current events and other relevant topics. It also features some points that are important but may not be covered by mainstream media. For instance, the article says that the UN has asked for 120 million dollars in aid for Haiti; Haiti is in dire need after Hurricane Matthew, yet not many people have talked about it. In addition, the article keeps you updated on the political polls, stating that Clinton is 15 points above Trump.
-
https://www.wired.com/2016/10/big-data-algorithms-manipulating-us/
This article was written by a former Wall Street marketer who used Big Data to formulate predictions about the market. It discusses a number of ways the average person is being manipulated through the use of Big Data by larger corporate structures. The most interesting point, I think, pertains directly to me and everyone else in the class: the author discusses the college ranking model, an algorithm that ranks all of the colleges in the country. The algorithm ranks colleges based on factors such as the number of accepted versus rejected students and graduation rates. The author discusses how this has changed the role of administrators from people dedicated to improving their schools to people dedicated to improving their schools’ rankings. Since the ranking doesn’t take the cost of a school into consideration, it has led to increased tuition rates across the country and bloated administrations, while reducing the effectiveness of a college education.
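To illustrate the kind of model the article criticizes, here is a toy scoring sketch (the college names, numbers, and weights are entirely invented; the real ranking formula is not given in the article or here). Note that cost and tuition never enter the score, which is exactly the omission the comment points out:

```python
# Invented data and weights, purely to illustrate a selectivity-plus-graduation-rate ranking.
colleges = [
    {"name": "College A", "applicants": 20000, "accepted": 4000, "grad_rate": 0.92},
    {"name": "College B", "applicants": 15000, "accepted": 9000, "grad_rate": 0.85},
    {"name": "College C", "applicants": 8000,  "accepted": 6400, "grad_rate": 0.70},
]

def toy_score(college):
    selectivity = 1 - college["accepted"] / college["applicants"]  # rewards rejecting more applicants
    return 0.5 * selectivity + 0.5 * college["grad_rate"]          # tuition and cost are conspicuously absent

for c in sorted(colleges, key=toy_score, reverse=True):
    print(c["name"], round(toy_score(c), 3))
```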
-
The article is about how, as the title suggests, analytics from various social media sites outperformed the professional pollsters. I found it a bit interesting and sad at the same time: if someone does this for a living, why are they unable to outperform these analytics, and if they can’t, why aren’t they using them themselves to make more accurate predictions?
-
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Here is the assignment. It is due by midnight on October 31, 2016.
Want extra credit? Enter your deliverable in the Temple Analytics Challenge! You don’t need to do anything more to the assig […]
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Hi all,
Gabe Stein–the author of one of the first articles we covered in our class “I’m Beating the NSA to the Punch by Spying on Myself”– is going to do a live WebEx on Monday Oct. 3rd. Please reply to t […]
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Another professor of MIS0855 (Shana Pote) has put together a video walkthrough of the first Tableau in-class activity that was done last Friday. Professor Pote kindly made the video available for our class, and I […]
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Here is the study guide for the first midterm exam.
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Here is the exercise.
And here is the graphic file you’ll need: Philadelphia Area Obesity Rates.png.
Right-click on the file and save it to your computer.
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Here is the exercise.
Before you start, save this Tableau file and the studentloans2013 Excel workbook to your computer.
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Some quick instructions:
You must complete the quiz by the start of class on October 10, 2016.
When you click on the link, you may see a Google sign in screen. Use your AccessNet ID and password to […] -
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Here is the assignment.
Here is the worksheet as a Word document to make it easy to fill in and submit (along with your Tableau file).
And here is the data file you will need to complete the assignment […]
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 1 month ago
Here is the exercise.
And here is the spreadsheet you’ll need to complete the exercise [In-Class Exercise 4.3 – FoodAtlas.xlsx].
Make sure you right-click on the Excel file link and select “Sa […]
-
Ermira Zifla wrote a new post on the site MIS 0855: Data Science Fall 2016 8 years, 2 months ago
Leave your response as a comment on this post by the beginning of class on September 28, 2016. Remember, it only needs to be three or four sentences. For these weekly questions, I’m mainly interested in your o […]
-
From Hoven’s article “Stephen Few on Data Visualization: 8 Core Principles”, I think the most important principle of good data visualization is “Simplification”. The graph presented should be simple, as in easy to understand. The visual should convey what it is trying to say in a simple manner to the audience. If the visual is complicated, it leads to ambiguity for the audience. Hence simplicity of the data visual, or of the data visualization tool, is the most important principle.
-
In this article by Hoven, the most important principle of the eight is the “ask why” principle. This principle is very important because it ties in not only the importance of the source of the data, but also what is causing the data to be collected and what the purpose of collecting the information is. After knowing why the data is being collected, data experts then need to apply this newly found information for more efficient operation in the particular field. Without knowing why the data appears the way it does, businesses would not be able to apply these concepts to increase the overall efficiency of their own business model. Overall, without knowing why results are occurring, change can never be implemented, and corporations cannot grow exponentially.
-
In my opinion, simplicity is the most important principle of data visualization that Hoven discusses, because when you look at a visualization you want it to be quick to read and concise. You want to limit all or most of the distractions that can pull the reader away from the main point. In the business world, if your employer asks you to create a visualization, they want it to be as clear and as simple as possible. They want to be able to look at it, get the information, and make a quick decision based on it. If you have a billion things going on, your employer will not be happy to have to search through and decipher a complex visualization.
-
I believe the most important core principle of data visualization is the principle of simplicity. When data is conveyed to someone through visuals, it is important that the main information is understood right away. Viewers can lose interest fast if there is too much to absorb at once. People viewing the data also do not want to spend too much time deciphering your visual. The main point being displayed should be clear and simple to grasp at first look for the most effective impact on its viewers.
-
From the eight principles given in the article “Stephen Few on Data Visualization: 8 Core Principles”, I believe the most important is “simplify”. With huge amounts of data, it is hard to understand and interpret everything that is going on. By simplifying, you are able to grasp the essence or core of the data through the visualization. It is difficult to comprehend everything at once, and one might become overwhelmed without simplifying the data into a clear, concise visualization of what matters most.
-
I’ll have to say that Simplify is the most important principle. The goal of a good data visualization is to capture the viewer quickly and display important data in a quickly comprehensible manner. All of the critical questions and other points of good data visualization are obviously important, but I think even attending and exploring fall under simplifying: you want the data to be quick and manageable, understandable and navigable, and all of that depends on making sure the data is simple enough to allow it. Of course, you never want to oversimplify the data, because then it begins to lose its meaning and significance, but an overcomplicated data visualization may as well be raw data.
-
In my opinion, “ask why” is the most important of the eight principles. Looking at visual data and knowing what’s happening is just the tip of the iceberg. Digging deeper and questioning why it’s happening will lead to actionable results. I was once told that if I’m stuck between decisions, I should ask myself “why” continuously until I realize my true intentions. It’s the same with visual data. For example, you may see on a graph that the crime rate in Philadelphia is rising every year, but no action can be taken from this observation alone. If you search for the reason, the “why” factor, there is a strong possibility a solution can be created to reduce the crime rate.
-
In the article “Stephen Few on Data Visualization: 8 Core Principles”, the most important core principle is simplify. Simplifying is important because it enables you to present the most important aspects of the data while eliminating distractions. Simplicity makes a visualization easy to understand while still keeping it engaging. Without this principle, the other core principles become difficult to apply, leaving the reader confused.
-