Additional tips; turning in report

(updated 11am June 12) Where to turn in your paper: You may put it under the door of my office (room 1315, 3rd floor), or in my faculty mailbox. Submit the PDF version on TritonEd, in the Turnitin link.

Improving the course:  I think I have approximately the right mix of topics in Big Data Analytics, but there are other areas that I would like to improve. For example, how can I better smooth the workload in the projects, to reduce the end-of-year crunch? Here is a BDA End-of-year questionnaire that asks a series of questions about what I should change.

Thanks for taking the class – I certainly enjoy teaching it.  Enjoy graduation! Enjoy life after the university!

===========================

We are finished with formal classes, so I will post here some additional advice for your projects. Most of this is based on things I noticed either in interim reports or in the final presentations on June 6. It’s going to take a few days to write and edit all of these notes, so keep checking this page through Saturday.

Some of this advice was written with one specific project or team in mind, but all of it applies to multiple projects.

  1. Don’t use loops in R. Most computer languages make heavy use of FOR loops, but idiomatic R avoids almost all of them: vectorized code runs faster and is easier to write and debug. In some cases, R code that avoids loops will run 100x faster (literally). Here is an example of how to rewrite code without needing a loop, taken from one of this year’s projects: BDA18 Avoid loops in R. A small sketch also appears after this list.
  2. Don’t use CSV data when working with large datasets. CSV (comma-separated value) files have become a lingua franca for exchanging data among different computer languages and environments. For that purpose, they are decent. But they are very inefficient, in terms of both speed and memory. One team mentioned that they were running into memory limits, but the problem was most likely their keeping CSV files around! Solution: use read.csv to get the data into R once. After that, store your data as R objects (data frames or some other kind). If you want to store intermediate results in a file, create the object inside R, then save it as an R object (file name ends in .RData). When you want it again, use RStudio’s File/Open File command to load it. No additional conversion will be needed. A sketch of the save/load workflow appears after this list.
    I will add something about this to the notes on handling big data: Dealing with the “big” in Big Data.
  3. Fix unbalanced accuracy in confusion matrices. Yesterday, I noticed several confusion matrices with much higher accuracy for one class than the other. Reminder: we spent 1.5 classes on this topic, and there are multiple solutions. It’s usually due to having much more data of one type than the other (see the downsampling sketch after this list). See:
    1. Week 8 assignments and notes
  4. Good graphics. Because graphics are a concise way to communicate, even with non-specialists, I recommend having at least one superb image in every report. (See BDA18 Writing your final report.) I will try to post some examples from your projects, with comments on how to make them even better, on Friday.
  5. Revised discussion on how to write a good final report. Writing your final report June 8
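Here is a minimal sketch of item 1, on made-up data (not from any team’s project). Note that the loop recomputes min(x) and max(x) on every pass, which is exactly the kind of code that runs 100x slower than the vectorized version:

```r
# Toy data: random numbers to rescale to the interval [0, 1].
x <- runif(1e4)

# Loop version: slow and verbose.
scaled_loop <- numeric(length(x))
for (i in seq_along(x)) {
  scaled_loop[i] <- (x[i] - min(x)) / (max(x) - min(x))
}

# Vectorized version: one line, dramatically faster.
scaled_vec <- (x - min(x)) / (max(x) - min(x))

all.equal(scaled_loop, scaled_vec)  # TRUE -- identical results
```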
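For item 2, the save/load workflow looks like this in code (the file names are hypothetical; RStudio’s menus do the same thing):

```r
# Parse the CSV exactly once.
ads <- read.csv("my-data.csv")   # hypothetical file name

# Store the parsed data frame as a compact binary .RData file.
save(ads, file = "ads.RData")

# In a later session, reload it instantly -- no re-parsing needed.
load("ads.RData")   # restores the object 'ads'
```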
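And for item 3, one standard fix (among the several in the Week 8 notes) is downsampling the majority class. A minimal sketch on simulated data; your data frame and column names will differ:

```r
set.seed(1)
# Simulated data frame: about 10% positives, 90% negatives.
df <- data.frame(y = rbinom(1000, 1, 0.1), x = rnorm(1000))

minority <- df[df$y == 1, ]
majority <- df[df$y == 0, ]

# Draw a random sample of negatives the same size as the positives.
majority_sample <- majority[sample(nrow(majority), nrow(minority)), ]
balanced <- rbind(minority, majority_sample)

table(balanced$y)  # now balanced, so accuracy is comparable for both classes
```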

Some discussion of data mining is nonsense – like everything else on the internet

Criticizing an article that appeared on Data Science Central, about logistic regression.

I recently came across a Twitter discussion of an article on a site called Data Science Central. The article was Why Logistic Regression should be the last thing you learn when becoming a Data Scientist. [TL;DR Don’t believe the headline!]

The article purports to explain that logistic regression is a bad technique, and nobody should use it. The article is nonsense. I critiqued it in the comments, but I’m not sure the editor will allow my comment to stand. Data Science Central appears to be a one-man site, with 90% of the material written by Vincent Granville, and it’s hard not to conclude that he made a serious mistake in writing his attack on logistic regression.

So here is my response to his article. For my students – if you read something about Data Analytics that does not make sense to you, or contradicts something you have been taught, be suspicious. You can see some of the Twitter criticism here.

I am sorry to report that this article is nonsense. It’s not the conclusion – use it or don’t use it, there are now many alternatives to logistic regression (which in the machine learning world is a “linear classifier”).

The difficulty is that most of the discussion is Just Wrong. Analytically incorrect. No correspondence to the usual definitions, use, and interpretation of logistic regression.

  • The diagram is incomprehensible. If it is intended to be the standard representation of logistic regression, it has multiple errors.
    • LR maps from -infinity to +infinity (on the X scale), not from 0 to 1.
    • The y axis is correct.
    • The colors and the points show the curve (called the logistic curve or similar) as the boundary between positive and negative outcomes, for points defined by two independent variables (shown as x and y). That is not at all what the curve means. See e.g. https://en.wikipedia.org/wiki/File:Logistic-curve.svg (and the plotting sketch after this list).
  • “There are hundreds of types of logistic regression.” Maybe in a world with a different definition, but the standard definition does not include Poisson models. Of course, as always, there are a variety of possible algorithms that can be used to solve a logistic model.
    • From https://www.medcalc.org/manual/logistic_regression.php “Logistic regression is a statistical method for analyzing a dataset in which there are one or more independent variables that determine an outcome. The outcome is measured with a dichotomous variable (in which there are only two possible outcomes).
      In logistic regression, the dependent variable is binary or dichotomous, i.e. it only contains data coded as 1 (TRUE, success, pregnant, etc.) or 0 (FALSE, failure, non-pregnant, etc.).”
  • “If you transform your variable you can instead use linear regression.” Yes, and that is how logistic regressions are usually solved! That is, LRs are solved by transforming the variables (using a logit transform) and solving the resulting equation, which is linear in the variables. In practice, many other transformation equations can be used instead, but the logit transform has a nice interpretation:
    • logit(p) = log( p / (1 - p) ) = b0 + b1x1 + … + bkxk, where p is the probability of the positive outcome and the b’s are the fitted coefficients.
  • “Coefficients are not easy to interpret.” I suppose that easy is in the eye of the beholder, but there is a standard and straightforward interpretation.
    • “The logistic regression coefficients show the change in the predicted logged odds of having the characteristic of interest for a one-unit change in the independent variables.” It does take a few examples to figure out what “log odds” means, unless you do a lot of horse racing. But after that, it is a clever and powerful way to think about changes in the probability of an outcome.
    • The (corrected) version of the logistic curve corresponds to an equivalent way to interpret the coefficient values. Two short sketches follow: one plotting the correct curve, one showing the coefficient interpretation.
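First, the correct shape of the curve, in a few lines of R. The x axis is the linear predictor, which runs over the whole real line; the y axis is a probability between 0 and 1:

```r
# Plot the standard logistic curve.
curve(1 / (1 + exp(-x)), from = -6, to = 6,
      xlab = "linear predictor (runs from -Inf to +Inf)",
      ylab = "probability (0 to 1)",
      main = "The logistic curve")
abline(h = c(0, 1), lty = 2)  # horizontal asymptotes at 0 and 1
```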
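Second, the standard coefficient interpretation, on simulated data (the numbers here are invented purely for illustration):

```r
set.seed(42)
x <- rnorm(500)
p <- 1 / (1 + exp(-(-1 + 0.8 * x)))  # true intercept -1, true slope 0.8
y <- rbinom(500, 1, p)

fit <- glm(y ~ x, family = binomial)
coef(fit)       # each coefficient: change in log odds per one-unit change in x
exp(coef(fit))  # exponentiated: odds ratios
# exp(0.8) is about 2.2, so a one-unit increase in x
# roughly doubles the odds of the outcome.
```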

There certainly are some mild criticisms of logistic regression, but in situations where a linear model is reasonably accurate, it is a good quick model to try. Of course, if the situation is highly nonlinear, a tree model is going to be better. Furthermore, the particular logistic equation generally used should not be considered sacred.

My interpretation is that this article is an attack on a straw man, an undefined and radically unconventional model that is here being called “logistic regression.” It would be a shame if anyone took it seriously. We will see if the author/site manager leaves this comment up. If he does, I invite him to respond and explain the meaning of his diagram.

By the way, I agree with much of the discussion on the MedCalc web site I’m quoting, but not all of it.


Resources on data manipulation

These resources are for May 23; some are elsewhere on this site.


Text mining homework: speeding up calculations

A few hours ago I received an email from Emily about the farm advertisement problem due on May 11. I wrote her back. But her question raises general issues for many projects. I’m sure other students also had similar problems with Friday’s homework.

In response, I just wrote BDA18 Memo: My program runs too slowly v 1.1.
New version! BDA18 My program runs too slowly v 1.2. The memo includes most of my specific suggestions about the May 11 homework. (This may be too late for some students, but most of the ideas were also discussed briefly in class on Monday or Wednesday.)

There are probably multiple typos and errors in the memo. Please send me corrections by email, for class credit.

I am working on the assignment due tomorrow and have encountered a problem. When reducing the TF-IDF matrix to 20 concepts, RStudio always stops working (as indicated by the little ‘Stop’ sign in the console). I’m thinking this is because the farm-ads.csv dataset is too large. Without reducing the concepts, I am unable to move forward with the random forest part of the assignment. I am wondering if there is a solution to this problem or a way to work around it.
Apologies in advance for not approaching you with this question earlier. It’s been a very hectic week!
Thanks for your help,
Emily
By the way, I’m 98% sure that in fact RStudio did not “stop working.” It was probably still cranking away. Check the Activity Monitor application on your computer to be sure.
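If you are stuck at the concept-reduction step, the usual culprit is running a dense svd() on a large document-term matrix. Here is a minimal sketch of one workaround (not necessarily what the memo recommends): keep the matrix sparse and use a truncated SVD from the irlba package. The matrix below is a random stand-in for your real TF-IDF matrix:

```r
library(Matrix)
library(irlba)

# Random sparse stand-in for a real TF-IDF document-term matrix.
tfidf <- rsparsematrix(4000, 20000, density = 0.001)

# Truncated SVD: compute only the first 20 singular vectors ("concepts"),
# instead of the full decomposition that svd() would attempt.
s <- irlba(tfidf, nv = 20)

# Each document represented in the 20-dimensional concept space:
concepts <- s$u %*% diag(s$d)
dim(concepts)  # 4000 x 20 -- small enough for a random forest
```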

Text-mining resources for projects

(This page will be augmented the week of May 6.)

Many projects involve text mining, and they need to go far beyond Chapter 20 and the homework assignments for the course. I have put together specific resources on text mining, including examples, discussions, and R code for particular purposes.
Text Mining Material. This page is mandatory for anyone doing a TM project. Finding the right guides can save huge amounts of time and frustration.

Here is a page on web scraping. Not all text-mining projects need to scrape their own data, but it is the only way to get the latest information: Scraping Twitter and other web sources. A bare-bones example follows.
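The sketch below uses the rvest package; the URL is a placeholder, and a real page would need CSS selectors matched to its layout:

```r
library(rvest)

# Placeholder URL -- substitute the page you actually want to scrape.
page <- read_html("https://example.com")

# Extract the text of every paragraph node on the page.
paragraphs <- html_text(html_nodes(page, "p"))
head(paragraphs)
```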

Lecture note supplements

From time to time I write guides/tutorials on topics in lectures that people find confusing. Taken together, they add up to a supplemental textbook.

Homework week 4: Linear regression

This week we have 3 learning goals. It will take the entire week to do them.

  1. Linear regression for prediction. How it differs from hypothesis testing.
  2. Showing how to use R instead of, or in conjunction with, Rattle.
  3. Many specific tricks and issues that come up with linear regression, such as word equations and creating interaction variables (see the sketch after this list).
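For goal 3, an interaction variable in R is a one-character change in the model formula. A minimal sketch on simulated data (the variable names are invented):

```r
set.seed(3)
# Simulated housing data: price depends on size, and the size effect
# differs depending on whether the house is near a park.
homes <- data.frame(
  sqft      = rnorm(100, 1500, 300),
  near_park = sample(c(0, 1), 100, replace = TRUE)
)
homes$price <- 100 + 0.2 * homes$sqft +
               0.1 * homes$sqft * homes$near_park + rnorm(100, 0, 40)

# 'sqft * near_park' expands to both main effects plus their interaction.
fit <- lm(price ~ sqft * near_park, data = homes)
summary(fit)  # the 'sqft:near_park' row is the interaction term
```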

Please see the attached document, which includes the readings, specific homework due Friday, and supplemental information about various useful ideas and techniques.

BDA18 Week 4 Readings + assign

You can get data files here: https://bda2020.wordpress.com/data-sets/

There is nothing due on Monday.


Misc. announcements: homework, Sony projects, tutorial on R, etc.

Several notices about events tomorrow and later.

Student Question: What cutoff in homework problem 10.4i?

Dana writes:

Hey everyone!

I have a question about exercise 10.4 i. In Rattle we use the default settings and cannot change the cutoff, so are we supposed to guess the cutoff for the most accurate classification? Or are we supposed to get it some other way?
Response: Good question. I have 3 levels of answer. At the simplest, should you push the cutoff up or down from 50%? (Be sure to specify which way is which – sometimes it can be ambiguous).
At the next level, Rattle produces an ROC curve, as we did with the airplane data on Wednesday. See textbook p 131. The ROC curve is traced out by moving the threshold all the way from 0 to 1.
Third level: Soon, you will learn how to grab the code produced by Rattle and run it in RStudio. There you can change the cutoff parameter and calculate different confusion matrices. For example, the function confusion.matrix(obs, pred, threshold = 0.5) accepts any threshold, so you can experiment by trial and error. Of course, there are more specialized functions that can come up with the optimal answer. (A base-R sketch follows.)
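Here is a minimal base-R version of that experiment, on simulated predictions (no extra packages; obs and probs are stand-ins for your model’s output):

```r
set.seed(7)
obs   <- rbinom(200, 1, 0.3)  # stand-in for the true 0/1 labels
probs <- runif(200)           # stand-in for predicted probabilities

# Confusion matrix at one threshold:
pred <- as.integer(probs >= 0.35)
table(observed = obs, predicted = pred)

# Compare several thresholds at once (no loop needed):
lapply(c(0.25, 0.50, 0.75), function(t)
  table(observed = obs, predicted = as.integer(probs >= t)))
```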

Sony Playstation Network project

Anyone considering the Sony Playstation Network project, read this memo: BDA18 Sony PSN project update 4-19. It points you to some data, and requests a revised proposal as soon as possible. Or, switch to another topic. Make comments on this page if you are looking for a partner, or on the Final paper ‘dating site’.

R Tutorial Friday in room 3201 at 1pm

Feiyang will provide several resources for learning R at the level we need it in BDA. She will demonstrate how functions work, and illustrate function use with examples from the textbook. Other topics will include using R Help, and good cheat sheets. This will all be useful next week when we move away from Rattle and toward straight R.

Next week: Linear continuous models (Linear Regression)

Next week, we will look at a method that everyone has seen in a different context, OLS linear regression. I don’t like the textbook treatment of the topic, so I’m assigning a supplemental book. Please read:
Gareth James et al., An Introduction to Statistical Learning with Applications in R. (Supplementary textbook.) It’s available from SpringerLink. Review section 3.2, which should be familiar, and read section 3.3 on variations on the basic linear model. Also read the DMBA main textbook, sections 6.1 and 6.2 only.
A more detailed assignment will be posted Saturday, and nothing is due Sunday.