November 21, 2015

IDR 11: Results are in for first app

My algorithm appears to have achieved 97% accuracy: out of 1,000 classified examples, there were 28 false positives and 1 false negative. For my application, false negatives matter more, so another way to read this is that I "missed" only 1 of the roughly 300 examples my algorithm should have caught, which is great.
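The arithmetic behind these numbers can be double-checked with a quick sketch. This isn't the actual classifier code; the variable names are mine, and the 300 figure is the post's count of examples that should have been caught:

```python
# Counts from the post (roughly 300 actual positives implied by "1 out of 300").
total = 1000
false_positives = 28
false_negatives = 1
actual_positives = 300

true_positives = actual_positives - false_negatives          # 299 caught
true_negatives = total - actual_positives - false_positives  # 672 correctly rejected

accuracy = (true_positives + true_negatives) / total
recall = true_positives / actual_positives                   # fraction of positives caught
precision = true_positives / (true_positives + false_positives)

print(f"accuracy:  {accuracy:.3f}")   # 0.971, the reported "97%"
print(f"recall:    {recall:.3f}")     # 0.997, only 1 miss in 300
print(f"precision: {precision:.3f}")  # 0.914
```

The high recall with somewhat lower precision matches the post's framing: the algorithm rarely misses a real positive, at the cost of a couple dozen false alarms.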

Now I need to verify that there isn't anything fishy going on. Which features are having the biggest impact? Are my training and test examples as random as they should be? Am I unintentionally cheating in any way with the data?

I'm finally asking research and analysis questions.

Also, I've moved my external base of operations from a Starbucks to a Harris Teeter grocery store. They have Wi-Fi, power outlets, and college football on a TV. On top of that, the food and drinks are cheaper and healthier.
