Yesterday sucked. I realized that one of my main sources of feature data isn't going to work correctly with the other main source of features. This required adjusting my experiments pretty significantly.
But today, I solved that problem. I corrected the experiment in a way that uses data I've been collecting for 2 years, so that's a really good thing.
I also realized that there are some things I can do to improve my model's predictive performance. I'm using the glmnet package in R, maintained by Trevor Hastie from Stanford (one of the professors from the StatLearning course and videos I mentioned in a previous post). It turns out I've been doing a couple of dumb things with glmnet, and there are alternatives that are more appropriate for my dataset. I'm going to redo some of those statistics and see if they improve (they should, and possibly significantly).
So today, there's a lot more hope to go around than there has been the past couple of days. I have a follow-up with my professor in the morning, so I'll be pushing to get a large batch of experimental results today.
On a separate thread, I'm also still digging for better features while the current round of experiments runs. ML is very sensitive to "garbage in, garbage out," and I may need to consider additional features if my technique's performance doesn't match up.
I expect to have an eventful and productive day!
Thanks for reading.