Recent papers from the NIPS 2015 workshop on feature extraction suggest that representation learning with "supervised coupled" methods (such as the training of supervised deep neural networks) can significantly improve classification accuracy compared with unsupervised and/or uncoupled methods. Such methods jointly learn a representation function and a labeling function. For machine learning practitioners in fields with strict interpretability constraints, however, a major drawback of deep neural networks is that they are notoriously difficult to interpret. In this talk, Alex will discuss "distilled learning" -- training a classifier and extracting its outputs for use as training labels for another model -- and "dark knowledge" -- the implicit knowledge of the underlying data representation learned by a classifier. Alex will show how, together, these techniques improve classification accuracy in more readily interpretable models such as single decision trees and logistic regression learners. Finally, Alex will discuss applications in areas such as health sciences, credit decisions, and fraud detection.
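The distilled-learning idea described above can be sketched in a few lines. This is an illustrative example only, not the speaker's actual method: the dataset, model choices, and hyperparameters are assumptions, and scikit-learn is used as a convenient toolkit. A "teacher" neural network is trained on the true labels, and an interpretable "student" decision tree is then trained on the teacher's predicted labels.

```python
# Illustrative sketch of distilled learning: train a hard-to-interpret
# teacher model, then train an interpretable student on its predictions.
# Dataset and hyperparameters are arbitrary assumptions for demonstration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Teacher": a deep neural network, accurate but opaque
teacher = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                        random_state=0)
teacher.fit(X_train, y_train)

# Extract the teacher's outputs to use as training labels for the student
distilled_labels = teacher.predict(X_train)

# "Student": a single decision tree, readily interpretable
student = DecisionTreeClassifier(max_depth=5, random_state=0)
student.fit(X_train, distilled_labels)

# The student can now be evaluated against the true held-out labels
print(student.score(X_test, y_test))
```

A common refinement, mentioned in the dark-knowledge literature, is to train the student on the teacher's soft probability outputs (`predict_proba`) rather than hard labels, since the soft outputs carry more of the teacher's learned representation.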