


Syllabus for "Machine Learning for Finance" (Fall 2017)
Instructor: Igor Halperin

Week 1: Introduction
- What is Machine Learning, and how is it related to Artificial Intelligence?
- Differences between ML and statistical modeling
- Core paradigms of ML: supervised, unsupervised, and reinforcement learning
- ML in Finance: main applications
- Differences between ML in finance and ML for tech

Week 2: Foundations of Machine Learning
- Generalization and the bias-variance tradeoff
- Underfitting and overfitting
- Regularization
- Linear regression as an ML algorithm
- Probabilistic ML models: maximum likelihood and maximum a posteriori (MAP) estimation
- Hyperparameters and cross-validation

Week 3: Linear regression and classification
- Linear regression models
- The bias-variance decomposition
- Regularization
- Bayesian linear regression
- Discriminative classification models: logistic regression and Bayesian logistic regression
- Generative classification models
- Regression use case: prediction of company earnings

Week 4: Decision tree models
- Building decision trees
- Classification and Regression Trees (CART)
- Random forests
- Ensemble learning: boosted trees
- Classification use case: prediction of loan defaults with trees

Week 5: Support Vector Machines
- Statistical learning theory: learning with generalization guarantees
- Maximum margin separation
- The kernel trick
- Support Vector Machines (SVM) for classification
- SVM for regression: Support Vector Regression (SVR)
- Are SVMs suitable for large-scale ML?
- Regression use case: stock return prediction with SVR

Week 6: Feed-forward neural networks
- The perceptron
- Multi-layer (feed-forward) neural networks: universal function approximation
- Error backpropagation
- Optimization algorithms
- Deep neural networks (deep learning)

Week 7: Unsupervised learning and clustering methods
- K-means clustering
- Hierarchical clustering methods
- Clustering use case: hierarchical clustering of stocks
- Dimensionality reduction with neural networks

Week 8: Unsupervised learning
- The t-SNE data visualization method
- Dimensionality reduction with neural networks: the autoencoder
- Deep learning and automated feature generation

Week 9: Latent variable models and dimensionality reduction
- Principal Component Analysis (PCA)
- Independent Component Analysis (ICA)
- Gaussian mixture models
- Factor analysis
- Probabilistic PCA
- The Expectation-Maximization (EM) algorithm
- Dimensionality reduction use case: analysis of stock returns with PCA and ICA

Week 10: Sequence modeling with Hidden Markov Models (HMM) and Linear Dynamic Systems (LDS)
- Markov models
- Hidden Markov Models (HMM)
- Inference and learning in HMMs
- Extensions of HMMs
- State-space models and Linear Dynamic Systems
- Inference and learning in LDS
- Extensions of LDS

Week 11: Sequence modeling and Markov Decision Processes
- Recurrent Neural Networks (RNN)
- Long Short-Term Memory (LSTM) networks
- Markov Decision Processes
- Optimal control and the Bellman equation

Week 12: Reinforcement Learning with discrete actions
- The RL approach to optimal control
- The Bellman equation for the action-value function
- Q-learning

Week 13: Reinforcement Learning for option pricing and hedging
- RL with continuous actions: linear approaches
- Option pricing and hedging using dynamic programming
- Option pricing and hedging using RL

Week 14: Inverse Reinforcement Learning
- What is Inverse Reinforcement Learning (IRL)?
- IRL for portfolio optimization
- IRL for market modeling

Course projects and programming assignments:
Students are expected to complete all programming assignments and one of the course projects. (A minimal sketch of the kind of Python workflow used in the assignments follows this outline.)
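The sketch below is purely illustrative and is not part of the official course materials: it shows, under the assumption that assignments use the Python/scikit-learn stack named in the prerequisites and textbooks, the kind of supervised-learning workflow covered in Weeks 2-3 (linear regression with regularization, with the hyperparameter chosen by cross-validation). The synthetic data and all variable names are placeholders.

# Illustrative sketch only: ridge-regularized linear regression with the
# regularization strength (a hyperparameter) selected by 5-fold cross-validation.
# The data is synthetic; in an assignment it would be replaced by real features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)

# Synthetic regression problem: 5 noisy features, linear target.
n_samples, n_features = 500, 5
X = rng.normal(size=(n_samples, n_features))
true_coef = np.array([1.5, -2.0, 0.0, 0.7, 0.0])
y = X @ true_coef + 0.5 * rng.normal(size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Grid-search the regularization strength alpha by cross-validation on the
# training set, then evaluate the selected model out of sample.
grid = GridSearchCV(Ridge(), param_grid={"alpha": np.logspace(-3, 3, 13)}, cv=5)
grid.fit(X_train, y_train)

print("best alpha:", grid.best_params_["alpha"])
print("out-of-sample R^2:", grid.best_estimator_.score(X_test, y_test))

The same train/validate/test pattern recurs throughout the course; only the model class (trees, SVMs, neural networks, etc.) changes from week to week.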
Pre-requisites
- Linear algebra
- Basic calculus
- Basic probability theory
- Prior knowledge of finance is not required
- Python programming skills (including numpy, pandas, and iPython/Jupyter notebooks)

For a refresher on linear algebra and probability theory in the amount needed for this course, see e.g. Chapters 2 and 3 of Goodfellow et al., "Deep Learning" (2016).

Textbooks
No single textbook covers everything in this course. Sorted by the frequency of references in the course, the list is as follows.

Main references:
- C. Bishop, "Pattern Recognition and Machine Learning" (2006)
- I. Goodfellow, Y. Bengio, and A. Courville, "Deep Learning" (2016)
- A. Geron, "Hands-On Machine Learning with Scikit-Learn and TensorFlow" (2017)

Additional references:
- S. Marsland, "Machine Learning: An Algorithmic Perspective" (2009)
- K.P. Murphy, "Machine Learning: A Probabilistic Perspective" (2012)
- D. Barber, "Bayesian Reasoning and Machine Learning" (2012)
- N. Gershenfeld, "The Nature of Mathematical Modeling" (1999)