Published by Addison-Wesley Professional (February 27, 2019) © 2019

Andrew Kelleher | Adam Kelleher
    VitalSource eTextbook (Lifetime access)
    €32,99
    ISBN-13: 9780134116563

    Machine Learning in Production: Developing and Optimizing Data Science Workflows and Applications, 1st Edition

    Language: English

    Foundational Hands-On Skills for Succeeding with Real Data Science Projects

    “This pragmatic book introduces both machine learning and data science, bridging the gap between data scientist and engineer, and helping you bring these techniques into production. It helps ensure that your efforts actually solve your problem, and offers unique coverage of real-world optimization in production settings.”

    –From the Foreword by Paul Dix, series editor

    Machine Learning in Production is a crash course in data science and machine learning for people who need to solve real-world problems in production environments. Written for technically competent “accidental data scientists” with more curiosity and ambition than formal training, this complete and rigorous introduction stresses practice, not theory.

     

    Building on agile principles, Andrew and Adam Kelleher show how to quickly deliver significant value in production, resisting overhyped tools and unnecessary complexity. Drawing on their extensive experience, they help you ask useful questions and then execute production projects from start to finish.

     

    The authors show just how much information you can glean with straightforward queries, aggregations, and visualizations, and they teach indispensable error analysis methods to avoid costly mistakes. They turn to workhorse machine learning techniques such as linear regression, classification, clustering, and Bayesian inference, helping you choose the right algorithm for each production problem. Their concluding section on hardware, infrastructure, and distributed systems offers unique and invaluable guidance on optimization in production environments.

     

    Andrew and Adam always focus on what matters in production: solving the problems that offer the highest return on investment, using the simplest, lowest-risk approaches that work.

    • Leverage agile principles to maximize development efficiency in production projects
    • Learn from practical Python code examples and visualizations that bring essential algorithmic concepts to life
    • Start with simple heuristics and improve them as your data pipeline matures
    • Avoid bad conclusions by implementing foundational error analysis techniques
    • Communicate your results with basic data visualization techniques
    • Master basic machine learning techniques, starting with linear regression and random forests
    • Perform classification and clustering on both vector and graph data
    • Learn the basics of graphical models and Bayesian inference
    • Understand correlation and causation in machine learning models
    • Explore overfitting, model capacity, and other advanced machine learning concepts
    • Make informed architectural decisions about storage, data transfer, computation, and communication

    Register your book for convenient access to downloads, updates, and/or corrections as they become available. See inside book for details.

    Table of Contents

    Foreword xv

    Preface xvii

    About the Authors xxi

     

    Part I: Principles of Framing 1

     

    Chapter 1: The Role of the Data Scientist 3

    1.1 Introduction 3

    1.2 The Role of the Data Scientist 3

    1.3 Conclusion 6

     

    Chapter 2: Project Workflow 7

    2.1 Introduction 7

    2.2 The Data Team Context 7

    2.3 Agile Development and the Product Focus 10

    2.4 Conclusion 15

     

    Chapter 3: Quantifying Error 17

    3.1 Introduction 17

    3.2 Quantifying Error in Measured Values 17

    3.3 Sampling Error 19

    3.4 Error Propagation 21

    3.5 Conclusion 23

     

    Chapter 4: Data Encoding and Preprocessing 25

    4.1 Introduction 25

    4.2 Simple Text Preprocessing 26

    4.3 Information Loss 33

    4.4 Conclusion 34

     

    Chapter 5: Hypothesis Testing 37

    5.1 Introduction 37

    5.2 What Is a Hypothesis? 37

    5.3 Types of Errors 39

    5.4 P-values and Confidence Intervals 40

    5.5 Multiple Testing and “P-hacking” 41

    5.6 An Example 42

    5.7 Planning and Context 43

    5.8 Conclusion 44

     

    Chapter 6: Data Visualization 45

    6.1 Introduction 45

    6.2 Distributions and Summary Statistics 45

    6.3 Time-Series Plots 58

    6.4 Graph Visualization 61

    6.5 Conclusion 64

     

    Part II: Algorithms and Architectures 67

     

    Chapter 7: Introduction to Algorithms and Architectures 69

    7.1 Introduction 69

    7.2 Architectures 70

    7.3 Models 74

    7.4 Conclusion 77

     

    Chapter 8: Comparison 79

    8.1 Introduction 79

    8.2 Jaccard Distance 79

    8.3 MinHash 82

    8.4 Cosine Similarity 84

    8.5 Mahalanobis Distance 86

    8.6 Conclusion 88

     

    Chapter 9: Regression 89

    9.1 Introduction 89

    9.2 Linear Least Squares 96

    9.3 Nonlinear Regression with Linear Regression 105

    9.4 Random Forest 109

    9.5 Conclusion 115

     

    Chapter 10: Classification and Clustering 117

    10.1 Introduction 117

    10.2 Logistic Regression 118

    10.3 Bayesian Inference, Naive Bayes 122

    10.4 K-Means 125

    10.5 Leading Eigenvalue 128

    10.6 Greedy Louvain 130

    10.7 Nearest Neighbors 131

    10.8 Conclusion 133

     

    Chapter 11: Bayesian Networks 135

    11.1 Introduction 135

    11.2 Causal Graphs, Conditional Independence, and Markovity 136

    11.3 D-separation and the Markov Property 138

    11.4 Causal Graphs as Bayesian Networks 142

    11.5 Fitting Models 143

    11.6 Conclusion 147

     

    Chapter 12: Dimensional Reduction and Latent Variable Models 149

    12.1 Introduction 149

    12.2 Priors 149

    12.3 Factor Analysis 151

    12.4 Principal Components Analysis 152

    12.5 Independent Component Analysis 154

    12.6 Latent Dirichlet Allocation 159

    12.7 Conclusion 165

     

    Chapter 13: Causal Inference 167

    13.1 Introduction 167

    13.2 Experiments 168

    13.3 Observation: An Example 171

    13.4 Controlling to Block Non-causal Paths 177

    13.5 Machine-Learning Estimators 182

    13.6 Conclusion 187

     

    Chapter 14: Advanced Machine Learning 189

    14.1 Introduction 189

    14.2 Optimization 189

    14.3 Neural Networks 191

    14.4 Conclusion 201

     

    Part III: Bottlenecks and Optimizations 203

     

    Chapter 15: Hardware Fundamentals 205

    15.1 Introduction 205

    15.2 Random Access Memory 205

    15.3 Nonvolatile/Persistent Storage 206

    15.4 Throughput 208

    15.5 Processors 209

    15.6 Conclusion 212

     

    Chapter 16: Software Fundamentals 213

    16.1 Introduction 213

    16.2 Paging 213

    16.3 Indexing 214

    16.4 Granularity 214

    16.5 Robustness 216

    16.6 Extract, Transfer/Transform, Load 216

    16.7 Conclusion 216

     

    Chapter 17: Software Architecture 217

    17.1 Introduction 217

    17.2 Client-Server Architecture 217

    17.3 N-tier/Service-Oriented Architecture 218

    17.4 Microservices 220

    17.5 Monolith 220

    17.6 Practical Cases (Mix-and-Match Architectures) 221

    17.7 Conclusion 221

     

    Chapter 18: The CAP Theorem 223

    18.1 Introduction 223

    18.2 Consistency/Concurrency 223

    18.3 Availability 225

    18.4 Partition Tolerance 231

    18.5 Conclusion 232

     

    Chapter 19: Logical Network Topological Nodes 233

    19.1 Introduction 233

    19.2 Network Diagrams 233

    19.3 Load Balancing 234

    19.4 Caches 235

    19.5 Databases 238

    19.6 Queues 241

    19.7 Conclusion 243

     

    Bibliography 245

     

    Index 247