Powering Real-Time Predictions at Fundera



Introduction



We have designed and implemented a flexible, scalable machine learning framework that allows a) rapid development of models offline, b) easy and reliable deployment of models into production, and c) support for multiple frameworks and algorithms. The framework provides a microservice that serves near real-time predictions from multiple models, and models can be updated at a custom frequency. The prediction service is scalable, so it is robust to hardware failure and can grow to support more traffic. We also have extensive monitoring and alerting in place to track both model and prediction service performance.

General

  1. Generalized framework that makes it easy to create a new model by writing a JSON config and sub-classing a base class. The subclass defines methods that specify how to fetch the data, how to identify positive and negative examples, etc. Additional methods can be used to generate custom evaluations on both the training and testing data (see the class sketch after this list).
  2. The framework currently supports the Vowpal Wabbit (VW) and XGBoost machine learning frameworks under the hood.
  3. Features are enumerated in a JSON config file (a sample config follows this list). The config file also allows:
    1. Feature buckets: specifying bucket boundaries to discretize a feature.
    2. Specifying feature defaults for missing values: these can be absolute values, or the max, min, median, or mean inferred from the training data.
    3. Quadratic/cross features: the ability to multiply two features to create a new feature. This includes crossing bucketed features.
    4. Excluding non-cross/quadratic features: used when we want a feature to appear only inside a cross feature and not as a feature on its own. For context, VW supports quadratic features but does not allow this exclusion.
    5. The ability to specify VW namespaces and other command-line flags.
  4. Dynamic config: features, thresholds, and values calculated from the training data that are then used during testing as well as for predictions.
  5. Dynamic class generation that, at runtime, combines a machine learning framework with the algorithm to use from it.
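
To make the subclassing model concrete, here is a minimal sketch of what a model definition might look like. The class, method, and field names below are hypothetical illustrations, not the framework's actual API:

    import json

    class BaseModel:
        """Stand-in for the framework's base class (hypothetical)."""

        def __init__(self, config_path):
            with open(config_path) as f:
                self.config = json.load(f)

    class ConversionModel(BaseModel):
        """A new model: a JSON config plus a few overridden hooks."""

        def fetch_data(self, start_date, end_date):
            # How to pull raw rows for a training or testing window,
            # e.g. from Postgres or Snowflake.
            return []

        def is_positive(self, row):
            # How to label a row as a positive or negative example.
            return row.get("converted") == 1

        def custom_evaluation(self, labels, scores):
            # Optional hook for extra metrics on the training and
            # testing data.
            pass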
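
And here is a sketch of the kind of JSON config described above; every feature name and key is invented for illustration:

    {
      "features": [
        {"name": "loan_amount", "buckets": [5000, 25000, 100000]},
        {"name": "credit_score", "default": "median"},
        {"name": "time_in_business", "default": 0}
      ],
      "cross_features": [
        {"cross": ["loan_amount", "credit_score"], "exclude_singles": true}
      ],
      "vw": {
        "namespaces": {"a": ["loan_amount", "credit_score"]},
        "flags": "--loss_function logistic"
      }
    }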

Model Development


  1. The framework trains models using various algorithms, such as linear and logistic regression, and calculates metrics like AUC and R^2 on the testing data. A training and testing run is launched with a runner script, where the model, model version, framework, and algorithm are specified through flags (a sample invocation follows this list).
  2. We are able to pull data for training and testing from both Postgres and Snowflake. The training and testing time periods we specify determine the data split.
  3. Local caching of SQL results and of processed training and testing instances, both of which are very useful when developing a model.
  4. Feature statistics: a flag on the runner script produces summary statistics on features for both the training and testing data.
  5. Feature selection: the ability to specify combinations of features to cycle through to determine the best feature combination.
  6. Ability to override the config from the command line on the fly.
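
For concreteness, a training and testing run might be launched like this; the script name and every flag here are hypothetical stand-ins for our actual ones:

    python runner.py --model=conversion --model_version=3 \
        --framework=vw --algorithm=logistic \
        --train_start=2017-01-01 --train_end=2017-06-30 \
        --test_start=2017-07-01 --test_end=2017-07-31 \
        --feature_stats \
        --config_override='features.credit_score.default=mean'

The last two flags illustrate the feature statistics and on-the-fly config overrides described in items 4 and 6 above.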

Production / Real-Time Prediction Service

  1. We built a Flask-based microservice with an endpoint for each model and version combination. The main website application hits an endpoint with an id and gets back a list of recommendations ranked by score. We also return metadata like the model update time and the feature values used to calculate the scores (a sketch of such an endpoint follows this list).
  2. On the website application (client) side, we store all of this information in a Postgres table.
  3. Since VW uses a daemon for inference, we create a systemd service for each model and version combination in production (a sample unit file follows this list). Support for multiple versions allows A/B testing of different versions of the same model.
  4. The Flask server also runs as a systemd service.
  5. Models are trained daily so that they use the most recent data. At the completion of a run, we upload the model and its config file to S3, from which they can be picked up by other servers.
  6. SLA of ~1 second for the current model.
  7. We use Datadog extensively for monitoring and alerting on both microservice and model performance.
  8. Bootstrap script: responsible for setting up the machine learning framework on a new EC2 instance. The script can check out and install any git branch of the machine learning repository.
  9. Deploy script: responsible for stopping and restarting all the systemd services. This script can also deploy any git branch, which is very useful for testing. We use ship-it for regular deploys.
  10. Scalability: a load balancer distributes calls between multiple EC2 instances of the service. We also have CloudWatch alerting set up to notify us when the load balancer determines that a box is down.
  11. We store daily trained models and their corresponding configs in the cloud. The setup script looks for the latest model and config and downloads them.
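
As an illustration, a stripped-down version of such an endpoint might look like the sketch below. The route shape, field names, and stub helpers are assumptions for illustration, not our production code; we register one endpoint per model and version combination, which the sketch parameterizes for brevity:

    from datetime import datetime, timezone
    from flask import Flask, jsonify

    app = Flask(__name__)

    def build_features(entity_id):
        # Hypothetical stub: look up the feature values for this id.
        return {"loan_amount": 50000, "credit_score": 700}

    def score_candidates(model, version, features):
        # Hypothetical stub: in production this step would call the
        # model (e.g. the VW daemon for that model/version) per candidate.
        return [{"id": 7, "score": 0.91}, {"id": 4, "score": 0.42}]

    @app.route("/predict/<model>/<version>/<int:entity_id>")
    def predict(model, version, entity_id):
        features = build_features(entity_id)
        ranked = sorted(score_candidates(model, version, features),
                        key=lambda c: c["score"], reverse=True)
        return jsonify({
            "recommendations": ranked,
            "model_updated_at": datetime.now(timezone.utc).isoformat(),
            "features": features,
        })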
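
Likewise, a hypothetical systemd unit for one VW daemon could look like the following; the path, port, and model file are invented, while --daemon, --port, -i, and -t are real VW options:

    [Unit]
    Description=VW prediction daemon for conversion model v3
    After=network.target

    [Service]
    # VW daemonizes itself when started with --daemon.
    Type=forking
    ExecStart=/usr/local/bin/vw --daemon --port 26542 \
        -i /opt/models/conversion_v3.model -t
    Restart=always

    [Install]
    WantedBy=multi-user.target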

Future


  1. Rolling restarts: currently our traffic needs are met with a single EC2 instance, but as we deploy more models into production we will need more instances, and model and code updates will then require rolling restarts on those boxes.
  2. Support for more machine learning libraries. Currently we only use Vowpal Wabbit and XGBoost; we would like to extend support to libraries that let us build more complex models, such as neural networks.
  3. We are looking into using the outputs of some models as features for other models to power real-time predictions. We have a lot of missing data at inference time, and models that predict that data should improve the predictive power of the higher-level models.
  4. Ability to pull data from databases other than Postgres during inference.









