Model Inference
ModelComponent has an inference function, which is the main entry point for model evaluation. By default, the base ModelComponent slices entities into smaller batches and calls batch_inference on each batch. The default batch size is 30; you can change it by setting the MODEL_BATCH_SIZE environment variable. To set up model inference, you only need to define a class that inherits from ModelComponent and implements batch_inference. You can also override the inference function itself if you want to customize the inference logic.
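For illustration, a minimal subclass might look like the sketch below. The class name, entity handling, and scoring logic are hypothetical, and the exact batch_inference signature and import path may differ across Wyvern versions, so treat this as a shape to adapt rather than a drop-in implementation.

```python
from typing import Any, List, Optional

# Import path is an assumption; check where ModelComponent lives in your Wyvern version.
from wyvern.components.models.model_component import ModelComponent


class ProductScoringModel(ModelComponent):  # hypothetical model class
    async def batch_inference(
        self,
        request: Any,         # the incoming Wyvern request
        entities: List[Any],  # one batch of entities (at most MODEL_BATCH_SIZE items)
        **kwargs: Any,
    ) -> List[Optional[float]]:
        # Return one score per entity; the base class's inference() slices the full
        # entity list into batches, calls this per batch, and stitches results together.
        return [0.0 for _ in entities]  # placeholder scoring
```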
Register features for the model
Here’s an example of defining the features you need for your model inference; a short sketch follows the list below. manifest_feature_names is a @cached_property (a plain @property also works, but @cached_property gives a small performance advantage) that returns a set of feature strings, each in one of the following formats:
- for Wyvern’s real-time features: realtime_feature_component_name + ":" + feature_name. The realtime_feature_component_name is the name of the RealtimeFeatureComponent you defined; by default, it is the name of your model class.
- for batch features: FEATURE_VIEW + ":" + FEATURE_NAME
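A hedged sketch of how such a property could look; the feature names, feature view, and class name here are placeholders, not Wyvern built-ins:

```python
from functools import cached_property
from typing import Set

from wyvern.components.models.model_component import ModelComponent  # assumed import path, as above


class ProductScoringModel(ModelComponent):  # continuing the hypothetical model above
    @cached_property
    def manifest_feature_names(self) -> Set[str]:
        return {
            # real-time feature: realtime_feature_component_name + ":" + feature_name
            "RealtimeProductFeature:matched_fav_brand",
            # batch feature: FEATURE_VIEW + ":" + FEATURE_NAME
            "product_fv:purchase_count_7d",
        }
```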
Define the whole model
Single Model
pipelines/product_ranking/models.py
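The original file isn't reproduced here, so the following is only a rough sketch of the shape a single-model definition could take, with batch_inference delegating each entity to an _inference_helper. The feature values and the weighted sum are placeholders, not the actual formula referenced below.

```python
from typing import Any, List, Optional

from wyvern.components.models.model_component import ModelComponent  # assumed import path


class ProductRankingModel(ModelComponent):  # hypothetical single model
    async def batch_inference(
        self,
        request: Any,
        entities: List[Any],
        **kwargs: Any,
    ) -> List[Optional[float]]:
        # Score each entity in the batch independently.
        return [self._inference_helper(request, entity) for entity in entities]

    def _inference_helper(self, request: Any, entity: Any) -> float:
        # Placeholder: look up the features registered in manifest_feature_names
        # (how computed features are read depends on your Wyvern version) and
        # combine them; the real models.py applies the formula described below.
        matched_fav_brand = 1.0   # stand-in for a real-time feature value
        purchase_count_7d = 3.0   # stand-in for a batch feature value
        return 0.7 * matched_fav_brand + 0.1 * purchase_count_7d
```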
In _inference_helper, each inference essentially applies the formula we’ve talked about, which is: