wefe.RIPA

class wefe.RIPA

An implementation of the Relational Inner Product Association (RIPA) Test, proposed by [1].

RIPA is most interpretable with a single pair of target words, although this metric returns the values for every attribute averaged across all base pairs.

NOTE: As the variance tends to be high depending on the base pair chosen, it is recommended that only a single pair of target words is used as input to the function.

This metric follows these steps:

  1. Take as input the word vectors for a pair of target word sets and an attribute set.

     Example: Target Set A (Masculine), Target Set B (Feminine), Attribute Set (Career).

  2. Calculate the difference between the word vectors of a pair of target set words.

  3. Calculate the dot product between this difference and the attribute word vector.

  4. Return the average RIPA score across all attribute words, and the average RIPA score for each target pair for the attribute set.
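A minimal sketch of the per-pair computation in steps 2 and 3, using plain NumPy rather than the WEFE internals; the unit-normalization of the base-pair difference follows the original paper, and the random vectors are placeholders for real embeddings:

import numpy as np

def ripa_score(target_a, target_b, attribute):
    """RIPA of one attribute vector with respect to a single base pair (a, b)."""
    relation = target_a - target_b                  # step 2: difference of the target pair
    relation = relation / np.linalg.norm(relation)  # unit relation vector, as in the paper
    return float(np.dot(attribute, relation))       # step 3: dot product with the attribute

# Toy 300-dimensional vectors standing in for embeddings of ("he", "she") and "engineer".
he, she, engineer = np.random.rand(3, 300)
print(ripa_score(he, she, engineer))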

References

[1]: Ethayarajh, K., Duvenaud, D., & Hirst, G. (2019, July). Understanding Undesirable Word Embedding Associations.
__init__(*args, **kwargs)
metric_name: str = 'Relational Inner Product Association'
metric_short_name: str = 'RIPA'
metric_template: Tuple[Union[int, str], Union[int, str]] = (2, 1)
run_query(query: wefe.query.Query, word_embedding: wefe.word_embedding_model.WordEmbeddingModel, lost_vocabulary_threshold: float = 0.2, preprocessor_args: Dict[str, Optional[Union[bool, str, Callable]]] = {'lowercase': False, 'preprocessor': None, 'strip_accents': False}, secondary_preprocessor_args: Optional[Dict[str, Optional[Union[bool, str, Callable]]]] = None, warn_not_found_words: bool = False, *args: Any, **kwargs: Any) → Dict[str, Any]

Calculate the RIPA metric over the provided parameters.

Parameters
query : Query

A Query object that contains the target and attribute word sets to be tested.

word_embedding : WordEmbeddingModel

A WordEmbeddingModel object that contains a pretrained word embedding model.

lost_vocabulary_threshold : float, optional

Specifies the proportional limit of words that any set of the query is allowed to lose when transforming its words into embeddings. If any set of the query loses proportionally more words than this limit, the result values will be np.nan. By default 0.2.

preprocessor_args : PreprocessorArgs, optional

A dictionary with the arguments that specify how the pre-processing of the words will be done. The possible arguments for the function are:

  • lowercase: bool. Indicates if the words are transformed to lowercase.

  • strip_accents: bool, {'ascii', 'unicode'}. Specifies if the accents of the words are eliminated. The stripping type can be specified. True uses 'unicode' by default.

  • preprocessor: Callable. It receives a function that operates on each word. In the case of specifying a function, it overrides the default preprocessor (i.e., the previous options stop working).

By default {'strip_accents': False, 'lowercase': False, 'preprocessor': None}. See the example configuration after this parameter list.

secondary_preprocessor_args : PreprocessorArgs, optional

A dictionary with the arguments that specify how the secondary pre-processing of the words will be done, by default None. When a word is not found in the model's vocabulary (using the default preprocessor or the one specified in preprocessor_args), the function performs a second search for that word using the preprocessor specified in this parameter.

warn_not_found_words : bool, optional

Specifies if the function will warn (in the logger) the words that were not found in the model's vocabulary, by default False.
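As an illustration of the two preprocessing parameters above (an example configuration, not a recommendation), the primary lookup below uses the words as-is, while the secondary lookup retries any missing word in lowercase with accents stripped:

preprocessor_args = {
    "lowercase": False,
    "strip_accents": False,
    "preprocessor": None,
}
# Fallback applied only to words not found with the primary settings.
secondary_preprocessor_args = {
    "lowercase": True,
    "strip_accents": True,
    "preprocessor": None,
}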

Returns
Dict[str, Any]

A dictionary with the query name, the resulting score of the metric, and other scores.
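Examples

A hedged end-to-end usage sketch. Import paths follow the signatures shown on this page and may differ across WEFE versions; the gensim model name and the word lists are only illustrative.

import gensim.downloader as api

from wefe.metrics import RIPA
from wefe.query import Query
from wefe.word_embedding_model import WordEmbeddingModel

# Load any gensim KeyedVectors model and wrap it for WEFE.
model = WordEmbeddingModel(api.load("glove-wiki-gigaword-100"), "glove-100")

# A single base pair (as recommended above) and one attribute set.
query = Query(
    target_sets=[["he"], ["she"]],
    attribute_sets=[["engineer", "scientist", "programmer"]],
    target_sets_names=["Male terms", "Female terms"],
    attribute_sets_names=["Career"],
)

result = RIPA().run_query(query, model)
print(result)  # dictionary with the query name, the resulting score, and other scores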