Understanding the factorisation machine formula

I am reading this tutorial about factorisation machines. I get the intuition behind it: compute the dot products between the (user/item), (item/aux feature) and (user/aux feature) pairs, and these dot products can impact y_hat. But I don't understand the formula below. I understand the first section, the bias, and the second section, the first-order weighting. What I don't understand is the last section. I understand the <v, v>: this could be the dot product of <user, item> or <item, aux feature>. But what are the x's? Would these …
Category: Data Science
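
For reference, the formula being asked about is the standard second-order factorization machine model (Rendle, 2010); the x_i are the input feature values themselves, i.e. the one-hot user/item indicators plus any auxiliary features:

    \hat{y}(\mathbf{x}) = w_0
        + \sum_{i=1}^{n} w_i x_i
        + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j

With one-hot encoded IDs, x_i x_j is non-zero only when features i and j are both active in the same sample, so for a plain (user, item) row the last sum reduces to the single dot product <v_user, v_item>.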

Negative Latent Factors in Factorization Machines

I'm studying a specific implementation of a recommendation system based on a factorization machine algorithm. For each person_id and item_id combination, I have an implicit rating of 1 or 0, depending on whether or not the user downloaded the content. In the base model, I have used only the person_id and the item_id as input variables, and I set the number of latent factors to 5. In the model output, some of the 5 latent factors associated with some person_id …
Category: Data Science
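
A minimal sketch of the setup described, assuming a plain second-order FM with one-hot person_id/item_id features (the sizes and parameter values below are illustrative placeholders, not the question's fitted model). It shows why individual latent factors can be negative: only the dot product of the two latent vectors enters the score, so two negative factors still contribute positively.

    import numpy as np

    rng = np.random.default_rng(0)

    n_persons, n_items, k = 100, 50, 5          # illustrative sizes; k = 5 latent factors

    # Parameters an FM would learn (random placeholders here):
    w0 = 0.1                                    # global bias
    w_person = rng.normal(size=n_persons)       # first-order weights per person_id
    w_item = rng.normal(size=n_items)           # first-order weights per item_id
    v_person = rng.normal(size=(n_persons, k))  # latent factors per person_id (may be negative)
    v_item = rng.normal(size=(n_items, k))      # latent factors per item_id   (may be negative)

    def fm_score(p, i):
        """Second-order FM score for one (person_id, item_id) pair.

        With one-hot person/item features the pairwise term reduces to the
        dot product of the two latent vectors, so only the product of
        factors matters, not the sign of any single factor."""
        return w0 + w_person[p] + w_item[i] + v_person[p] @ v_item[i]

    print(fm_score(3, 7))  # raw score; pass through a sigmoid for a 0/1 implicit-feedback probability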

How to use hashing trick with field-aware factorization machines

Field-aware factorization machines (FFM) have proved useful in click-through rate prediction tasks. One of their strengths comes from the hashing trick (feature hashing). When one uses the hashing trick from scikit-learn, one ends up with a sparse matrix. How can one then work with such a sparse matrix and still implement field-aware factorization machines? scikit-learn does not have an implementation of FFM. EDIT 1: I definitely want to perform feature hashing (the hashing trick) in order to be able to scale FFM …
Category: Data Science
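
One possible sketch, not from the question itself: since FFM implementations typically consume libffm-style "label field:index:value" rows, the hashing trick can be applied per feature while the field id is kept explicit. The field names, N_BINS size and helper below are illustrative assumptions.

    from sklearn.utils import murmurhash3_32

    N_BINS = 2 ** 20  # size of the hashed feature space (assumption; tune to your memory budget)

    def to_ffm_line(label, row):
        """Convert one raw sample into a libffm-style line: 'label field:index:value'.

        `row` is a dict of {field_name: feature_value}; names and N_BINS
        are illustrative, not taken from the original question."""
        parts = [str(label)]
        for field_id, (field_name, value) in enumerate(sorted(row.items())):
            # Hashing trick: map the (field, value) string to a bounded index.
            idx = murmurhash3_32(f"{field_name}={value}", positive=True) % N_BINS
            parts.append(f"{field_id}:{idx}:1")
        return " ".join(parts)

    sample = {"user_id": "u_42", "ad_id": "ad_9", "device": "mobile"}
    print(to_ffm_line(1, sample))  # e.g. "1 0:<hashed idx>:1 1:<hashed idx>:1 2:<hashed idx>:1"

If only the sparse design matrix is needed (for a plain FM, say), sklearn.feature_extraction.FeatureHasher produces it directly; the extra step above is only about preserving the field id that FFM requires alongside each hashed feature index.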
