I am really intrigued by the time complexity of the learning phase of a Hopfield neural network: how does it depend on the number of training examples and the number of attributes? Source code of Hopfield neural network. Can we say that the time complexity of a Hopfield neural network depends quadratically on the number of training examples, or quadratically on the number of attributes? Thank you!
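For concreteness, here is a minimal sketch of the standard one-shot Hebbian learning rule for a Hopfield network (variable names are illustrative, not from the question's source code). Each pattern updates every pair of the N units once, so training costs O(P · N²): linear in the number of training examples P, quadratic in the number of attributes N.

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian learning: O(P * N^2) for P bipolar patterns of N attributes."""
    P, N = patterns.shape
    W = np.zeros((N, N))
    for x in patterns:          # P iterations
        W += np.outer(x, x)     # N^2 work per pattern
    np.fill_diagonal(W, 0)      # no self-connections
    return W / P

# Example: three random bipolar (+1/-1) patterns over 100 units
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 100))
W = hopfield_train(patterns)
```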
I was wondering what the big-O computational complexity of a deep neural architecture is, in a broad sense (as a function of the different parameters of the net). How do recurrent, convolutional, and other architectures change it?
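As a back-of-the-envelope sketch (layer sizes below are illustrative): a forward pass through a fully connected layer with n_in inputs and n_out outputs costs O(n_in · n_out) multiply-adds, so an MLP is the sum of those products over layers. A 2D convolution costs roughly O(H · W · k² · c_in · c_out) per layer, and a recurrent net multiplies its per-step cost by the sequence length.

```python
def mlp_forward_flops(layer_sizes):
    """Multiply-adds for one forward pass through a dense network."""
    return sum(n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

def conv2d_flops(h, w, k, c_in, c_out):
    """Multiply-adds for one 2D convolution layer (stride 1, 'same' padding)."""
    return h * w * k * k * c_in * c_out

print(mlp_forward_flops([784, 256, 128, 10]))   # 234752 multiply-adds
print(conv2d_flops(32, 32, 3, 3, 64))           # 1769472 multiply-adds
```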
I am working on a real-time recommender system that predicts a product for a user using deep learning techniques (wide & deep learning, deep & cross network, etc.). The product catalogue can be huge (thousands to a million items), and for a given user the model needs to be evaluated against each product in real time. As scalability is an important concern, is there any way to reduce the serving time complexity by tuning the model architecture?
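One common way to cut serving cost, sketched below under the assumption that the model can be factored into a two-tower form (the names item_emb/user_emb are hypothetical): precompute one embedding per product offline, then at request time run only the user tower once and score the whole catalogue with a single matrix-vector product. For very large catalogues, an approximate nearest-neighbour index (e.g. FAISS) replaces the exact scan.

```python
import numpy as np

# Offline: item embeddings from the item tower, refreshed per catalogue update.
# Shapes are illustrative: 1M products, 64-dim embeddings.
item_emb = np.random.randn(1_000_000, 64).astype(np.float32)

def recommend(user_emb, item_emb, k=10):
    """Score every product with one mat-vec, then take the top k.

    O(catalogue_size * dim) per request instead of one full
    model evaluation per product.
    """
    scores = item_emb @ user_emb                  # (n_items,)
    top = np.argpartition(-scores, k)[:k]         # unordered top-k
    return top[np.argsort(-scores[top])]          # sorted top-k

user_emb = np.random.randn(64).astype(np.float32)
print(recommend(user_emb, item_emb))
```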
How can I measure or find the time complexity and memory complexity of a model like VGG16 or ResNet50? Also, will it differ from one machine to another, for example a GTX GPU versus an RTX GPU?
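Parameter count and FLOPs are properties of the architecture and do not change across machines; wall-clock time and peak memory do, so those have to be measured on the target hardware. A minimal sketch with tf.keras (assuming TensorFlow is installed; the timing loop is illustrative):

```python
import time
import numpy as np
from tensorflow.keras.applications import VGG16

model = VGG16(weights=None)                   # architecture only, no download
print("parameters:", model.count_params())    # machine-independent

x = np.random.rand(1, 224, 224, 3).astype("float32")
model.predict(x)                              # warm-up: graph build, allocation

t0 = time.perf_counter()
for _ in range(10):
    model.predict(x)
t = (time.perf_counter() - t0) / 10
print(f"mean inference time: {t * 1000:.1f} ms")   # machine-dependent
```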
Is there a documented source for the time complexities of sklearn's implementations of supervised algorithms, specifically RandomForestClassifier and LogisticRegression? Alternatively, can we say that the time taken by sklearn's algorithms is roughly the theoretical worst-case time of those algorithms?
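To my knowledge, sklearn does not publish a single table of fit-time complexities (some estimators note complexity in their User Guide sections), and the implementations include shortcuts that can beat worst-case bounds, so measuring on your own data is often the safer route. A minimal timing sketch (dataset sizes are illustrative):

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

for n in (1_000, 10_000, 100_000):
    X, y = make_classification(n_samples=n, n_features=20, random_state=0)
    for clf in (RandomForestClassifier(n_estimators=100, random_state=0),
                LogisticRegression(max_iter=1000)):
        t0 = time.perf_counter()
        clf.fit(X, y)
        print(f"n={n:>7}  {type(clf).__name__:<24} "
              f"{time.perf_counter() - t0:.2f}s")
```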
I have been thinking about this problem for a while, and I'm curious whether anyone knows of a good paper on it or has ideas for algorithms or improvements to the framework. The task is to store the formulas for approximate tangent lines between previous inflection points and to add unique points/rays to memory over time. A rough framework of how I intend to achieve this is as follows: Generate a recursively smoothed filter on the data and call a …
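A minimal sketch of my reading of that first step (gaussian_filter1d stands in for whatever recursive smoother is actually intended): smooth the series, locate inflection points as sign changes of the discrete second derivative, and store the tangent line slope and intercept at each one.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def tangents_at_inflections(y, sigma=5.0):
    """Return (index, slope, intercept) for each inflection of smoothed y."""
    s = gaussian_filter1d(y, sigma)          # stand-in for a recursive smoother
    d2 = np.diff(s, 2)                       # discrete second derivative
    infl = np.where(np.sign(d2[:-1]) != np.sign(d2[1:]))[0] + 1
    slopes = np.gradient(s)                  # first derivative everywhere
    return [(i, slopes[i], s[i] - slopes[i] * i) for i in infl]

y = np.cumsum(np.random.default_rng(0).standard_normal(500))
for i, m, b in tangents_at_inflections(y)[:5]:
    print(f"x={i}: y ~ {m:.3f}*x + {b:.3f}")
```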
I am loading 1.5M images with 80,000 classes into a Keras generator (or I will have to when I eventually train) and am using a pandas dataframe to do so. The problem is that with so many images, my code takes a long time to run. The specific bottleneck is replacing a value in the dataframe; it takes too long: df = a pandas dataframe with all the names of the files in # Code …
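Without the full snippet it is hard to be exact, but per-row replacement in a Python loop is the usual culprit at this scale. A hedged sketch (the column name filename and the extension swap are hypothetical): do the replacement as one vectorized operation over the whole column instead of row by row.

```python
import pandas as pd

# Illustrative frame: 1.5M rows of file names (column name is hypothetical).
df = pd.DataFrame({"filename": [f"img_{i}.jpeg" for i in range(1_500_000)]})

# Slow: millions of individual .loc assignments in a Python loop.
# for i in range(len(df)):
#     df.loc[i, "filename"] = df.loc[i, "filename"].replace(".jpeg", ".jpg")

# Fast: one vectorized string operation over the whole column.
df["filename"] = df["filename"].str.replace(".jpeg", ".jpg", regex=False)

# For whole-value substitutions, a dict-based map is also vectorized:
# df["label"] = df["label"].map(label_to_id)
```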