I appreciate the fact that Jupyter runs in an isolated mode, and I have read several posts about it by now. What I don't understand is why the JUPYTER_PATH variable is ignored, as is manually appending (as a proof of concept) the path of the current site-packages from my brewed Python directory. I couldn't find any documentation specific to JupyterLab, so I assumed this would have worked out of the box. Any idea on how to avoid installing all …
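As a quick diagnostic (not a fix), it can help to confirm which paths the running kernel and Jupyter itself actually consult; a minimal sketch:

```python
# Inside a notebook cell: the directories this kernel searches for imports.
import sys
for p in sys.path:
    print(p)

# The config/data paths Jupyter itself consults; entries from JUPYTER_PATH
# should show up under "data:" if the variable is being picked up.
!jupyter --paths
```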
I'm dealing with a materials science dataset and I'm in the following situation. I have data organized like this:

```
Chemical_Formula   Property_name        Property_Scalar
He                 Electrical conduc.   1
NO2                Resistance           50
CuO3               Hardness             ...
...                ...                  ...
CuO3               Fluorescence         300
He                 Toxicity             39
NO2                Hardness             80
...                ...                  ...
```

As you can see, it is really messy, because the same chemical formula appears more than once throughout the dataset, each time referring to a different property. My question is, …
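A common way to reshape this kind of long-format data into one row per formula is a pandas pivot; a minimal sketch (column names taken from the excerpt above, the aggregation choice is an assumption):

```python
import pandas as pd

# Toy long-format data mirroring the structure above.
df = pd.DataFrame({
    "Chemical_Formula": ["He", "NO2", "CuO3", "He"],
    "Property_name": ["Electrical conduc.", "Resistance", "Fluorescence", "Toxicity"],
    "Property_Scalar": [1, 50, 300, 39],
})

# One row per formula, one column per property; duplicate
# (formula, property) pairs get averaged here.
wide = df.pivot_table(index="Chemical_Formula",
                      columns="Property_name",
                      values="Property_Scalar",
                      aggfunc="mean")
print(wide)
```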
So I opened up a Google Cloud account and have access to global and local (us-east1) resources (Compute Engine API, NVIDIA K80 GPUs), and connected it to my Dropbox. Next, I followed this YouTube video to try to connect it to my Jupyter notebook. The code to be entered into the Google Cloud Platform is as follows:

```bash
sudo apt-get update
sudo apt-get --assume-yes upgrade
sudo apt-get --assume-yes install software-properties-common
sudo apt-get install python-setuptools python-dev build-essential
```
…
I am on Windows, using Jupyter Notebook, the MediaPipe Holistic solution, Python, and TensorFlow. I am using the Holistic solution and trying to get the left-hand, right-hand, and pose landmarks, giving my webcam feed as input. When I run the code below, there are no errors and everything is good. After this, I was trying to check whether I got the landmarks using `results.left_hand_landmarks.landmark`, and the number of landmarks using `len(results.left_hand_landmarks.landmark)`, but I am getting this: `AttributeError: 'NoneType' object …`
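MediaPipe leaves `left_hand_landmarks` (and the other landmark fields) as `None` for any frame in which that body part was not detected, so the access needs a guard; a minimal sketch:

```python
# results.left_hand_landmarks is None whenever no left hand was detected
# in the frame, so check it before reading .landmark.
if results.left_hand_landmarks is not None:
    print(len(results.left_hand_landmarks.landmark))
else:
    print("no left hand detected in this frame")
```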
For instance, I have a row value on dataset_1: "Entity" = Apple, and on dataset_2: "Entity" = iCloud Apple (Entity is the column). I need to merge one dataset into the other by the Entity column, but to do that I need them to have exactly the same value, and Apple ≠ iCloud Apple. Both datasets are huge, so I can't do this manually, one by one.
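One common approach is to build a mapping from each entity in one dataset to its closest string match in the other before merging; a minimal sketch using the standard library's difflib (the cutoff value and column names are assumptions):

```python
import difflib
import pandas as pd

df1 = pd.DataFrame({"Entity": ["Apple", "Google"]})
df2 = pd.DataFrame({"Entity": ["iCloud Apple", "Google LLC"], "value": [1, 2]})

def closest(name, candidates, cutoff=0.5):
    """Return the closest candidate string, or None if nothing clears the cutoff."""
    matches = difflib.get_close_matches(name, candidates, n=1, cutoff=cutoff)
    return matches[0] if matches else None

# Map each dataset_2 entity onto its nearest dataset_1 entity, then merge.
df2["Entity_matched"] = df2["Entity"].apply(lambda s: closest(s, df1["Entity"].tolist()))
merged = df1.merge(df2, left_on="Entity", right_on="Entity_matched", how="left")
print(merged)
```

For genuinely huge datasets a dedicated fuzzy-matching library such as rapidfuzz would be much faster, but the idea is the same.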
I am trying to create a bar plot for a pandas Series, but the plot is not showing up in Jupyter Notebook. When I run the cell, I only get the following, and I do not see the bar plot: `<matplotlib.figure.Figure at 0x7fa555abc080>` Please advise.
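Seeing only the `Figure` repr usually means the inline backend isn't active in that kernel; a minimal sketch of the usual fix (the Series here is a stand-in):

```python
# Enable inline figure rendering in the notebook (run once per kernel).
%matplotlib inline

import matplotlib.pyplot as plt
import pandas as pd

s = pd.Series([3, 1, 4], index=["a", "b", "c"])  # placeholder data
s.plot(kind="bar")
plt.show()  # explicitly draw the current figure
```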
Using Twint, I'm able to get tweets for each city in the list, but when creating a DataFrame I only get tweets from the last city in the list. Is it possible to append the tweets city by city?

```python
india = ["chennai", "pune", "mumbai"]
for city in india:
    print(city)
    c = twint.Config()
    c.Search = "phone"
    c.Lang = "en"
    c.Pandas = True
    c.Since = "2021-01-01"
    c.Near = city
    c.Limit = 10
    twint.run.Search(c)

data = twint.output.panda.Tweets_df[["date", "tweet", "near"]]
data.head()
```

The output of `data.head()` starts like this:

```
   date                 tweet                near
0  2021-03-23 18:47:09  Cameron FaulknerThe …
```
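`twint.output.panda.Tweets_df` appears to be rebuilt on each `twint.run.Search` call, which would explain why only the last city survives. One option is to copy the frame inside the loop and concatenate at the end; a sketch under that assumption:

```python
import pandas as pd
import twint

india = ["chennai", "pune", "mumbai"]
frames = []

for city in india:
    c = twint.Config()
    c.Search = "phone"
    c.Lang = "en"
    c.Pandas = True
    c.Since = "2021-01-01"
    c.Near = city
    c.Limit = 10
    twint.run.Search(c)

    # Grab this city's results before the next run overwrites them.
    df = twint.output.panda.Tweets_df[["date", "tweet", "near"]].copy()
    df["city"] = city  # record which query produced these rows
    frames.append(df)

data = pd.concat(frames, ignore_index=True)
data.head()
```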
I’m trying to train an artificial intelligence model on OVHcloud AI Notebooks with Mozilla's Common Voice dataset. The problem is that this dataset is about 70 GB. I have tried downloading it to my computer and then uploading it to the OVHcloud Object Storage so I can use it, but this takes excessively long. Is it possible to download this file directly on the notebook? (My file is a tar.gz file.)
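Since notebook cells can run shell commands, one option is to fetch the archive straight onto the notebook's storage; a minimal sketch (the URL is a placeholder for whatever download link Common Voice gives you):

```python
# In a notebook cell: download the archive directly to the notebook's disk.
# The URL below is a placeholder; substitute the real Common Voice link.
!wget -c "https://example.com/common-voice.tar.gz" -O common-voice.tar.gz

# Extract in place once the download finishes.
!tar -xzf common-voice.tar.gz
```

The `-c` flag lets an interrupted download resume, which matters for a 70 GB file.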
I am training my model on almost 200,000 images. I'm using Jupyter, and now, after 3 days of training (I used 800 epochs and batch size = 600), I get "The kernel appears to have died. It will restart automatically", and this appears after only 143 epochs. Can anyone help me solve this? Also, can anyone advise me on working with a large amount of data? I am struggling with this dataset and …
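Kernel deaths mid-training are very often out-of-memory kills. If the whole image set is loaded into memory at once, streaming batches from disk can help; a sketch with Keras's `ImageDataGenerator` (the directory layout, image size, and batch size are assumptions):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stream images from disk in batches instead of holding ~200k in RAM.
datagen = ImageDataGenerator(rescale=1.0 / 255)
train_gen = datagen.flow_from_directory(
    "data/train",           # hypothetical folder: one subfolder per class
    target_size=(128, 128),
    batch_size=64,          # a smaller batch size also lowers peak memory
    class_mode="categorical",
)

# model.fit(train_gen, epochs=...)  # Keras pulls one batch at a time
```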
I am currently learning reinforcement learning and wanted to use it on the CarRacing-v0 environment. I have successfully done it using the PPO algorithm, and now I want to use a DQN algorithm, but when I want to train the model it gives me this error:

```
AssertionError: The algorithm only supports (<class 'gym.spaces.discrete.Discrete'>,) as action spaces but Box([-1. 0. 0.], [1. 1. 1.], (3,), float32) was provided
```

Here is my code:

```python
import os
import gym
from stable_baselines3 import DQN
from …
```
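DQN only supports discrete action spaces, while CarRacing-v0 exposes a continuous Box of [steering, gas, brake]. One workaround is to wrap the environment so a small fixed set of actions is exposed as a Discrete space; a sketch (the particular action set below is an arbitrary choice):

```python
import gym
import numpy as np
from stable_baselines3 import DQN

class DiscreteCarActions(gym.ActionWrapper):
    """Expose a handful of fixed [steer, gas, brake] combos as a Discrete space."""

    # Hypothetical action set: steer left, steer right, accelerate, brake.
    ACTIONS = np.array([
        [-1.0, 0.0, 0.0],
        [ 1.0, 0.0, 0.0],
        [ 0.0, 1.0, 0.0],
        [ 0.0, 0.0, 0.8],
    ], dtype=np.float32)

    def __init__(self, env):
        super().__init__(env)
        self.action_space = gym.spaces.Discrete(len(self.ACTIONS))

    def action(self, act):
        # Map the discrete index chosen by DQN back to a continuous action.
        return self.ACTIONS[act]

env = DiscreteCarActions(gym.make("CarRacing-v0"))
model = DQN("CnnPolicy", env, verbose=1)
# model.learn(total_timesteps=100_000)
```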
This page describes the JupyterHub Hub database setup: https://jupyterhub.readthedocs.io/en/stable/reference/database.html The instructions (and the config so far) point to using either PostgreSQL or MySQL. To reduce the number of support skill sets, we have to use SQL Server (ideally) or Oracle. Is this possible? If so, please include an example config.
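JupyterHub talks to its database through SQLAlchemy, and the connection string is set via `c.JupyterHub.db_url`. SQL Server is not among the documented backends, so the following is an untested sketch of what the SQLAlchemy URL would look like, assuming a pyodbc driver is installed:

```python
# jupyterhub_config.py -- untested sketch: SQL Server is not a documented
# JupyterHub backend, but JupyterHub hands db_url straight to SQLAlchemy.
# Requires the pyodbc package and an ODBC driver on the Hub host.
c.JupyterHub.db_url = (
    "mssql+pyodbc://jupyterhub_user:PASSWORD@dbserver/jupyterhub"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)
```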
Does anyone know a way to add cell numbers (not line numbers within cells)? I have been using nbextensions for a while, but it does not seem to have the ability to label cells with numbers.
For example, a trend movement would be: "USA Beverage" moves to "USA Alcoholic Beverage" after some time. What statistical tests are available to identify this kind of movement between two time series? Is there any way to identify whether a trend has moved from one time series to another by analysing the two series?
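One simple starting point (a diagnostic, not a formal test) is lagged cross-correlation: if one series' past values correlate strongly with the other's present values, that hints at a lead-lag relationship. A sketch with pandas (the series here are synthetic placeholders); Granger-causality tests in statsmodels would be the more formal follow-up:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
beverage = pd.Series(rng.normal(size=200)).cumsum()              # placeholder series
alcoholic = beverage.shift(10).fillna(0) + rng.normal(size=200)  # trails it by 10 steps

# Correlation of `beverage` shifted by k steps against `alcoholic`:
# a peak at some k > 0 suggests `beverage` leads `alcoholic` by k periods.
for k in range(0, 21, 5):
    print(k, round(beverage.shift(k).corr(alcoholic), 3))
```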
I am running Jupyter on a server in a virtual environment, and I tunnel my connection so I can access Jupyter in my browser. When I SSH into the server, I can use the pandas module in both IPython and Python 3. I ran this code in IPython:

```
In [1]: import pandas as pd

In [2]: print(pd.__file__)
/home/ubuntu/.local/lib/python3.6/site-packages/pandas/__init__.py
```

Then I tried adding it to my path in Jupyter with the code below; still no luck:

```python
import os
os.getcwd()
import sys
sys.path.append('/home/ubuntu/.local/lib/python3.6/site-packages/pandas/__init__.py')
import pandas …
```
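Two things are worth checking here. First, `sys.path` entries must be directories, not files, so the site-packages directory itself is what needs appending, not the path to `pandas/__init__.py`. Second, comparing `sys.executable` between the SSH session and the notebook shows whether the kernel is even the same interpreter:

```python
import sys

# The kernel's interpreter: if this differs from the Python used over SSH,
# the notebook is not looking at the same site-packages at all.
print(sys.executable)

# Append the site-packages directory itself, not pandas/__init__.py.
sys.path.append('/home/ubuntu/.local/lib/python3.6/site-packages')
import pandas as pd
print(pd.__version__)
```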
I am new to Azure ML. I am working on sentiment analysis on a small tweet dataset with the help of fastText embeddings (the fastText file 'wiki-news-300d-1M.vec' is around 2.3 GB, which I downloaded into my folder). When I run the program in the Jupyter notebook, everything runs well. But when I try to deploy the model in Azure ML, when I attempt to run the experiment:

```python
run = exp.start_logging()
run.log("Experiment start time", str(datetime.datetime.now()))
```

I am getting the error message: While …
Vineeth Sai indicated in this that the problem is solved with the following code:

```bash
pip install cntk
```

However, I am getting the error shown in the attached image:
I have a dataset of 25 images. I wish to run Faster R-CNN or YOLOv3 object detection models on these images. I want to create my custom trained model and get the weights after running, say, 10 epochs. Later I can save these weights and use them for prediction. In short: build a model, train it on my image dataset, and get the weights. Is it possible?
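In principle yes, though 25 images is very few for detectors of this size, so transfer learning from pretrained weights plus heavy augmentation is usually needed. The save-and-reuse part is straightforward in most frameworks; a Keras-flavoured sketch (the tiny model below is a stand-in for whatever detector you actually build):

```python
from tensorflow import keras

# Stand-in model: in practice this would be your Faster R-CNN / YOLOv3 network.
model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    keras.layers.Conv2D(8, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# model.fit(train_images, train_labels, epochs=10)

# Persist the learned weights after training ...
model.save_weights("detector.weights.h5")

# ... and load them later into an identically built model for prediction.
model.load_weights("detector.weights.h5")
```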
I'm trying to use SVR to predict a certain feature. I create the model with the following code:

```python
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

X = data
# this is the outcome variable
y = data.iloc[:, 10].values

sc_X = StandardScaler()
sc_y = StandardScaler()
X2 = sc_X.fit_transform(X)
y = sc_y.fit_transform(y.reshape(-1, 1))

# my_custom_kernel looks at certain columns of X2 / scaled data
my_regressor = SVR(kernel=my_custom_kernel)
my_regressor = my_regressor.fit(X2, y)
```

After creating the model, I want to test it to …
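For testing, the main wrinkle is that both X and y were scaled, so new inputs must go through `sc_X.transform` and predictions come back through `sc_y.inverse_transform`; a sketch under that assumption (`X_test` is a placeholder for the hold-out rows):

```python
# Scale the held-out inputs with the scaler fitted on the training data,
# then map the scaled predictions back to the original units of y.
X_test_scaled = sc_X.transform(X_test)          # X_test: hypothetical hold-out rows
y_pred_scaled = my_regressor.predict(X_test_scaled)
y_pred = sc_y.inverse_transform(y_pred_scaled.reshape(-1, 1))
print(y_pred[:5])
```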