I would like to cluster/group the curves in the attached picture with Python. The data is already normalized, and my approach would be to use DTW (dynamic time warping) to calculate the distances and then feed those into a clustering algorithm (like k-means or DBSCAN). Do I pick out one trajectory as a starting curve to compare the other curves to, or do I calculate an 'average' curve of all curves and use that as the starting …
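A minimal sketch of that approach, assuming `curves` is an array of equal-length normalized series (the array, `eps`, and `min_samples` values below are placeholders): with a full pairwise DTW distance matrix fed to DBSCAN via metric='precomputed', no single reference curve or average curve is needed.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def dtw(a, b):
        # Classic O(len(a)*len(b)) dynamic-time-warping distance.
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    curves = np.random.rand(10, 50)   # placeholder: 10 normalized curves

    # Pairwise, symmetric DTW distance matrix.
    n = len(curves)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw(curves[i], curves[j])

    labels = DBSCAN(eps=2.0, min_samples=2, metric='precomputed').fit_predict(dist)

DBSCAN works directly on the precomputed matrix; k-means, by contrast, needs coordinate vectors and a notion of a mean, which is why a pairwise-distance formulation sidesteps the "starting curve" question entirely.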
When I develop a plugin, do I always have to call: load_plugin_textdomain('whatever', '', 'whatever/languages'); Or is there a setup where I put my translations into a specific folder with a specific name and they get loaded by default, with the domain being the plugin slug? The reason is that I don't like using that deprecated second parameter, and I'd prefer to omit the whole function call if a default setup exists.
I'm using a plugin called "private content login redirect". The plugin actually contains only this piece of code:

    add_action( 'template_redirect', 'private_content_redirect_to_login', 9 );
    function private_content_redirect_to_login() {
        global $wp_query, $wpdb;
        if ( is_404() ) {
            $private  = $wpdb->get_row( $wp_query->request );
            $location = wp_login_url( $_SERVER["REQUEST_URI"] );
            if ( 'private' == $private->post_status ) {
                wp_safe_redirect( $location );
                exit;
            }
        }
    }

What it does is this: when a non-logged-in user views a private page, it redirects them to the login page. After they log in, they are taken back to the private …
I have some training data (TRAIN) and some test data (TEST). Each row of each table contains an observed class (X) and some columns of binary predictors (Y). I'm using a Python script that is intended to predict the probability (Pr) of X given Y in the test data, based on the training data. It uses a Bernoulli naive Bayes classifier. Here is my script: https://stackoverflow.com/questions/55187516/look-up-bernoullinb-probability-in-dataframe It works on the dummy data that is included with the script. On the real …
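For reference, a minimal self-contained sketch of that workflow; the column layout here is an assumption, not the linked script's actual one:

    import pandas as pd
    from sklearn.naive_bayes import BernoulliNB

    # Placeholder frames standing in for TRAIN/TEST: class column 'X',
    # binary predictor columns 'Y1' and 'Y2'.
    TRAIN = pd.DataFrame({'X': ['a', 'b', 'a', 'b'],
                          'Y1': [1, 0, 1, 0], 'Y2': [0, 1, 1, 0]})
    TEST = pd.DataFrame({'Y1': [1, 0], 'Y2': [0, 1]})

    clf = BernoulliNB().fit(TRAIN[['Y1', 'Y2']], TRAIN['X'])
    # One probability column per class, aligned with clf.classes_.
    probs = pd.DataFrame(clf.predict_proba(TEST), columns=clf.classes_)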
I run gp_minimize with 20 calls: the correct values of the function are the blue markers at the bottom. What are the strange marker distributions at the top left?

    optimize_result = gp_minimize(func_objective, space_xgboost,
                                  n_calls=n_calls,
                                  n_initial_points=n_initial_points,
                                  x0=x0_init, random_state=1)
    ax = plot_convergence(optimize_result, true_minimum=optimize_result.fun)
    ax.figure.tight_layout()
    ax.figure.savefig(f"results/convergence_hpo_skopt.png")
I'm developing a decoupled front end consuming the REST API. I'm using create-react-app, so I can enter a proxy field pointing to WP in my package.json and write my calls like fetch('/wp-json/v2/pages/...') if I run yarn start from the front-end directory. I'd love to actually have this theme in my wp-content/themes/my-theme directory, though I'm no .htaccess god. How can I make it so that generic requests like mysite.com/jibble/jabble are handled by my SPA routing, while protected routes like mysite.com/wp-admin/* and mysite.com/wp-content/uploads/* are still handled by …
I have sales data which is seasonal and has no trend. The frequency of this series is 15 minutes. I don't know how to compute the exact period of the seasonality, whether it is daily, weekly, monthly, or yearly, but from plotting it I think there is a yearly pattern. I tried removing the seasonality before forecasting by lagging the series by a year and differencing the two, but even the result has a yearly pattern. Code with what …
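One hedged way to check candidate periods is to compare the autocorrelation at the corresponding lags; with 15-minute data a day is 96 steps and a week 672 (the series below is a random placeholder, not the real sales data):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import acf

    idx = pd.date_range('2020-01-01', periods=96 * 90, freq='15min')
    sales = pd.Series(np.random.rand(len(idx)), index=idx)  # placeholder data

    candidates = {'daily': 96, 'weekly': 96 * 7}  # yearly would be 96 * 365
    corr = acf(sales, nlags=max(candidates.values()))
    for name, lag in candidates.items():
        print(name, corr[lag])  # a pronounced peak suggests that period

A strong spike at one of these lags (relative to its neighbours) is evidence for that seasonal period, and the same lag is the one to use when differencing.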
I have a Gutenberg block with a couple of inner blocks (using InnerBlocks from @wordpress/block-editor). In the edit function, I pull the attributes of all inner blocks and put them in an array. There is one case where it doesn't work: if I create a new inner block in the WordPress admin, change some settings inside this block, and then, without clicking anywhere else, click the "Update" button, the attributes of the last inner block will not be pulled, because between …
I recently read Fully Convolutional Networks for Semantic Segmentation by Jonathan Long, Evan Shelhamer, and Trevor Darrell. I don't understand what "deconvolutional layers" do or how they work. The relevant part is 3.3. Upsampling is backwards strided convolution: Another way to connect coarse outputs to dense pixels is interpolation. For instance, simple bilinear interpolation computes each output $y_{ij}$ from the nearest four inputs by a linear map that depends only on the relative positions of the input and output cells. In …
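As a toy illustration (mine, not the paper's), a 1-D transposed ("deconvolutional") layer with stride 2 upsamples by scattering each input value, scaled by the kernel, into the output; with a triangular kernel this reproduces linear interpolation, which is the 1-D analogue of the bilinear case the paper describes:

    import numpy as np

    def conv_transpose_1d(x, kernel, stride=2):
        # Scatter-add each input value, times the kernel, at strided offsets.
        out = np.zeros(stride * (len(x) - 1) + len(kernel))
        for i, v in enumerate(x):
            out[i * stride:i * stride + len(kernel)] += v * kernel
        return out

    x = np.array([1.0, 2.0, 3.0])
    print(conv_transpose_1d(x, np.array([0.5, 1.0, 0.5])))
    # [0.5 1.  1.5 2.  2.5 3.  1.5] -- midpoints are averages of neighbours

The point of the paper's section 3.3 is that this kernel need not be fixed to the bilinear one: it can be initialized that way and then learned like any other convolution weight.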
This is the original DataFrame: What I wanted: I wanted to convert the above DataFrame into this multi-indexed-column DataFrame: I managed to do it with this piece of code:

    # tols : original dataframe
    cols = pd.MultiIndex.from_product([['A','B'], ['Y','X'], ['P','Q']])
    tols.set_axis(cols, axis=1, inplace=False)

What I tried: I tried to do this with the reindex method, like this:

    cols = pd.MultiIndex.from_product([['A','B'], ['Y','X'], ['P','Q']])
    tols.reindex(cols, axis='columns')

It resulted in an output like this …
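A small illustrative sketch of the difference (the frame here is a placeholder): set_axis relabels the existing columns, while reindex aligns against the existing labels, so brand-new MultiIndex labels come back as all-NaN columns.

    import pandas as pd

    tols = pd.DataFrame([range(8)], columns=range(8))  # placeholder 8-column frame
    cols = pd.MultiIndex.from_product([['A', 'B'], ['Y', 'X'], ['P', 'Q']])

    relabelled = tols.set_axis(cols, axis=1)       # keeps the data, new labels
    aligned = tols.reindex(cols, axis='columns')   # all NaN: labels don't match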
I am trying to fetch all direct children of a page, but I am getting all children and grandchildren as well. Any ideas? PHP source:

    $args = array(
        'child_of'     => $post->ID,
        'parent '      => $post->ID,
        'hierarchical' => 0,
        'sort_column'  => 'menu_order',
        'sort_order'   => 'asc'
    );
    $mypages = get_pages( $args );
    foreach ( $mypages as $post ) {
        $post_template = the_page_template_part();
        get_template_part( 'content', $post_template );
    }

My $args should be correct according to the documentation, but it's totally ignoring parent …
I was wondering if there is any way to add an okay-icon overlay for video post types. I already came up with code that lets me set a default thumbnail if the post has no thumbnail; sample code below.

    <div class="img-responsive">
    <?php if ( has_post_thumbnail() ) : ?>
        <a href="<?php the_permalink(); ?>"><?php the_post_thumbnail('big-grid-one-image'); ?></a>
    <?php else : ?>
        <img src="<?php echo get_template_directory_uri(); ?>/img/no-thumb/acardio-548px-450px.png" />
    <?php endif; ?>
    </div>
I have a multivariate time series containing 3D position data ($x, y, z$) and orientation data (as quaternions) obtained from motion sensors. My goal is to forecast the future position/orientation, and for this I'm looking into using sequence models, especially LSTMs. A quaternion has 4 elements, one of them denoting the real/scalar part (say $q_w$) and the other three denoting the imaginary/vector part (say $q_x, q_y, q_z$). So my time series has 7 columns in total. My question: considering that quaternion elements …
Essentially I want to pass a program some variables, all gathered from a user on my site, and have the program give a "score" of how authentic the user is deemed to be. I already have a large set of data with preassigned "scores" and want to start creating the scores myself (this is currently done through a third party). After reading about machine learning and asking some friends, I've chosen Python (still open to ideas), but I'm unsure which …
I am training a neural network with some convolutional layers for multi-class image classification. I am using Keras to build and train the model, with 1600 images for all categories for training. I have used softmax as the final-layer activation function. The model predicts well on all true categories, with high softmax probability. But when I test the model on new or unknown data, it still predicts with high softmax probability. How can I reduce that? Should I make …
My visual editor isn't working on a site right after moving it to a new server. I assumed a permissions issue, but the file itself is set to 644. I tried 755 on it as well, and still no go. The directories all the way up are 755. I can access other files in the directory, but not this one file. Any ideas?
Is there a way to save template data into the wp_posts table? I made a custom template that automatically builds tables and some sentences from the metadata that users insert. I wanted to let users choose a template and write in a standardized format with custom fields. Users can now just put some text in my custom field (at the bottom of my post edit page) and click the template in the sidebar, and the contents show on the post page! And …
I have a vector (a column from an imported CSV file) that I'd like to compute some summary statistics from, put them in a table, and include in a small report. Can R do this? Basically I have 12 columns (one for each dataset; in the created table I want them as rows), and for each I'd like to calculate the mean, min, max, coefficient of variation, sd, kurtosis, etc. What is a good way to do this?
I'm working on a custom WooCommerce theme using my own code. I need to add products to the cart without a page refresh. I'm using the code below; it works correctly, but when I add a product, the shop page reloads. Could someone help me improve this?

    <?php $product = get_product( get_the_ID() ); ?>
    <a href="<?= $product->add_to_cart_url(); ?>" title="Add to cart" class="addToCart"></a>
Whenever we have a dataset to be preprocessed, before feeding it to the model we convert the categorical values to numerical values, generally using label encoding, one-hot encoding, and similar techniques, but all of these are done manually, going through each column. But what if our dataset is huge in terms of columns (e.g. 2000 columns)? It won't be possible to go through each column manually, so in such cases how do we handle encoding? Are there any …
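A minimal sketch of one way to avoid the per-column work: let a ColumnTransformer pick out every object-typed column by dtype and one-hot encode them all at once (the frame below is a placeholder):

    import pandas as pd
    from sklearn.compose import ColumnTransformer, make_column_selector
    from sklearn.preprocessing import OneHotEncoder

    df = pd.DataFrame({'color': ['r', 'g', 'b'], 'size': ['S', 'M', 'L'],
                       'price': [1.0, 2.0, 3.0]})   # placeholder data

    pre = ColumnTransformer(
        [('cat', OneHotEncoder(handle_unknown='ignore'),
          make_column_selector(dtype_include=object))],  # all string columns
        remainder='passthrough')                         # numeric columns kept
    X = pre.fit_transform(df)

The same selector pattern scales to thousands of columns, since columns are chosen by dtype rather than listed by name.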
I am currently looking into structuring data and workflows for my end-to-end ML pipeline. I therefore have multiple problems, and ideally I am looking for one platform that can do all of the following:
- Visualize and organize multiple datasets, ideally something like the Kaggle dataset web interface
- Do dataset exploration to quickly visualize errors in data, biases in annotations, etc.
- Annotate images and potentially point clouds
- Commenting functionality for all features
- Keep track of who annotated what on what date
- dataset …
I am trying to insert some ad code in a site after X number of elements. The base code I am using splits the content on paragraph tags (from: http://www.wpbeginner.com/wp-tutorials/how-to-insert-ads-within-your-post-content-in-wordpress/):

    function prefix_insert_after_list_item( $insertion, $paragraph_id, $content ) {
        $closing_p  = '</p>';
        $paragraphs = explode( $closing_p, $content );
        var_dump( $paragraphs );
        foreach ( $paragraphs as $index => $paragraph ) {
            if ( trim( $paragraph ) ) {
                $paragraphs[$index] .= $closing_p;
            }
            if ( $paragraph_id == $index + 1 ) {
                $paragraphs[$index] .= $insertion;
            }
        }
        return implode( '', …
I have a dataset that has the following columns: Category, Product, Launch_Year, and columns named 2010, 2011, and 2012. These year columns contain the sales of the product in that year. The goal is to create another column, Launch_Sum, that calculates the sum for the Category (not the Product) in each row's Launch_Year:

    test = pd.DataFrame({
        'Category': ['A','A','A','B','B','B'],
        'Product': ['item1','item2','item3','item4','item5','item6'],
        'Launch_Year': [2010,2012,2010,2012,2010,2011],
        '2010': [25,0,27,0,10,0],
        '2011': [50,0,5,0,20,39],
        '2012': [30,40,44,20,30,42]})

    Category  Product  Launch_Year  2010  2011  2012  Launch_Sum (to be created)
    A         item1    2010         25    50    30    52
    A         item2    …
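A minimal sketch of one way to compute Launch_Sum on the sample frame above: sum each year column per Category, then look up each row's (Category, Launch_Year) pair.

    import pandas as pd

    test = pd.DataFrame({
        'Category': ['A', 'A', 'A', 'B', 'B', 'B'],
        'Product': ['item1', 'item2', 'item3', 'item4', 'item5', 'item6'],
        'Launch_Year': [2010, 2012, 2010, 2012, 2010, 2011],
        '2010': [25, 0, 27, 0, 10, 0],
        '2011': [50, 0, 5, 0, 20, 39],
        '2012': [30, 40, 44, 20, 30, 42]})

    # Category-level totals per year column.
    sums = test.groupby('Category')[['2010', '2011', '2012']].sum()
    # For each row, pick the total of its own category in its launch year.
    test['Launch_Sum'] = [sums.loc[cat, str(year)]
                          for cat, year in zip(test['Category'], test['Launch_Year'])]

For item1 (Category A, launched 2010) this yields 25 + 0 + 27 = 52, matching the expected column.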
np.random.normal(mean, sigma, size) lets you sample a Gaussian distribution based only on mean and standard deviation. I want to create a distribution based on function_name(mean, sigma, skew, kurtosis, size). I tried scipy.stats.gengamma, but I don't understand how to use it. It takes two shape parameters, a and c, and creates a distribution, but it is difficult to interpret which values of a and c will give a particular skewness and kurtosis. Can anyone explain how to use gengamma, or any other way to create …
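A hedged sketch of one approach: gengamma's shape parameters (a, c) don't map to skew and kurtosis directly, but scipy can report the moments a given (a, c) implies, so one can numerically solve for the pair matching a target (the targets and starting point below are placeholders):

    import numpy as np
    from scipy import stats, optimize

    target_skew, target_kurt = 1.0, 2.0   # placeholder targets (excess kurtosis)

    def gap(log_params):
        a, c = np.exp(log_params)         # keep both shape parameters positive
        s, k = stats.gengamma.stats(a, c, moments='sk')
        return (s - target_skew) ** 2 + (k - target_kurt) ** 2

    res = optimize.minimize(gap, x0=np.log([2.0, 1.0]), method='Nelder-Mead')
    a, c = np.exp(res.x)
    samples = stats.gengamma.rvs(a, c, size=1000)

Since skew and kurtosis are invariant under affine transforms, loc and scale can then shift and stretch the samples to the desired mean and sigma without disturbing the matched shape.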
I have a small dataset and I want to assess the effect of a certain type of cases on overall model performance. For example, is the model biased against people of a certain age group? Using a single train-test split, the number of cases of a particular type becomes quite small, and I suspect any findings may be due to randomness. Would it in this scenario make sense to use multiple train-test splits, compute the average performances, and …
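A minimal sketch of the multiple-splits idea, assuming features X and labels y (the data and estimator below are placeholders): repeated stratified splits give a mean and a spread, so chance findings from any single split average out.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

    X, y = make_classification(n_samples=200, random_state=0)  # placeholder data
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
    print(scores.mean(), scores.std())  # spread indicates split-to-split noise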
I have created a Keras model that takes 3 images as input, passes them to individual CNN backbones (mobilenet_v2), and fuses the results of the 3 individual streams. These fused outputs then go through a fully connected network and give probability values for 10 classes. Now when I pass 3 images to my model using model.predict(), I get an output of 3x10 (a list of 3 outputs with 10 values in each). Here is the network snapshot and here is the output [[0.04718336 0.07464679 …
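For reference, a tiny stand-in model (shapes and layers below are placeholders, not the actual three-stream network) showing the Keras call convention for multiple inputs: a model with three inputs expects a list of three arrays, each carrying a batch dimension.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Tiny stand-in for the real three-stream model.
    inputs = [keras.Input(shape=(8,)) for _ in range(3)]
    fused = layers.Concatenate()(inputs)
    out = layers.Dense(10, activation='softmax')(fused)
    model = keras.Model(inputs, out)

    x = [np.random.rand(1, 8) for _ in range(3)]  # batch of ONE sample per input
    print(model.predict(x).shape)                 # (1, 10): one 10-way output

If each of the three arrays instead carries a batch of three samples, predict() returns shape (3, 10), one row per sample, so a 3x10 result usually means three samples were fed rather than one.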
I am working with time series in sklearn. My goal is to have a Pipeline step that replaces each row with a window centered on that row (think convolution). My problem here is that I need all rows (even unlabeled ones) in order to create the windows, but during fitting I want to drop all unlabeled rows. This requires access to both X and y in the transform process. Can this be done with a custom Transformer? From what I …
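A minimal sketch of a custom transformer that builds centered windows (window width and padding mode are placeholder choices); note the constraint the question runs into: a standard sklearn Pipeline passes y to fit() but never lets transform() change or drop y.

    import numpy as np
    from sklearn.base import BaseEstimator, TransformerMixin

    class WindowTransformer(BaseEstimator, TransformerMixin):
        def __init__(self, width=3):
            self.width = width

        def fit(self, X, y=None):
            return self               # y is visible here, but only here

        def transform(self, X):
            X = np.asarray(X)
            half = self.width // 2
            padded = np.pad(X, ((half, half), (0, 0)), mode='edge')
            # Replace each row with the flattened window centered on it.
            return np.stack([padded[i:i + self.width].ravel()
                             for i in range(len(X))])

    Xw = WindowTransformer(width=3).fit_transform(np.arange(10).reshape(5, 2))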
I'm using Gravity Forms for a sign-up form, and I set up a hidden field that is automatically filled with a random string, generated and passed to the form by:

    add_filter( 'gform_field_value_random_number', 'generate_random_number' );
    function generate_random_number( $value ) {
        // Shuffle the alphabet (repeated 5 times) and take the first 7 characters.
        $value = substr( str_shuffle( str_repeat( "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ", 5 ) ), 0, 7 );
        return $value; // the filter callback must return the value
    }

This is used as a unique coupon code per user. This field also appears in their user profile. Up to here all is well. Where I'm having trouble is checking the database that no user already has that coupon code. At …
When adding a category to a menu, I need it to automatically list all the items that belong to that category in a separate UL, like this: http://i.stack.imgur.com/ulOOS.jpg Of course I can do that manually, but I'm looking for a way to add a filter, maybe to the wp_page_menu function, but I'm not sure how it can be done. I'm using a Thematic child theme. Thank you.
I want to perform sentiment analysis for a detected entity, like Google NLP does, where each entity has a magnitude and a score. Please share possible research papers with me. P.S. Please don't propose computing sentiment for the sentence where the entity is located and then assigning that sentence's sentiment to the entity.
I am building a content-based image retrieval system. I extract feature maps of size 1024x1x1 using a backbone, then apply PCA on the extracted features to reduce dimensions, using either nb_components=300 or nb_components=400. I achieved these performances (dim_pca means no PCA applied). Is there any explanation of why k=300 works better than k=400? If I understand correctly, k=400 is supposed to explain more variance than k=300? Is it my mistake or …
I have a CNN classification model that uses binary cross-entropy loss:

    optimizer_instance = Adam(learning_rate=learning_rate, decay=learning_rate / 200)
    model.compile(optimizer=optimizer_instance, loss='binary_crossentropy')

We are saving the best model, so the latest saved model is the one that achieved the best val_loss:

    es = EarlyStopping(monitor='val_loss', mode='min', verbose=0,
                       patience=Config.LearningParameters.Patience)
    modelPath = modelFileFolder + Config.LearningParameters.ModelFileName
    checkpoint = keras.callbacks.ModelCheckpoint(modelPath, monitor='val_loss',
                                                 save_best_only=True,
                                                 save_weights_only=False,
                                                 verbose=1)
    callbacks = [checkpoint, es]
    history = model.fit(x=training_generator,
                        batch_size=Config.LearningParameters.Batch_size,
                        epochs=Config.LearningParameters.Epochs,
                        validation_data=validation_generator,
                        callbacks=callbacks, verbose=1)

Over the course of training, the logs show that the …
I am trying to get a linestring so I can measure the distance and time. Here in this linestring I am getting NaN distance and time. Also, I'd be pleased to hear any suggestions on my code or logic. Thanks. Data: [[29.87819, 121.54944999999998], [24.23111845, 119.02311485000001], [5.402576549999999, 106.87891215000002], [1.367889, 104.27658300000002], [4.65750565, 98.40456015000001], [5.93498595, 82.50298040000001], [6.895460999999999, 75.83849285000002], [11.087761, 55.21659015], [11.986111, 50.79761100000002], [12.57124165, 44.563427950000005], [15.262399899999998, 41.828814550000004], [27.339266099999996, 34.20131845], [29.927166, 32.566855000000004], [32.36497615, 28.787162800000004], [36.25582884999999, 14.171143199999989], [37.089583, 11.039139000000006], [36.98901405, 4.773231850000002], [36.139162799999994, -4.182775300000003], [36.86918755, -8.487389949999994], [42.41353785, -9.331828900000005], [47.68888635, …
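A hedged sketch (not the asker's code) for the distance part, assuming the pairs above are [lat, lon] in degrees: compute the cumulative great-circle distance along the track, filtering NaN pairs first, since a single NaN coordinate makes the whole sum NaN.

    import numpy as np

    coords = [[29.87819, 121.54945], [24.23112, 119.02311]]  # placeholder pairs

    def haversine_km(p, q):
        # Great-circle distance between two [lat, lon] points, in km.
        lat1, lon1, lat2, lon2 = map(np.radians, (*p, *q))
        a = (np.sin((lat2 - lat1) / 2) ** 2
             + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * np.arcsin(np.sqrt(a))

    clean = [p for p in coords if not np.isnan(p).any()]  # drop NaN vertices
    total = sum(haversine_km(p, q) for p, q in zip(clean, clean[1:]))

With the total distance and an assumed speed, the travel time follows directly; NaN output almost always traces back to NaN vertices surviving into the sum.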
I have a unique website type where it takes only 10-20 seconds to make a new post. I want my WordPress site to reload the editor page automatically after a new post has been published, so I don't need to click the "Add New" button to load a blank post editor page.
I'm working my way around using BERT for coreference resolution. I'm following this highly cited paper, BERT for Coreference Resolution: Baselines and Analysis (https://arxiv.org/pdf/1908.09091.pdf). I have the following questions; the details can't be found easily in the paper, so I hope you can help me out. What's the input? Is it the antecedents plus the paragraph? What's the output? Clusters of <mention, antecedent> pairs? More importantly, what's the loss function? For comparison, in another highly cited paper by [Clark et al.] using reinforcement learning, it's very clear about …
I am new to data science and am currently trying to predict customer churn for a company that offers subscription-based bookings-management software; its customers are gyms. I have a small, unbalanced dataset of historical data (False: 670, True: 230) with 2 numerical predictors: age (days since subscription) and number of active days in the last month (days on which a customer, i.e. a gym, had bookings), and 1 categorical predictor: logo (boolean, whether a customer uploaded a logo in the software). The predictors have the following …