I have point clouds for 400 objects, and I also have some other features of these objects, for example their weights. How can I now train a model to predict the weight based on the shape (point cloud)? I have seen PointNet, but it seems it is only for classification?
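A minimal sketch of one way to do this: keep the PointNet idea (shared per-point layers plus a symmetric pooling) but end with a single regression output instead of class scores. This is not PointNet itself; `clouds` (400, 1024, 3) and `weights` (400,) are hypothetical arrays, assuming each cloud has been resampled to a fixed 1024 points.

    import numpy as np
    from tensorflow.keras import layers, models

    num_points = 1024
    model = models.Sequential([
        layers.Input(shape=(num_points, 3)),
        # Shared per-point MLP, implemented as 1x1 convolutions over the points
        layers.Conv1D(64, 1, activation="relu"),
        layers.Conv1D(128, 1, activation="relu"),
        layers.Conv1D(256, 1, activation="relu"),
        # Max pooling over points makes the model order-invariant, as in PointNet
        layers.GlobalMaxPooling1D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1),  # single continuous output: the predicted weight
    ])
    model.compile(optimizer="adam", loss="mse")
    # model.fit(clouds, weights, epochs=100, validation_split=0.2)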
I have an array with shape (55834, 250, 30) and I'd like to get an output of the same shape from this LSTM model.

    self.model = Sequential()
    self.model.add(LSTM(
        self.config.layers[0],
        input_shape=(channel.X_train.shape[1], channel.X_train.shape[2]),
        return_sequences=True))
    self.model.add(Dropout(self.config.dropout))
    self.model.add(TimeDistributed(Dense(self.config.n_predictions)))
    self.model.add(Activation('linear'))
    self.model.compile(loss=self.config.loss_metric,
                       optimizer=self.config.optimizer,
                       metrics=["mse"])
    self.model.fit(channel.X_train,
                   channel.y_train,
                   batch_size=self.config.lstm_batch_size,
                   epochs=self.config.epochs,
                   validation_split=self.config.validation_split,
                   callbacks=cbs,
                   verbose=True)

If I run it I get the error:

    ValueError: Error when checking target: expected activation_1 to have 3 dimensions, but got array with shape (55834, 30)

Why am I losing one dimension? Doesn't the …
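For reference, a standalone shape check (not the original `self.config` setup; all names below are made up): with `return_sequences=True` followed by `TimeDistributed(Dense(...))`, Keras expects a 3-D target of shape (samples, timesteps, n_predictions), so a target shaped (55834, 30) triggers exactly that error.

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense, TimeDistributed

    timesteps, features, n_predictions = 250, 30, 30
    model = Sequential([
        LSTM(64, input_shape=(timesteps, features), return_sequences=True),
        TimeDistributed(Dense(n_predictions)),
    ])
    model.compile(loss="mse", optimizer="adam")

    X = np.random.rand(8, timesteps, features)
    y_3d = np.random.rand(8, timesteps, n_predictions)  # (samples, timesteps, n_predictions)
    model.fit(X, y_3d, epochs=1, verbose=0)             # trains fine

    y_2d = np.random.rand(8, n_predictions)             # analogous to (55834, 30)
    # model.fit(X, y_2d)  # raises the same "expected ... to have 3 dimensions" ValueError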
We do have models that predict the basic color from its description; by basic color I mean red, blue, black, etc. But I would like to develop a model that can output RGB or HEX colors from a description of it. An example: "yellow that is glossy and sorta dark" should give the respective value. Another example would be "Clear green plastic". This is related to 3D modelling, where I input this text and …
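A minimal sketch of framing this as regression rather than classification, assuming a hypothetical dataset of (description, RGB) pairs with RGB scaled to [0, 1]; the two training examples below are placeholders only.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neural_network import MLPRegressor

    descriptions = ["yellow that is glossy and sorta dark", "clear green plastic"]  # placeholders
    rgb_targets = np.array([[0.72, 0.60, 0.10], [0.35, 0.70, 0.40]])                # placeholders

    vec = TfidfVectorizer()
    X = vec.fit_transform(descriptions).toarray()

    reg = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000)
    reg.fit(X, rgb_targets)

    pred = np.clip(reg.predict(vec.transform(["dark glossy yellow"]).toarray())[0], 0, 1)
    print('#%02x%02x%02x' % tuple((pred * 255).astype(int)))  # HEX from the predicted RGB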
I would like to know if there is any way we can automate 3D modelling processes. For example, if I give the model a text input such as "create a sphere and give it a red color", then we need to get the model. To be precise, I would like to create a bot that can perform actions in software such as Blender: I tell the bot what I would like to do, and then it does it. …
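As one building block, the "execute the command" half can be plain Blender scripting; the sketch below assumes the text has already been parsed into an intent and runs inside Blender's scripting tab (the bot/NLP part is not shown).

    import bpy

    def make_red_sphere(radius=1.0, location=(0.0, 0.0, 0.0)):
        # Create the sphere primitive
        bpy.ops.mesh.primitive_uv_sphere_add(radius=radius, location=location)
        obj = bpy.context.active_object
        # Create a red material and assign it to the new object
        mat = bpy.data.materials.new(name="Red")
        mat.diffuse_color = (1.0, 0.0, 0.0, 1.0)  # RGBA
        obj.data.materials.append(mat)
        return obj

    # A bot would map parsed commands ("create a sphere", "red color") to calls like this.
    make_red_sphere()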
I want to know if training the model only on images of cars will give better results in terms of the final shape details, instead of using a pre-trained model (trained on images of different objects) on a car image. Thanks.
I have an idea and am not sure how to start. I have an X-ray image of a human skull. What kind of ML algorithm would be best to recreate the posture of the original skull? Here is how I see the algorithm's steps (see the sketch after this list):

1. Have an image of the skull.
2. Have a 3D model of a skull.
3. Arrange the 3D model in space and take a virtual X-ray of it.
4. Compare the real and virtual X-rays and adjust the parameters.

I do not …
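A sketch of the compare-and-adjust loop described above (analysis by synthesis), assuming a hypothetical render_virtual_xray(pose) that projects the 3D skull model for a given pose (rotation plus translation) and returns a 2D image; everything here is illustrative, not a specific library's API.

    import numpy as np
    from scipy.optimize import minimize

    def image_difference(pose, real_xray, render_virtual_xray):
        virtual = render_virtual_xray(pose)          # pose = (rx, ry, rz, tx, ty, tz)
        return np.mean((virtual - real_xray) ** 2)   # pixel-wise discrepancy

    def fit_pose(real_xray, render_virtual_xray, initial_pose):
        # Adjust the pose parameters until the virtual X-ray matches the real one
        result = minimize(image_difference, initial_pose,
                          args=(real_xray, render_virtual_xray),
                          method="Nelder-Mead")
        return result.x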
I have been working through Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis, where the authors make the following claim (subsection 2.2.1): "...the Laplacian space is more robust to illumination changes and more indicative for face structure." What is Laplacian space in this context? How is it robust to illumination changes?
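One common reading (not necessarily the paper's exact definition) is the image after Laplacian filtering, or a band of a Laplacian pyramid: second-order differences keep edges and facial structure while discarding the slowly varying component that illumination mostly contributes to. A sketch with OpenCV, assuming a grayscale face image "face.jpg":

    import cv2
    import numpy as np

    img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # Direct Laplacian filter: responds to second-order intensity changes (edges),
    # so adding a smooth brightness gradient barely changes the output.
    lap = cv2.Laplacian(img, cv2.CV_32F, ksize=3)

    # One Laplacian-pyramid band: the image minus its blurred, re-expanded version.
    down = cv2.pyrDown(img)
    up = cv2.pyrUp(down, dstsize=(img.shape[1], img.shape[0]))
    lap_band = img - up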
I want to do something similar to a point cloud. What I am NOT trying to do is recognize objects or reconstruct them. I have a 3D file, and I want to somehow read the data to use it in a machine learning algorithm. Where should I look for info? Googling just gives a lot of examples of shape recognition and 2D-to-3D shape reconstruction. I want it the other way: from 3D to 1D data.
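One possible route from a 3D file to flat ("1D") data, assuming the file is in a mesh format the trimesh library can read (OBJ, STL, PLY, ...); "model.obj" and the 1024-point budget are arbitrary choices.

    import numpy as np
    import trimesh

    mesh = trimesh.load("model.obj")   # hypothetical file name
    points = mesh.sample(1024)         # (1024, 3) points sampled on the surface

    # Center and scale so different models are comparable, then flatten to 1-D
    points = points - points.mean(axis=0)
    points = points / np.abs(points).max()
    feature_vector = points.flatten()  # length-3072 vector for the ML algorithm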
I have a machine learning model that uses a CSV with measured data about buildings: width, length, height, etc. I use it to predict some features, and it works properly. I would like to drop the CSV with length, height and width, and instead use some kind of algorithm to parse the 3D model into the ML algorithm. The second reason is to try this approach with non-rectangular buildings, which are hard to describe in a simple CSV generated by humans. …
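A minimal sketch of computing the same kind of columns directly from a 3D model instead of a hand-made CSV, assuming each building is stored in a mesh file that trimesh can read; "building.stl" is a placeholder.

    import trimesh

    mesh = trimesh.load("building.stl")    # hypothetical file name

    width, length, height = mesh.extents   # axis-aligned bounding-box dimensions
    features = {
        "width": width,
        "length": length,
        "height": height,
        "volume": mesh.volume,             # also meaningful for non-rectangular buildings
        "surface_area": mesh.area,
    }
    # `features` can replace the CSV row that currently feeds the existing model.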
I was reading the Facebook DensePose paper and I did not understand the method of choosing sampled points for each segmented part of a person in a 2D image and getting their correspondence on the SMPL 3D model. I would like to know how they decided on the sampled points and whether they gave those points any identity for localization. Here is the link: https://arxiv.org/pdf/1802.00434.pdf
I know this must exist, but I'm having enormous trouble finding the right search terms. Say I have a bunch of labelled 3D points, and I capture multiple 2D images of them. If I want to reconstruct the 3D points, are there well-established algorithms/libraries for doing this? This is presumably the basis for 3D facial recognition, which is a well-established field of research, but the general case (i.e. non-faces) doesn't seem to have an obvious literature that I can find. …
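The usual name for this is multi-view triangulation (and, more broadly, structure from motion). A sketch of the two-view case with OpenCV, assuming calibrated cameras with known projection matrices and that the labels give the 2D correspondences; P1, P2 and the pixel coordinates below are placeholders.

    import numpy as np
    import cv2

    # 3x4 projection matrices of the two cameras (intrinsics @ [R | t]); placeholders
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

    # Matching 2-D observations of the same labelled points in each image, shape (2, N)
    pts1 = np.array([[100.0, 150.0], [120.0, 160.0]]).T
    pts2 = np.array([[ 90.0, 150.0], [110.0, 160.0]]).T

    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous coordinates, (4, N)
    pts3d = (pts4d[:3] / pts4d[3]).T                   # recovered 3-D points, (N, 3)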