There are two ways to get the output of an intermediate layer, as posted here on this forum: https://www.cluzters.ai/forums/topic/353/keras-how-to-get-the-output-of-each-layer?c=1597 You don't need (or want) both of these; either works:
get_layer_output = K.function([model.layers[0].input], [model.layers[idx].output])  # idx = index of the layer you want (lost in formatting)
layer_output = get_layer_output([x])[0]  # x = a batch of inputs
or
tmp_model = Model(model.layers[0].input, model.layers[idx].output)  # idx as above
tmp_output = tmp_model.predict(x_train)
This works well for a smaller model, but I always get OOM if my model is too large. Is there an easy way to free the GPU memory of the original model before creating tmp_model or calling get_layer_output? I can save my weights to a text file and create another program in another session, but it seems like there should be an easier way.
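For reference, a minimal sketch of the save-weights / clear-session / rebuild workaround described above (the helper build_model_up_to and the weight-file name are hypothetical placeholders; this assumes the truncated model can be rebuilt from the same definition code):

model.save_weights('full_model_weights.h5')    # persist the weights first
del model
K.clear_session()                              # release the GPU memory held by the old graph
tmp_model = build_model_up_to(idx)             # hypothetical helper rebuilding the truncated architecture
tmp_model.load_weights('full_model_weights.h5', by_name=True)
tmp_output = tmp_model.predict(x_train)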
I am really new to Data Science/ML and have been working with TensorFlow to implement Linear Regression on the California Housing Prices dataset from Kaggle.
I tried to train a model in two different ways:
Using a Sequential model
Custom implementation
In both cases, the loss of the model was really high, and I have not been able to figure out how to improve it.
Dataset prep
df = pd.read_csv('california-housing-prices.zip')
print('Shape of dataset before removing NAs and duplicates: {}'.format(df.shape))
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
# The column selection was lost in formatting; a single input feature and a
# single target column are assumed here
input_train, input_test, target_train, target_test = train_test_split(
    df['median_income'].values, df['median_house_value'].values, test_size=0.2)
scaler = MinMaxScaler()
input_train = input_train.reshape(-1, 1)
input_test = input_test.reshape(-1, 1)
input_train = scaler.fit_transform(input_train)
input_test = scaler.transform(input_test)  # transform, not fit_transform, on test data
# A separate scaler for the target avoids silently refitting the input scaler
target_scaler = MinMaxScaler()
target_train = target_train.reshape(-1, 1)
target_train = target_scaler.fit_transform(target_train)
target_test = ...
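The model code itself was cut off above; for reference, a minimal Sequential linear-regression setup over the scaled single-feature data might look like this (optimizer, learning rate, and epoch count are assumptions, not the code from the original post):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(1,))  # one weight + one bias = linear regression
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='mse')
model.fit(input_train, target_train, epochs=100,
          validation_data=(input_test, target_test))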
I trained a Quora question pair detection model with an LSTM, but the training accuracy is very low and changes every time I train. I don't understand what mistake I made.
I tried changing the loss and optimizer and increasing the number of epochs.
import numpy as np
from numpy import array
from keras.callbacks import ModelCheckpoint
import keras
from keras.optimizers import SGD
import tensorflow as tf
from sklearn import preprocessing
import xgboost as xgb
from keras import backend as K
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from keras.preprocessing.text import Tokenizer, text_to_word_sequence
from keras.preprocessing.sequence import pad_sequences
from keras.layers.embeddings import Embedding
from keras.models import Sequential, model_from_json, load_model
from keras.layers import LSTM, Dense, Input, concatenate, Concatenate, Activation, Flatten
from keras.models import Model
from sklearn.model_selection import train_test_split
from ...
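The model definition was cut off above; for context, a minimal two-branch sketch of the kind of siamese LSTM this question is about might look like the following, using the imports already listed (vocabulary size, sequence length, and unit counts are placeholder assumptions):

max_len, vocab_size, embed_dim = 30, 20000, 100    # placeholder sizes
q1_in = Input(shape=(max_len,))
q2_in = Input(shape=(max_len,))
embed = Embedding(vocab_size, embed_dim)           # shared embedding for both questions
encoder = LSTM(64)                                 # shared encoder
q1_vec = encoder(embed(q1_in))
q2_vec = encoder(embed(q2_in))
merged = concatenate([q1_vec, q2_vec])
out = Dense(1, activation='sigmoid')(merged)       # 1 = duplicate pair, 0 = not
model = Model([q1_in, q2_in], out)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])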
I'm trying to train a word embedding classifier using TF 2.4 with Keras and tf.nn.sampled_softmax_loss. However, when calling the model's fit method, a "Cannot convert a symbolic Keras input/output to a numpy array" TypeError occurs. Please help me fix the error, or suggest an alternative approach to candidate sampling.
import tensorflow as tf
import numpy as np
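The rest of the code was cut off. As one possible direction (a sketch, not a confirmed fix): because tf.nn.sampled_softmax_loss needs the integer labels inside the loss computation, it can be moved into a custom train_step instead of being wired through symbolic Keras tensors. All sizes below are placeholders:

vocab_size, embed_dim, num_sampled = 10000, 128, 64   # placeholder sizes

class SampledSoftmaxModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocab_size, embed_dim)
        # Full-vocabulary projection used only inside the sampled loss
        self.softmax_w = self.add_weight(name='softmax_w', shape=(vocab_size, embed_dim))
        self.softmax_b = self.add_weight(name='softmax_b', shape=(vocab_size,),
                                         initializer='zeros')

    def call(self, inputs):
        return self.embedding(inputs)                  # (batch,) -> (batch, embed_dim)

    def train_step(self, data):
        x, y = data                                    # y: (batch, 1) integer class ids
        with tf.GradientTape() as tape:
            hidden = self(x, training=True)
            loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(
                weights=self.softmax_w, biases=self.softmax_b,
                labels=y, inputs=hidden,
                num_sampled=num_sampled, num_classes=vocab_size))
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {'loss': loss}

model = SampledSoftmaxModel()
model.compile(optimizer='adam')   # no compiled loss needed; train_step computes it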
I have two models in ONNX format. Both models are similar (both are pre-trained deep learning models, e.g. ResNet50); the only difference between them is that the last layers are optimized/retrained for different data sets.
I want to merge the first k layers of these two models so that the shared prefix only runs once. This should enhance inference performance.
To make my case clearer: other machine learning tools (e.g. PyTorch, Keras) have examples of implementing this feature.
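A possible starting point (a sketch, assuming the boundary tensor after the first k layers has a known name in both graphs; all file and tensor names below are hypothetical):

import onnx
from onnx import compose
from onnx.utils import extract_model

# Cut model A at the boundary tensor into a shared prefix and a head,
# and cut only the head out of model B
extract_model('model_a.onnx', 'prefix.onnx', ['input'], ['boundary'])
extract_model('model_a.onnx', 'head_a.onnx', ['boundary'], ['output_a'])
extract_model('model_b.onnx', 'head_b.onnx', ['boundary'], ['output_b'])

# Reattach head A to the shared prefix; head B would attach the same way,
# which requires keeping 'boundary' exposed as an output of the merged graph
prefix = onnx.load('prefix.onnx')
head_a = onnx.load('head_a.onnx')
merged = compose.merge_models(prefix, head_a, io_map=[('boundary', 'boundary')])
onnx.save(merged, 'merged.onnx')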
I tried to install TensorFlow and Keras. I installed TensorFlow and imported it with no errors. Keras is installed, but I can't import it.
(base) C:\Windows\system32>pip uninstall keras
Found existing installation: Keras 2.3.1
Uninstalling Keras-2.3.1:
Would remove:
c:\users\asus\anaconda3\anaconda\lib\site-packages\docs\*
c:\users\asus\anaconda3\anaconda\lib\site-packages\keras-2.3.1.dist-info\*
c:\users\asus\anaconda3\anaconda\lib\site-packages\keras\*
Proceed (y/n)? y
Successfully uninstalled Keras-2.3.1
(base) C:\Windows\system32>pip install keras
Collecting keras
Using cached Keras-2.3.1-py2.py3-none-any.whl (377 kB)
Requirement already satisfied: six>=1.9.0 in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) (1.14.0)
Requirement already satisfied: numpy>=1.9.1 in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) (1.18.4)
Requirement already satisfied: keras-applications>=1.0.6 in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) ...
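One thing worth checking (a suggestion, not part of the original log): with Anaconda it is easy for pip and the running interpreter to disagree, so printing the interpreter's path and installing via that interpreter can rule this out:

(base) C:\Windows\system32>python -c "import sys; print(sys.executable)"
(base) C:\Windows\system32>python -m pip install keras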
I am using PixelLib for training custom image instance segmentation. I have created a dataset, which can be seen at the link below. Dataset: https://drive.google.com/drive/folders/1MjpDNZtzGRNxEtCDcTmrjUuB1ics_3Jk?usp=sharing
The code which I used to make a custom model is:
import pixellib
from pixellib.custom_train import instance_custom_training
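The rest of the code was cut off; for context, the usual PixelLib custom-training calls look roughly like this (backbone, class count, dataset folder, and weight-file names are placeholders, not the values from the original post):

train_maskrcnn = instance_custom_training()
train_maskrcnn.modelConfig(network_backbone='resnet101', num_classes=2, batch_size=4)
train_maskrcnn.load_pretrained_model('mask_rcnn_coco.h5')
train_maskrcnn.load_dataset('dataset_folder')
train_maskrcnn.train_model(num_epochs=300, augmentation=True,
                           path_trained_models='mask_rcnn_models')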
I would like to know how to apply gradient clipping on this network, on the RNN, where there is a possibility of exploding gradients.
tf.clip_by_value(t, clip_value_min, clip_value_max, name=None)
This is an example that could be used, but where do I introduce it? In the definition of the RNN?
lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(0, n_steps, _X) # n_steps
tf.clip_by_value(_X, -1, 1, name=None)
But this doesn't make sense, as the tensor _X is the input, not the gradient, which is what should be clipped.
Do I have to define my own Optimizer for this, or is there a simpler option?
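For reference, the commonly suggested pattern (a sketch in the same TF1-style API as the code above; the optimizer choice and clip range are placeholders) clips the gradients between compute_gradients and apply_gradients rather than clipping the inputs:

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cost)
clipped = [(tf.clip_by_value(g, -1.0, 1.0), v)          # clip each gradient, keep its variable
           for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(clipped)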
For example, I have a 1D vector with dimension (5). I would like to reshape it into a 2D matrix (1, 5).
Here is how I do it with numpy:
>>> import numpy as np
>>> a = np.array([1, 2, 3, 4, 5])
>>> a.shape
(5,)
>>> a = np.reshape(a, (1,5))
>>> a.shape
(1, 5)
>>> a
array([[1, 2, 3, 4, 5]])
>>>
But how can I do that with a PyTorch Tensor (and Variable)? I don't want to switch back to numpy and then to a Torch Variable again, because that will lose the backpropagation information.
Here is what I have in Pytorch
>>> import torch
>>> from torch.autograd import Variable
>>> a = torch.Tensor([1, 2, 3, 4, 5])
>>> a
1
2
3
4
5
>>> a.size()
(5L,)
>>> a_var = Variable(a)
>>> a_var.size()
(5L,)
..... (do some calculation in the forward function)
>>> a_var.size()
(5L,)
Now I want its size to be (1, 5). How can I resize or reshape the dimension of a PyTorch tensor in a Variable without losing the grad information? (Because I will feed it into another model before ...
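For reference, a sketch of the reshaping calls in question (both operate directly on the Variable without detaching it from autograd):

>>> a_var.view(1, 5).size()       # reshape (5,) -> (1, 5)
(1L, 5L)
>>> a_var.unsqueeze(0).size()     # alternative: insert a new dim at position 0
(1L, 5L)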
I am trying an Op that is not behaving as expected.
graph = tf.Graph()
with graph.as_default():
    train_dataset = tf.placeholder(tf.int32, shape=[batch_size])  # original shape elided; assuming a 1-D batch of ids
    embeddings = tf.Variable(
        tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
    embed = tf.nn.embedding_lookup(embeddings, train_dataset)
    embed = tf.reduce_sum(embed, reduction_indices=0)
So I need to know the dimensions of the Tensor embed. I know that it can be done at run time, but it's too much work for such a simple operation. What's the easier way to do it?
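For reference, the static shape is available at graph-construction time without running anything (a small sketch using the graph above):

print(embed.get_shape())            # static shape, known at graph-construction time
print(embed.get_shape().as_list())  # the same shape as a plain Python list
dynamic_shape = tf.shape(embed)     # dynamic shape, a Tensor evaluated at run time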
I found that in much of the available neural network code implemented using TensorFlow, regularization terms are often implemented by manually adding an additional term to the loss value. My questions are:
1. Is there a more elegant or recommended way of regularization than doing it manually?
2. I also find that get_variable has a regularizer argument. How should it be used? According to my observation, if we pass a regularizer to it (such as tf.contrib.layers.l2_regularizer), a tensor representing the regularization term will be computed and added to a graph collection named tf.GraphKeys.REGULARIZATION_LOSSES. Will that collection be automatically used by TensorFlow (e.g. by optimizers when training)? Or am I expected to use that collection myself?
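For reference, the manual pattern the second question refers to would look roughly like this (base_loss is a placeholder for whatever loss the model already computes; plain TF1 optimizers do not read this collection automatically):

reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
total_loss = base_loss + tf.add_n(reg_losses)   # fold the collected terms into the loss yourself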
I have trained a binary classification model with a CNN, and here is my code:
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size, kernel_size,
border_mode='valid',
input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size, kernel_size))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
# (16, 16, 32)
model.add(Convolution2D(nb_filters*2, kernel_size, kernel_size))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters*2, kernel_size, kernel_size))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
# (8, 8, 64) = (2048)
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2)) # define a binary classification problem
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
              metrics=['accuracy'])
model.fit(x_train, ...
tf.nn.embedding_lookup(params, ids, partition_strategy='mod', name=None)
I cannot understand the duty of this function. Is it like a lookup table, i.e. does it return the parameters corresponding to each id (in ids)? For instance, in the skip-gram model, if we use tf.nn.embedding_lookup(embeddings, train_inputs), does it find the corresponding embedding for each train_input?
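For reference, a tiny sketch of the behavior in question (the constants are made up for illustration):

params = tf.constant([[0.0, 0.1],
                      [1.0, 1.1],
                      [2.0, 2.1]])
ids = tf.constant([2, 0])
looked_up = tf.nn.embedding_lookup(params, ids)
# looked_up evaluates to [[2.0, 2.1], [0.0, 0.1]] -- rows of params selected by ids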
What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of TensorFlow? In my opinion, 'VALID' means there will be no zero padding outside the edges when we do max pooling. According to A Guide to Convolution Arithmetic for Deep Learning, there is no padding in the pool operator, i.e. just use 'VALID' in TensorFlow. But what is 'SAME' padding of max pool in TensorFlow?
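For reference, a small sketch comparing the output sizes of the two modes (the input size and window are made up for illustration):

x = tf.ones([1, 5, 5, 1])                       # batch, height, width, channels
valid = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
same = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# VALID: floor((5 - 2) / 2) + 1 = 2  -> shape (1, 2, 2, 1), trailing edge dropped
# SAME:  ceil(5 / 2) = 3             -> shape (1, 3, 3, 1), padded on the right/bottom as needed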
When I train my neural network with Theano or TensorFlow, they report a variable called "loss" per epoch.
How should I interpret this variable? Is higher loss better or worse, and what does it mean for the final performance (accuracy) of my neural network?