There are two ways to get the output of an intermediate layer, as posted in this forum thread: https://www.cluzters.ai/forums/topic/353/keras-how-to-get-the-output-of-each-layer?c=1597 You don't need (or want) both of these; either works:
get_layer_output = K.function([model.layers[0].input], [model.layers[n].output])  # n: index of the layer you want
layer_output = get_layer_output([x])[0]  # x: a batch of inputs
or
tmp_model = Model(model.layers[0].input, model.layers[n].output)  # n: index of the layer you want
tmp_output = tmp_model.predict(x_train)
This works well for a smaller model, but I always get OOM if my model is too large. Is there an easy way to free the GPU memory of the original model before creating tmp_model or calling get_layer_output? I can save my weights to a text file and create another program in another session, but it seems like there should be an easier way.
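For what the post describes, a minimal sketch (assuming TF 2.x tf.keras; the tiny model and the layer name "hidden" are stand-ins, not the asker's actual network) of grabbing an intermediate output and then releasing the backend state held by stale models:

```python
import numpy as np
import tensorflow as tf

# Stand-in model: 4 input features, one hidden layer we want to inspect.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,), name="hidden"),
    tf.keras.layers.Dense(1),
])

# Truncated model exposing the intermediate layer's output.
tmp_model = tf.keras.Model(model.input, model.get_layer("hidden").output)
layer_out = tmp_model.predict(np.zeros((2, 4)), verbose=0)
print(layer_out.shape)  # (2, 8)

# To free graph/GPU state before building more temporary models:
# save the weights first, then reset the Keras backend.
model.save_weights("/tmp/w.weights.h5")
tf.keras.backend.clear_session()
```

Note that clear_session() invalidates the existing model objects, so the weights file is what carries state into whatever is rebuilt afterwards.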
I need to use TensorFlow for a project to classify items into one of three classes (1, 2, or 3) based on their attributes.
The only problem is that almost every TF tutorial or example I find online is about image recognition or text classification. I can't find anything about classification based on numbers. I guess what I'm asking for is where to get started. If anyone knows of a relevant example, or if I'm just thinking about this completely wrong, please let me know.
We are given the 13 attributes for each item, and need to use the TF neural network to classify each item correctly (or mark the margin of error). But nothing online is showing me even how to start with this kind of dataset.
Example of dataset: (first value is class, other values are attributes)
2, 11.84, 2.89, 2.23, 18, 112, 1.72, 1.32, 0.43, 0.95, 2.65, 0.96, 2.52, 500
3, 13.69, 3.26, 2.54, 20, 107, 1.83, 0.56, 0.5, 0.8, 5.88, 0.96, 1.82, 680
3, 13.84, 4.12, 2.38, 19.5, 89, 1.8, 0.83, 0.48, 1.56, 9.01, 0.57, 1.64, 480
2, 11.56, 2.05, 3.23, 28.5, 119, ...
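For numeric (tabular) features like these rows, the usual starting point is a small dense network, with the 1–3 labels shifted to 0–2 for sparse categorical cross-entropy. A minimal sketch, with random stand-in data in place of the real dataset:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data: 100 rows, 13 numeric attributes, classes 1-3.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 13)).astype("float32")
labels = rng.integers(1, 4, size=100) - 1  # shift classes 1..3 -> 0..2

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(13,)),
    tf.keras.layers.Dense(3, activation="softmax"),  # one probability per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(features, labels, epochs=2, verbose=0)

probs = model.predict(features, verbose=0)
print(probs.shape)  # (100, 3): per-row probabilities over the 3 classes
```

The argmax over each row of probs (plus 1, to undo the shift) gives the predicted class, and the probability itself is a rough "margin of error" signal.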
I'm trying to build a Python Lambda to send images to TensorFlow Serving for inference. I have at least two dependencies: CV2 and tensorflow_serving.apis. I've run through multiple tutorials showing it's possible to run TensorFlow in a Lambda, but they provide the package to install and don't explain how they got it to fit under the limit of 256 MB unzipped.
How to Deploy ... Lambda and TensorFlow
Using TensorFlow and the Serverless Framework...
I've tried following the official instructions for packaging, but just this command downloads 475 MB of dependencies:
$ python -m pip install tensorflow-serving-api --target .
Collecting tensorflow-serving-api
Downloading https://files.pythonhosted.org/packages/79/69/1e724c0d98f12b12f9ad583a3df7750e14ec5f06069aa4be8d75a2ab9bb8/tensorflow_serving_api-1.12.0-py2.py3-none-any.whl
...
$ du -hs .
475M .
I see that others have fought this dragon and won (1) (2) by doing contortions to rip out all unused libraries from all dependencies, or by compiling from scratch. But ...
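Before attempting that kind of surgery, it helps to see where the 475 MB actually goes. A runnable sketch of the measurement (done here on a throwaway directory so the commands execute anywhere; in the real `--target` directory it is simply `du -hs ./*/ | sort -h`):

```shell
# Stand-in for the pip --target directory, with fake package folders.
pkgdir=$(mktemp -d)
mkdir -p "$pkgdir/tensorflow" "$pkgdir/grpc"
printf 'stub' > "$pkgdir/tensorflow/big.so"

# Per-dependency disk usage, smallest to largest.
sizes=$(du -s "$pkgdir"/*/ | sort -n)
echo "$sizes"
```

A commonly cited workaround (an assumption about your setup, not a guarantee): `pip install --no-deps tensorflow-serving-api --target .` skips the heavy transitive tensorflow dependency, after which only the grpcio/protobuf pieces the client actually needs are added back by hand.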
I am really new to Data Science/ML and have been working with TensorFlow to implement linear regression on the California Housing Prices dataset from Kaggle.
I tried to train a model in two different ways:
Using a Sequential model
Custom implementation
In both cases, the loss of the model was really high, and I have not been able to understand how to improve it.
Dataset prep
df = pd.read_csv('california-housing-prices.zip')
df = df
print('Shape of dataset before removing NAs and duplicates {}'.format(df.shape))
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
input_train, input_test, target_train, target_test = train_test_split(df.values, df.values, test_size=0.2)
scaler = MinMaxScaler()
input_train = input_train.reshape(-1,1)
input_test = input_test.reshape(-1,1)
input_train = scaler.fit_transform(input_train)
input_test = scaler.fit_transform(input_test)
target_train = target_train.reshape(-1,1)
target_train = scaler.fit_transform(target_train)
target_test = ...
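One likely contributor to the high loss in the snippet above: fit_transform is called on the test set too, and train_test_split is given the whole df.values array for both inputs and targets. The usual convention is to fit the scaler on the training split only and reuse those statistics for the test split. A small numpy sketch of that idea (mirroring what MinMaxScaler does internally):

```python
import numpy as np

def minmax_fit(train):
    """Learn per-column min/max from the training split only."""
    return train.min(axis=0), train.max(axis=0)

def minmax_transform(x, lo, hi):
    """Scale columns to [0, 1] using statistics learned on the training split."""
    return (x - lo) / (hi - lo)

train = np.array([[0.0, 10.0], [2.0, 30.0], [4.0, 20.0]])
test = np.array([[1.0, 20.0]])

lo, hi = minmax_fit(train)          # fit on train only
scaled_test = minmax_transform(test, lo, hi)
print(scaled_test)  # [[0.25 0.5 ]]
```

With sklearn the same convention is scaler.fit_transform(input_train) followed by scaler.transform(input_test), never a second fit on the test data.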
Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/nickopotamus/.local/share/r-miniconda/envs/r-reticulate/lib:/usr/lib/R/lib::/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/default-java/lib/server
sudo find / -name 'libcudart.so.11.0' finds the file in:
/home/nickopotamus/.local/share/r-miniconda/envs/r-reticulate/lib/libcudart.so.11.0
/home/nickopotamus/anaconda3/pkgs/cudatoolkit-11.3.1-h2bc3f7f_2/lib/libcudart.so.11.0
/home/nickopotamus/anaconda3/pkgs/cudatoolkit-11.2.0-h73cb219_8/lib/libcudart.so.11.0
/home/nickopotamus/anaconda3/pkgs/cudatoolkit-11.2.72-h2bc3f7f_0/lib/libcudart.so.11.0
/usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudart.so.11.0
The top entry at least appears to be in the path that the error is searching, so I'm at a bit of a loss as to what to try next. Is it a conflict with the other anaconda packages (which I can't seem to remove), or am I simply being...
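If the problem is only loader search order, one thing worth trying (the path is taken from the find output above; note that env-var tweaks like this can also mask a deeper conda conflict) is prepending a directory that definitely contains the library:

```shell
# Prepend the system CUDA 11.2 runtime directory (found above) so the dynamic
# loader can resolve libcudart.so.11.0 for this shell session.
export LD_LIBRARY_PATH=/usr/local/cuda-11.2/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
```

If that makes the error go away, the fix can be made permanent in ~/.bashrc or in the conda environment's activation scripts.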
I copied and pasted TensorFlow's official "Basic classification: Classify images of clothing" code (https://www.tensorflow.org/tutorials/keras/classification):
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
and ran it. Upon running, it printed a load of gibberish and wouldn't stop (almost like when you accidentally put a print in a while loop):
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
so I terminated it. The above is just a VERY small portion of what printed. I ran it again, only to get an error straight away.
line 7, in <module>
(train_images, train_labels), (test_images, ...
I have two questions:
(1) How to import some subpackages inside tensorflow.keras.
(2) How to differentiate between packages installed by 'pip install' and by 'conda install' (under Windows).
I am using anaconda with tensorflow 2.0.0. I am trying to import package like:
import tensorflow.keras.utils.np_utils
However, the error shown that:
---------------------------------------------------------------------------
> ModuleNotFoundError Traceback (most recent call
> last) <ipython-input-2-ee1bc59a14ab> in <module>
> ----> 1 import tensorflow.keras.utils.np_utils
>
> ModuleNotFoundError: No module named 'tensorflow.keras.utils.np_utils'
I am confused about why this is happening, I install the tensorflow with command:
conda install tensorflow==2.0.0
from Anaconda prompt.
Yes, I know Anaconda should already have all the data science packages inside it; the reason I uninstalled the TensorFlow provided by Anaconda and reinstalled it was that, before using ...
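For debugging ModuleNotFoundError cases like this, it can help to check whether a module path is resolvable at all before importing it. A small stdlib sketch (note that in TF 2.x, np_utils is not a submodule; helpers such as to_categorical live directly under tensorflow.keras.utils, so the dotted path itself is the problem here):

```python
import importlib.util

def importable(modname):
    """Return True if modname can be resolved, without executing the import."""
    try:
        return importlib.util.find_spec(modname) is not None
    except ModuleNotFoundError:
        # Raised when a parent package in a dotted name is missing.
        return False

print(importable("json"))                # True
print(importable("no_such_module_xyz"))  # False
```

Running importable("tensorflow.keras.utils") vs. importable("tensorflow.keras.utils.np_utils") in the failing environment would show exactly where the path stops resolving.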
I'm new to the ML scene and I want to create a PhoneGap app involving TensorFlow, but I'm unsure where to start or if this is even possible. Can anyone give me a hand (probably by linking me to some resources)? My app will just use TensorFlow image recognition (probably pre-trained).
Thanks, Felix. (This is a repost of the same question in the data science category, which failed to garner a response.)
In the TensorFlow "ML Basics with Keras" tutorial for making a basic text classification, when preparing the trained model for export, the tutorial suggests including the TextVectorization layer in the model so it can "process raw strings". I understand why to do this.
But then the code snippet is:
export_model = tf.keras.Sequential([
  vectorize_layer,
  model,
  layers.Activation('sigmoid')
])
Why, when preparing the model for export, does the tutorial also include a new activation layer, layers.Activation('sigmoid')? Why not incorporate this layer into the original model?
From C:\Anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\python\platform\app.py:125: main (from __main__) is deprecated and will be removed in a future version.
Instructions for updating:
Use object_detection/model_main.py.
Traceback (most recent call last):
File "train.py", line 184, in <module>
tf.app.run()
File "C:\Anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
_sys.exit(main(argv))
File "C:\Anaconda3\envs\tensorflow_cpu\lib\site-packages\tensorflow\python\util\deprecation.py", line 306, in new_func
return func(*args, **kwargs)
File "train.py", line 180, in main
graph_hook_fn=graph_rewriter_fn)
File "C:\Users\arfan\Documents\TensorFlow\models\research\object_detection\legacy\trainer.py", line 248, in train
detection_model = create_model_fn()
File "C:\Users\arfan\Documents\TensorFlow\models\research\object_detection\builders\model_builder.py", line 122, in build
raise ValueError('Unknown meta ...
I am trying to do some deep learning work. For this, I first installed all the packages for deep learning in my Python environment. Here is what I did. In Anaconda, I created an environment called tensorflow as follows:
conda create -n tensorflow
Then installed the data science Python packages, like Pandas, NumPy, etc., inside it. I also installed TensorFlow and Keras there. Here is the list of packages in that environment
(tensorflow) SFOM00618927A:dl i854319$ conda list
# packages in environment at /Users/i854319/anaconda/envs/tensorflow:
#
appdirs 1.4.3 <pip>
appnope 0.1.0 py36_0
beautifulsoup4 4.5.3 py36_0
bleach 1.5.0 py36_0
cycler 0.10.0 py36_0
decorator 4.0.11 py36_0
entrypoints 0.2.2 py36_1
freetype 2.5.5 ...
I'm new to TensorFlow and Data Science. I made a simple model that should figure out the relationship between input and output numbers, in this case x and x squared. The code in Python:
import numpy as np
import tensorflow as tf
# TensorFlow only log error messages.
tf.logging.set_verbosity(tf.logging.ERROR)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(loss = "mean_squared_error", optimizer = tf.keras.optimizers.Adam(0.0001))
model.fit(features, labels, epochs = 50000, verbose = False)
print(model.predict())
I tried a different number of units, and adding more layers, and even using the relu activation function, but the results were always wrong. It works with other relationships like x and 2x. What is the problem here?
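The core issue can be seen without TensorFlow at all: a single Dense(1) layer computes an affine function a*x + b, and no affine function can approximate x squared over an interval, whereas x and 2x is itself affine. A small numpy sketch comparing the best possible straight line with a quadratic fit:

```python
import numpy as np

x = np.linspace(-3, 3, 61)
y = x ** 2

# Best possible straight line: exactly what a single Dense(1) unit can represent.
lin = np.polyval(np.polyfit(x, y, 1), x)
# A quadratic model fits the data essentially exactly.
quad = np.polyval(np.polyfit(x, y, 2), x)

print(np.abs(y - lin).max() > 1.0)    # True: large irreducible error
print(np.abs(y - quad).max() < 1e-8)  # True: near-zero error
```

So hidden nonlinear layers are necessary here, but not sufficient on their own: the inputs also need a sensible range and the model enough units and training for the curve to be learned.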
I recently started studying deep learning and other ML techniques, and I started searching for frameworks that simplify the process of building a net and training it. Then I found TensorFlow. Having little experience in the field, it seems to me that speed is a big factor for a big ML system, even more so when working with deep learning. So why was Python chosen by Google to build TensorFlow? Wouldn't it be better to use a language that is compiled rather than interpreted?
What are the advantages of using Python over a language like C++ for machine learning?
I have a Django form which collects user responses. I also have a TensorFlow sentence-classification model. What is the best/standard way to put these two together? Details:
The TensorFlow model was trained on the Movie Review data from Rotten Tomatoes.
Every time a new row is created in my response model, I want the TensorFlow code to classify it (+ or -).
Basically, I have a Django project directory and two .py files for classification. Before going ahead myself, I wanted to know the standard way to integrate machine learning algorithms into a web app.
It'd be awesome if you could suggest a tutorial or a repo. Thank you!
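A common pattern for this kind of setup (a framework-agnostic sketch; the names and the toy classifier are made up): load the model once per process and reuse it for every new row, rather than reloading it per request. In Django, the classify call would typically live in a post_save signal handler or in the view that creates the response row.

```python
import functools

@functools.lru_cache(maxsize=1)
def get_classifier():
    """Load the (expensive) model exactly once per process.

    Stand-in for e.g. loading a saved TensorFlow model from disk."""
    return lambda text: "+" if "good" in text else "-"

def classify_response(text):
    """Called whenever a new response row is created."""
    return get_classifier()(text)

print(classify_response("good movie"))  # +
print(classify_response("bad movie"))   # -
```

For heavier traffic, the same idea is usually pushed out of the web process entirely: the saved model sits behind a small inference service (e.g. TensorFlow Serving), and Django just makes a request to it.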
What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool in TensorFlow?
In my opinion, 'VALID' means there will be no zero padding outside the edges when we do max pooling.
According to A guide to convolution arithmetic for deep learning, there will be no padding in the pooling operator, i.e. just use 'VALID' in TensorFlow. But what is 'SAME' padding for max pooling in TensorFlow?
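The output sizes make the difference concrete. For input length n, window k, and stride s, 'VALID' uses only windows fully contained in the input, giving out = ceil((n - k + 1) / s), while 'SAME' pads with zeros (as evenly as possible on both sides) so that out = ceil(n / s). In Python:

```python
import math

def out_valid(n, k, s):
    """Output length with VALID padding: only fully contained windows."""
    return math.ceil((n - k + 1) / s)

def out_same(n, k, s):
    """Output length with SAME padding: input padded so every position is covered."""
    return math.ceil(n / s)

print(out_valid(5, 2, 2))  # 2: the last column of a length-5 input is dropped
print(out_same(5, 2, 2))   # 3: one zero is padded so the last column is kept
```

So with stride 1, 'SAME' keeps the output the same length as the input (hence the name), while 'VALID' shrinks it by k - 1.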
We have been training a neural net on the AI Engine with a dataset consisting of 96,000,000 data points. The neural net was trained in a distributed manner, and as is customary we used 20% of the dataset as evaluation data. In order to train distributed, we used TensorFlow Estimators and the method tf.estimator.train_and_evaluate. Since our dataset is very large, our evaluation set is also quite large. Looking into the CPU usage of the master vs. the worker nodes, and testing with an evaluation dataset of only 100 samples, it appears that the evaluation is not distributed and happens only on the master node. This makes the amount of ML units consumed increase by a factor of approximately 5 between having the standard-size evaluation data (20% of the total) and having only 100 data points for evaluation, while the amount of training data is the same.
We see two possible solutions to this problem: doing the evaluation distributed as well, but is that technically possible on the AI ...
This question already has answers here:
How to save/restore a model after training? (26 answers)
Closed 3 years ago.
I'm relatively new to machine learning and the Tensorflow framework. I was trying to take my trained model heavily influenced by the code presented here, using the MNIST handwritten digit dataset and perform inferences on testing examples that I have created. However, I am doing the training on a remote machine with a GPU and am trying to save the data to a directory so that I can transfer the data and inference on a local machine
It seems that I was able to save some of the model with tf.saved_model.simple_save; however, I'm unsure how to use the saved data to run inference and make a prediction given a new image. There seem to be multiple ways to save a model, but I am unsure what the convention, or the "correct way", is to do it with the TensorFlow framework.
So far, this is the line that I think I would need, but am unsure if it is ...
I am trying to follow this tutorial: https://medium.com/@natu.neeraj/training-a-keras-model-on-google-cloud-ml-cb831341c196 to upload and train a Keras model on Google Cloud Platform, but I can't get it to work. Right now I have downloaded the package from GitHub, and I have created a cloud environment with AI Platform and a bucket for storage. I am uploading the files (with the suggested folder structure) to my Cloud Storage bucket (basically to the root of my storage), and then trying the following command in the cloud terminal:
gcloud ai-platform jobs submit training JOB1 \
  --module-name=trainer.cnn_with_keras \
  --package-path=./trainer \
  --job-dir=gs://mykerasstorage \
  --region=europe-north1 \
  --config=gs://mykerasstorage/trainer/cloudml-gpu.yaml
But I get errors. First, the cloudml-gpu.yaml file can't be found ("no such folder or file"), and when I just remove it, I get errors saying the __init__.py file is missing, but it isn't, even though it is empty (which it ...
I am trying to install TensorFlow and Keras. I installed TensorFlow and imported it with no errors. Keras is installed, but I can't import it.
(base) C:\Windows\system32>pip uninstall keras
Found existing installation: Keras 2.3.1
Uninstalling Keras-2.3.1:
Would remove:
c:\users\asus\anaconda3\anaconda\lib\site-packages\docs\*
c:\users\asus\anaconda3\anaconda\lib\site-packages\keras-2.3.1.dist-info\*
c:\users\asus\anaconda3\anaconda\lib\site-packages\keras\*
Proceed (y/n)? y
Successfully uninstalled Keras-2.3.1
(base) C:\Windows\system32>pip install keras
Collecting keras
Using cached Keras-2.3.1-py2.py3-none-any.whl (377 kB)
Requirement already satisfied: six>=1.9.0 in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) (1.14.0)
Requirement already satisfied: numpy>=1.9.1 in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) (1.18.4)
Requirement already satisfied: keras-applications>=1.0.6 in c:\users\asus\anaconda3\anaconda\lib\site-packages (from keras) ...
I have a TensorFlow model that I built (a 1D CNN) that I would now like to implement in .NET. In order to do so, I need to know the input and output nodes. When I uploaded the model to Netron, I got a different graph depending on my save method, and the only one that looks correct comes from an h5 upload. Here is the model.summary():
If I save the model as an h5 model.save("Mn_pb_model.h5") and load that into the Netron to graph it, everything looks correct:
However, ML.NET will not accept the h5 format, so the model needs to be saved as a .pb. Looking through samples of adopting TensorFlow in ML.NET, this sample shows a TensorFlow model saved in a format similar to the SavedModel format recommended by TensorFlow (and also recommended by ML.NET here: "Download an unfrozen ..."). However, when saving and loading the pb file into Netron I get this:
And zoomed in a little further (on the far right side),
As you can see, it looks nothing like it should. Additionally, the input nodes and output nodes are ...
TensorFlow always (pre-)allocates all free memory (VRAM) on my graphics card, which is OK since I want my simulations to run as fast as possible on my workstation.
However, I would like to log how much memory (in sum) TensorFlow really uses. Additionally it would be really nice, if I could also log how much memory single tensors use.
This information is important to measure and compare the memory size that different ML/AI architectures need.
Any tips?
I am using PixelLib for training custom image instance segmentation. I have created a dataset, which can be seen at the link below. Dataset: https://drive.google.com/drive/folders/1MjpDNZtzGRNxEtCDcTmrjUuB1ics_3Jk?usp=sharing The code which I used to make a custom model is:
import pixellib
from pixellib.custom_train import instance_custom_training