I'm looking for information on how a Python machine learning project should be organized. For regular Python projects there is Cookiecutter, and for R there is ProjectTemplate.
This is my current folder structure, but I'm mixing Jupyter notebooks with actual Python code and it doesn't seem very clear.
.
├── cache
├── data
├── my_module
├── logs
├── notebooks
├── scripts
├── snippets
└── tools
I work in the scripts folder and currently add all my functions to files under my_module, but that leads to errors loading data (relative/absolute path issues) and other problems.
I could not find proper best practices or good examples on this topic, besides a Kaggle competition solution and some notebooks that have all the functions condensed at the start of the notebook.
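One common fix for the relative/absolute path errors mentioned above is to anchor data paths at the project root instead of the current working directory. A minimal sketch (the helper name and the assumption that the module lives one level below the root are mine, not from any template):

```python
from pathlib import Path

def data_path(module_file, *parts):
    # Resolve paths from the project root (the parent of my_module/),
    # so loading data works no matter where the script is launched from.
    root = Path(module_file).resolve().parent.parent
    return root.joinpath("data", *parts)
```

Inside a module you would call it as `data_path(__file__, "raw.csv")`, which keeps notebooks and scripts pointing at the same data folder.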
I have two models that are in ONNX format. Both models are similar (both are pre-trained deep learning models, e.g. ResNet50 models). The only difference between them is that the last layers are optimized/retrained for different data sets.
I want to merge the first k layers of these two models, as shown below. This should enhance inference performance.
To make my case clearer, here are examples from other machine learning tools that implement this feature, e.g. PyTorch and Keras.
I recently started studying deep learning and other ML techniques, and I started searching for frameworks that simplify the process of building a net and training it; then I found TensorFlow. Having little experience in the field, it seems to me that speed is a big factor in building a big ML system, even more so when working with deep learning. So why was Python chosen by Google to build TensorFlow? Wouldn't it be better to build it on a language that can be compiled rather than interpreted?
What are the advantages of using Python over a language like C++ for machine learning?
In every instance in all of my classes where I reference R.id.something, the R is in red and it says "cannot resolve symbol R". Also, every time there is R.layout.something it is underlined in red and says "cannot resolve method setContentView(?)". The project always builds fine. It is annoying to see this all the time. I have read many other questions on here about something similar, but most involved importing projects from Eclipse. I am using what I believe to be the most recent version of Android Studio, and the project was created with Android Studio and worked without any "cannot resolve R" problems. I would like to know what causes this, if anyone knows.
I am confused about the method view() in the following code snippet.
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
My confusion is regarding the following line.
x = x.view(-1, 16*5*5)
What does the tensor.view() function do? I have seen its usage in many places, but I can't understand how it interprets its parameters. What happens if I give negative values as parameters to the view() function? For example, what happens if I call tensor_variable.view(1, 1, -1)?
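For what it's worth, view(-1, n) asks PyTorch to infer the flagged dimension so that the total number of elements is preserved. A pure-Python sketch of that inference rule (infer_shape is a hypothetical helper for illustration, not a PyTorch function):

```python
def infer_shape(numel, shape):
    # Replace a single -1 in `shape` with whatever value keeps
    # the total element count equal to `numel`.
    known = 1
    for d in shape:
        if d != -1:
            known *= d
    return tuple(numel // known if d == -1 else d for d in shape)

# A batch of 4 conv outputs of size 16*5*5, flattened for the linear layer:
infer_shape(4 * 16 * 5 * 5, (-1, 16 * 5 * 5))  # (4, 400)
```

Under the same rule, view(1, 1, -1) on a 6-element tensor yields shape (1, 1, 6).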
What are the differences between all these cross-entropy losses?
Keras is talking about
Softmax cross-entropy with logits
Sparse softmax cross-entropy with logits
Sigmoid cross-entropy with logits
What are the differences and relationships between them? What are the typical applications for them? What's the mathematical background? Are there other cross-entropy types that one should know? Are there any cross-entropy types without logits?
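As a rough sketch of the mathematical background (plain Python for illustration, not the Keras implementations): softmax cross-entropy scores one class out of K mutually exclusive classes, while sigmoid cross-entropy scores each output independently, which is why the latter suits multi-label problems.

```python
import math

def softmax_ce(logits, target_index):
    # -log(softmax(logits)[target]), written in a numerically stable form
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[target_index]

def sigmoid_ce(logit, label):
    # per-output binary cross-entropy "with logits", stable form
    return max(logit, 0) - logit * label + math.log(1 + math.exp(-abs(logit)))
```

"With logits" means the loss takes raw scores and applies the softmax/sigmoid internally, which is more numerically stable than passing in probabilities.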
What is the difference between LINQ to SQL classes and Entity Framework? In what kind of situation should LINQ to SQL be used, and when is Entity Framework the best option?
Node.js is a perfect match for our web project, but there are a few computational tasks for which we would prefer Python. We also already have Python code for them. We are highly concerned about speed. What is the most elegant way to call a Python "worker" from Node.js in an asynchronous, non-blocking way?
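One common pattern is to spawn a long-lived Python worker from Node (e.g. via child_process.spawn) and stream newline-delimited JSON over stdin/stdout, so the event loop never blocks. A sketch of the Python side only; the file name, task format, and doubling computation are made-up placeholders:

```python
# worker.py -- reads one JSON task per line on stdin and writes one JSON
# result per line on stdout, so Node can await each reply asynchronously.
import sys
import json

def handle(task):
    # placeholder for the real computation
    return {"result": task["x"] * 2}

if __name__ == "__main__" and not sys.stdin.isatty():
    for line in sys.stdin:
        result = handle(json.loads(line))
        sys.stdout.write(json.dumps(result) + "\n")
        sys.stdout.flush()
```

Keeping one process alive amortizes the Python startup cost across many tasks, which matters when speed is the main concern.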
I have read the answer here. But I can't apply it to one of my examples, so I probably still don't get it.
Here is my example: suppose that my program is trying to learn PCA (principal component analysis), or the diagonalization process. I have a matrix, and the answer is its diagonalization:
A = PDP⁻¹
If I understand correctly:
In supervised learning I will have all the trials with their errors
My question is:
What will I have in unsupervised learning?
Will I have an error for each trial as I go along, rather than all errors in advance? Or is it something else?
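As a side note on the A = PDP⁻¹ equation itself, the decomposition can be checked numerically; a sketch using NumPy, with a made-up example matrix:

```python
import numpy as np

# Eigendecomposition of a small symmetric matrix: A = P D P^-1,
# where P holds eigenvectors as columns and D is diagonal.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
reconstructed = P @ D @ np.linalg.inv(P)  # should match A up to rounding
```

Note that nothing here uses labels: the decomposition is computed from the matrix alone, which is the sense in which PCA is unsupervised.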
I want to do some timing comparisons between CPU and GPU, as well as some profiling, and would like to know if there's a way to tell PyTorch not to use the GPU and instead use the CPU only. I realize I could install a separate CPU-only PyTorch, but I'm hoping there's an easier way.
Classification problems, such as logistic regression or multinomial logistic regression, optimize a cross-entropy loss. Normally, the cross-entropy layer follows the softmax layer, which produces a probability distribution. In TensorFlow, there are at least a dozen different cross-entropy loss functions:
tf.losses.softmax_cross_entropy
tf.losses.sparse_softmax_cross_entropy
tf.losses.sigmoid_cross_entropy
tf.contrib.losses.softmax_cross_entropy
tf.contrib.losses.sigmoid_cross_entropy
tf.nn.softmax_cross_entropy_with_logits
tf.nn.sigmoid_cross_entropy_with_logits
...
Which ones work only for binary classification, and which are suitable for multi-class problems? When should you use sigmoid instead of softmax? How are the sparse functions different from the others, and why is it only softmax that has a sparse variant?
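For context on the "sparse" naming (a plain-Python sketch, not TF code): the sparse variants take integer class indices, while the non-sparse variants take one-hot (or soft) label vectors. Converting between the two is mechanical:

```python
def one_hot(index, num_classes):
    # Convert a sparse integer label into the dense one-hot row
    # that the non-sparse losses expect.
    return [1.0 if i == index else 0.0 for i in range(num_classes)]

one_hot(2, 4)  # [0.0, 0.0, 1.0, 0.0]
```

A sparse sigmoid variant would make little sense, since sigmoid losses treat each output as an independent binary label rather than one index into a set of mutually exclusive classes.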
I am using TensorFlow to train a neural network. This is how I am initializing the GradientDescentOptimizer:
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
mse = tf.reduce_mean(tf.square(out - out_))
train_step = tf.train.GradientDescentOptimizer(0.3).minimize(mse)
The thing here is that I don't know how to set an update rule for the learning rate, or a decay value for it.
How can I use an adaptive learning rate here?
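For reference, the schedule that TF 1.x's tf.train.exponential_decay applies can be sketched in plain Python (the parameter names below mirror that API but the function itself is just an illustration):

```python
def exponential_decay(base_lr, global_step, decay_steps, decay_rate):
    # learning_rate = base_lr * decay_rate ** (global_step / decay_steps)
    return base_lr * decay_rate ** (global_step / decay_steps)

exponential_decay(0.3, 0, 100, 0.96)  # 0.3 at step 0, shrinking thereafter
```

In TensorFlow the resulting tensor is passed to the optimizer in place of the constant 0.3, so the rate shrinks automatically as the global step advances.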
I have Keras installed with the TensorFlow backend and CUDA. I'd like to sometimes, on demand, force Keras to use the CPU. Can this be done without, say, installing a separate CPU-only TensorFlow in a virtual environment? If so, how? If the backend were Theano, the flags could be set, but I have not heard of TensorFlow flags accessible via Keras.
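One widely used trick (assuming, as is commonly the case, that the TensorFlow backend reads this variable when it initializes CUDA) is to hide all GPUs before Keras/TensorFlow is imported:

```python
import os

# Must run before the first `import tensorflow` / `import keras`;
# an empty value hides every GPU from the CUDA runtime, forcing CPU.
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```

Since it is an environment variable, it can equally be set in the shell per run, which makes "on demand" CPU execution a matter of how the script is launched.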
I want to return a variable from a function using return, and after that call the same function again but resume from after the return. Is this possible? Example:
def abc():
    return 5
    return 6

var = abc()  # var = 5
###
var = abc()  # var = 6
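A plain return can't resume, but a generator keeps its position between calls, which is usually what this pattern is after. A minimal sketch:

```python
def abc():
    yield 5   # first next() stops here
    yield 6   # second next() resumes here

g = abc()
var = next(g)  # var = 5
var = next(g)  # var = 6
```

The key detail is that the generator object g, not the function abc, holds the paused state, so both next() calls must go through the same object.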
Now that .NET v3.5 SP1 has been released (along with VS2008 SP1), we have access to the .NET Entity Framework.
My question is this: when trying to decide between using the Entity Framework and LINQ to SQL as an ORM, what's the difference?
The way I understand it, the Entity Framework (when used with LINQ to Entities) is a 'big brother' to LINQ to SQL. If this is the case, what advantages does it have? What can it do that LINQ to SQL can't do on its own?
How can I call a Python function from my Node.js (Express) backend server?
I want to call this function and give it an image URL:
def predictImage(img_path):
    # load model
    model = load_model("model.h5")
    # load a single image
    new_image = load_image(img_path)
    # check prediction
    pred = model.predict(new_image)
    return str(pred)
First up, this is most certainly homework (so no full code samples, please). That said...
I need to test an unsupervised algorithm next to a supervised algorithm, using the Neural Network Toolbox in Matlab. The data set is the UCI Artificial Characters Database. The problem is, I've had a good tutorial on supervised algorithms, and have been left to sink on unsupervised ones.
So I know how to create a self-organising map using selforgmap, and then I train it using train(net, trainingSet). I don't understand what to do next. I know that it has clustered the data I gave it into (hopefully) 10 clusters (one for each letter).
Two questions then:
How can I then label the clusters (given that I have a comparison pattern)?
Am I trying to turn this into a supervised learning problem when I do this?
How can I create a confusion matrix on (another) testing set to compare to the supervised algorithm?
I think I'm missing something conceptual or jargon-based here; all my searches come up with supervised learning...
If you download a Tableau Public dashboard, you'll get access to the datasets that were used to make it.
I believe it is the same for Tableau Desktop dashboards.
Which leads me to: are Tableau Desktop documents stored on a Tableau Server downloadable by anyone with access to that link?
I would like to publish a Tableau Desktop dashboard to a Tableau Server so I can put it on a website, yet I don't want viewers to be able to download the dashboard. Knowing this will likely determine whether or not I buy Tableau Server.
I'm new to PyTorch and machine learning. I'm following this tutorial, https://www.learnopencv.com/image-classification-using-transfer-learning-in-pytorch/, using my own custom dataset. I have the same problem as in the tutorial, but I don't know how to implement early stopping in PyTorch. If you have a better approach that avoids creating an early stopping process, please tell me.
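For what it's worth, early stopping doesn't need framework support. A minimal framework-agnostic sketch (the class name and default thresholds are my own choices) that a training loop could call once per epoch with the validation loss:

```python
class EarlyStopping:
    """Signal a stop when validation loss hasn't improved for `patience` epochs."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        # Returns True when training should stop.
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a PyTorch loop you would typically also save a checkpoint whenever `best` improves, then reload it after stopping.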
From the official Hive documentation: "Hive aims to provide acceptable (but not optimal) latency for interactive data browsing, queries over small data sets or test queries."
I'm not an expert in database architecture, and I would like to know if there is an alternative for when the assumption above does not hold, that is, when queries are made over a big data set.
I am trying to set up FTP on an Amazon Cloud Server, but without luck. I searched the net and there are no concrete steps for how to do it.
I found these commands to run:
$ yum install vsftpd
$ ec2-authorize default -p 20-21
$ ec2-authorize default -p 1024-1048
$ vi /etc/vsftpd/vsftpd.conf
# --- Add the following lines at the end of the file ---
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
$ /etc/init.d/vsftpd restart
Oftentimes I need to troubleshoot a workbook that another person at my company has created and published to our server. To troubleshoot, I need to see their connection details, specifically their Custom SQL, to understand what data they are using in their extract.
Is there any way to view this connection info (specifically their SQL code) when viewing the published workbook on the server (web) version?
I am an admin, and I am able to download their workbook to my desktop version of Tableau, open it, reconnect to the data, and then look through the data connections they created to see their SQL. But it's a really cumbersome process.
All I'm looking to do is, when looking at a published workbook, see the data connection details so that I can see the Custom SQL, without going through the downloading process I described above.
def fib(max):
    n, a, b = 0, 0, 1
    while n < max:
        yield b
        a, b = b, a + b
        n = n + 1
    return 'done'

print(next(fib(6)))
print(next(fib(6)))
print(next(fib(6)))
The result is 1, 1, 1. However, if I change the content of print() as below:
f = fib(6)
print(next(f))
print(next(f))
print(next(f))
the result will be 1, 1, 2. Why does this happen?
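The behaviour comes from generator state: each call fib(6) creates a brand-new generator object, and next() only advances the specific object it is given. A sketch of both call patterns side by side:

```python
def fib(max):
    n, a, b = 0, 0, 1
    while n < max:
        yield b
        a, b = b, a + b
        n = n + 1

# A fresh generator every time: each next() restarts at the first yield.
fresh = [next(fib(6)) for _ in range(3)]   # [1, 1, 1]

# One shared generator: its paused state advances between next() calls.
g = fib(6)
shared = [next(g) for _ in range(3)]       # [1, 1, 2]
```

In the first pattern the earlier generators are simply discarded after one value, which is why the sequence never progresses past 1.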