I'm pretty new to R programming. I'm using RStudio, and all of a sudden it's no longer showing the workspace; it's only showing the console.
This is what it is supposed to look like
And this is what mine looks like now.
In Eclipse, whenever the workspace gets out of whack, I can just reset the perspective and everything is back to normal. I can't figure out how to do that in RStudio.
The default layout seemed to disappear after I restarted my computer.
I am trying to figure out how the new version of GCM, Firebase Cloud Messaging, works, so I moved one of my projects to the new Firebase console. If I do not have the API key, or I want to create a new one, where can I do it?
There is a sentence on the Eclipse Paho project website: "The Paho project provides scalable open-source client implementations of open and standard messaging protocols aimed at new, existing, and emerging applications for Machine-to-Machine (M2M) and Internet of Things (IoT)."
I am a little bit confused. What is the difference between IoT and M2M?
I have a dictionary whose keys are strings and values are integers. Example:
stats = {'a': 1000, 'b': 3000, 'c': 100}
I'd like to get 'b' as the answer, since it's the key with the highest value. I did the following, using an intermediate list with reversed key-value tuples:
inverse = [(value, key) for key, value in stats.items()]
print max(inverse)[1]
Is there a better (or even more elegant) approach?
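For comparison, a common alternative (not from the question) avoids the intermediate list entirely by passing a key function to max(); a minimal sketch:

```python
stats = {'a': 1000, 'b': 3000, 'c': 100}

# max() iterates over the dictionary's keys; key=stats.get makes it
# compare keys by their associated values, so the winning key is
# returned directly, with no reversed-tuple list needed.
best = max(stats, key=stats.get)
print(best)
```

Here `best` is 'b'; if several keys tie for the maximum value, max() returns the first one it encounters.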
There seem to be several ways to create a copy of a tensor in PyTorch, including
y = tensor.new_tensor(x) #a
y = x.clone().detach() #b
y = torch.empty_like(x).copy_(x) #c
y = torch.tensor(x) #d
b is explicitly preferred over a and d, according to a UserWarning I get if I execute either a or d. Why is it preferred? Performance? I'd argue it's less readable. Any reasons for or against using c?
I am currently in the process of building some Tableau workbooks where we will need to redact visualizations or text tables if the results fall below a certain threshold (e.g. only ten data points are returned after filters are applied). Does anyone know how to create calculated fields, or know of other methods, to redact in Tableau?
In many real-life situations where you apply MapReduce, the final algorithms end up being several MapReduce steps,
i.e. Map1, Reduce1, Map2, Reduce2, and so on.
So you have the output from the last reduce that is needed as the input for the next map.
The intermediate data is something you (in general) do not want to keep once the pipeline has completed successfully. Also, because this intermediate data is in general some data structure (like a 'map' or a 'set'), you don't want to put too much effort into writing and reading these key-value pairs.
What is the recommended way of doing that in Hadoop?
Is there a (simple) example that shows how to handle this intermediate data in the correct way, including the cleanup afterward?
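For intuition only, the chained pattern described above can be sketched in plain Python. This is an in-memory toy, not Hadoop API code; `run_mapreduce` is a hypothetical helper that only mirrors the map/shuffle/reduce data flow:

```python
from collections import defaultdict

def run_mapreduce(records, mapper, reducer):
    """Toy in-memory MapReduce: map every record, shuffle values by key,
    then reduce each key's value list. (Illustrative helper only.)"""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return [reducer(key, values) for key, values in groups.items()]

# Step 1 (Map1 / Reduce1): classic word count.
lines = ["a b a", "b c"]
counts = run_mapreduce(
    lines,
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=lambda word, ones: (word, sum(ones)),
)

# Step 2 (Map2 / Reduce2): group words by their count. The output of
# Reduce1 is fed straight into Map2 -- this hand-off is exactly the
# intermediate data the question is about.
by_count = run_mapreduce(
    counts,
    mapper=lambda pair: [(pair[1], pair[0])],
    reducer=lambda count, words: (count, sorted(words)),
)
```

In Hadoop itself each step would write its intermediate output to a temporary HDFS directory that the next job reads and that you delete afterwards; the sketch above only mirrors the data flow, not that storage layer.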
I am new to the field of neural networks and I would like to know the difference between Deep Belief Networks and Convolutional Networks. Also, is there a Deep Convolutional Network which is the combination of Deep Belief and Convolutional Neural Nets?
This is what I have gathered till now. Please correct me if I am wrong.
For an image classification problem, Deep Belief Networks have many layers, each of which is trained using a greedy layer-wise strategy. For example, suppose my image size is 50 x 50 and I want a deep network with 4 layers, namely an input layer, HL1, HL2, and an output layer.
My input layer will have 50 x 50 = 2500 neurons, HL1 = 1000 neurons (say), HL2 = 100 neurons (say), and the output layer = 10 neurons. In order to train the weights (W1) between the input layer and HL1, I use an autoencoder (2500 - 1000 - 2500) and learn W1 of size 2500 x 1000 (this is unsupervised learning). Then I feed all images forward through the first hidden layer to obtain a set of...
I am facing a problem while downloading the 'caret' package in RStudio. The code below was taken from the caret documentation.
install.packages("caret", dependencies = c("Depends", "Suggests"))
It works fine while installing, but it gives errors and warnings while unpacking a few packages, like those mentioned below:
ERROR: dependencies ‘eiPack’, ‘ei’, ‘MCMCpack’, ‘Zelig’ are not available for package ‘ZeligEI’
* removing ‘/home/shazil/R/x86_64-pc-linux-gnu-library/3.4/ZeligEI’
Warning in install.packages :
installation of package ‘ZeligEI’ had non-zero exit status
At the end, when the whole installation process is finished, it says:
The downloaded source packages are in
‘/tmp/RtmpeiP5GO/downloaded_packages’
After that, when I use the library() command, the following error appears:
> library(caret)
Error in library(caret) : there is no package called ‘caret’
I am using Ubuntu 16.04 on a Dell machine with a Core i5 7th Gen, 6 GB RAM, and AMD Radeon graphics.
Would really appreciate...
I am new to API coding and working on a module which signs in to Tableau Server and gets the list of workbooks stored in a site. The code is to be written in C# using the Tableau REST API.
I was able to sign in to Tableau Server successfully using the REST API. However, I was not able to query the workbooks. Below is my code.
class Program
{
    static HttpClient client = new HttpClient();

    static async Task CallWebAPIAsync()
    {
        using (var client = new HttpClient())
        {
            client.Timeout = TimeSpan.FromMilliseconds(Timeout.Infinite);
            client.BaseAddress = new Uri("https://my server url.com");
            client.DefaultRequestHeaders.Accept.Clear();
            client.DefaultRequestHeaders.Accept.Add(new...
I would like to read a CSV in Spark, convert it to a DataFrame, and store it in HDFS with df.registerTempTable("table_name"). I have tried:
scala> val df = sqlContext.load("hdfs:///csv/file/dir/file.csv")
The error I got:
java.lang.RuntimeException: hdfs:///csv/file/dir/file.csv is not a Parquet file. expected magic number at tail but found
at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:418)
at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$refresh$6.apply(newParquet.scala:277)
at org.apache.spark.sql.parquet.ParquetRelation2$MetadataCache$$anonfun$refresh$6.apply(newParquet.scala:276)
at scala.collection.parallel.mutable.ParArray$Map.leaf(ParArray.scala:658)
at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:54)
at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:53)
at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:53)
at ...
How can I increase the memory available to Apache Spark executor nodes?
I have a 2 GB file that is suitable for loading into Apache Spark. I am running Apache Spark for the moment on one machine, so the driver and executor are on the same machine. The machine has 8 GB of memory.
When I try to count the lines of the file after setting the file to be cached in memory, I get these errors:
2014-10-25 22:25:12 WARN CacheManager:71 - Not enough space to cache partition rdd_1_1 in memory! Free memory is 278099801 bytes.
I looked at the documentation here and set spark.executor.memory to 4g in $spark.home
The UI shows this variable is set in the Spark Environment. You can find screenshot here
However, when I go to the Executor tab, the memory limit for my single executor is still set to 265.4 MB, and I still get the same error.
I tried various things mentioned here but I still get the error and don't have a clear idea where I should change the setting.
I am running my code interactively from the spark-shell.
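One detail worth noting about the setup described above: in a single-machine (local mode) run, the executor lives inside the driver JVM, so raising spark.executor.memory alone may not take effect; the driver's heap is the limit that matters. A hedged sketch of the commonly suggested settings (the values are illustrative, not taken from the question):

```
# conf/spark-defaults.conf -- illustrative value, not a recommendation
spark.driver.memory   4g

# or equivalently, when launching the shell:
#   ./bin/spark-shell --driver-memory 4g
```

Settings like this must be supplied before the JVM starts (via the config file or launch flags); setting them inside an already-running shell has no effect on heap size.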
I have recently discovered you can use R within Tableau, to return bool, int, long, etc. This happens by the following:
install.packages("Rserve")
library(Rserve)
Rserve()
# Should say "Starting Rserve..."
Then in Tableau:
# For Tableau, under 'Help' > 'Settings and Performance' > 'Manage R Connections'
# Server: 127.0.0.1 and Port: 6311
# Make sure that 'RStudio' with 'Rserve' is installed and running prior to the Tableau connection
However, I would like to do the same thing with Python, so Python can be used as a script in Tableau (not using Tableau's API in Python). Does anyone know if this is possible? The snippet above was taken from here.
I am developing an application in Python which gives job recommendations based on an uploaded resume. I am trying to tokenize the resume before processing it further, and I want to tokenize groups of words. For example, "Data Science" is one keyword, but when I tokenize I get "data" and "science" separately. How can I overcome this? Is there any library in Python which does this kind of extraction?
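One illustrative approach (a hand-rolled sketch, not a specific resume-parsing library; `tokenize_with_phrases` is a hypothetical helper) is to match a list of known multi-word keywords greedily before falling back to single words. NLTK's MWETokenizer implements a similar idea for merging multi-word expressions.

```python
def tokenize_with_phrases(text, phrases):
    """Greedy left-to-right tokenizer that keeps known multi-word
    phrases (e.g. "data science") together as single tokens."""
    words = text.lower().split()
    # Try longer phrases first so "data science team" beats "data science".
    phrase_lists = sorted((p.lower().split() for p in phrases),
                          key=len, reverse=True)
    tokens, i = [], 0
    while i < len(words):
        for phrase in phrase_lists:
            if words[i:i + len(phrase)] == phrase:
                tokens.append(" ".join(phrase))
                i += len(phrase)
                break
        else:
            tokens.append(words[i])
            i += 1
    return tokens

tokens = tokenize_with_phrases("Experienced in Data Science and Python",
                               ["data science", "machine learning"])
print(tokens)
```

The phrase list would come from a curated keyword vocabulary; the sketch assumes exact word matches and ignores punctuation and stemming.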
As someone who just got into data science (no prior coding history), I am new to using terminals, Python, and coding in general. While I do have some basic Python knowledge now, and I want to work on my first machine learning project, I am looking to use some packages that are not standard to Python or JupyterLab, namely TensorFlow.
After much struggle I was able to install TensorFlow from my terminal (I'm on a Mac). Yet when I try to import the module I run into the following problem:
When I create a new file in JupyterLab (accessed via Anaconda), I have the option to create a Python file using Python 3 or Python 3.7.2. When using Python 3, I have access to packages like sklearn and SciPy, yet no TensorFlow. Then when I create a 3.7.2 file, I can import the TensorFlow package, yet I cannot import the sklearn and SciPy packages anymore.
Did someone experience similar problems? Are there ways to solve this?
P.S. Using the 'pip install ...' command in the terminal only seems to work rarely. Or I must be doing something wrong.
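A quick diagnostic that often clarifies this kind of kernel mix-up: each Jupyter kernel is bound to one Python interpreter, and a package is only visible to the interpreter whose pip installed it. A minimal check to run in each kernel:

```python
import sys

# The interpreter this kernel (or script) is running under; packages
# installed by a different interpreter's pip will not be importable here.
print(sys.executable)
print(sys.version)
```

If the two kernels report different paths, that explains the split package sets; a commonly suggested remedy is to install with `python -m pip install tensorflow`, using the same `python` the target kernel reports, so the install and the kernel stay aligned.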
I'm trying to figure out how saving works in R Studio.
When I create a new project, a .Rproj file is created. Whenever I work in RStudio, Save and Save As are greyed out in the File menu. The only way I know how to create a .Rproj file is when starting a new project.
In the Environment section, I can see a floppy-disk Save icon. When I click that, it creates a .RData file. Whenever I want to save, I click on that Save icon and overwrite the file.
Can someone please explain what the best practices are for saving when using RStudio, and the key distinctions between the .Rproj and .RData files?
So I wrote this code to return every string in the given lst: list once. Here is my code:
def make_unique(lst: list):
    s = []
    for x in lst:
        if lst.count(x) == 1:
            s.append(x)
        else:
            return(x)
    return s
When I put in the input:
print(make_unique(lst=))
The output returns
row
but I want my output to return
which is basically all the strings in the list printed once. How can I do this?
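For what it's worth, the `return(x)` inside the loop ends the whole function at the first repeated string, which is why only a single value comes back. A minimal order-preserving sketch (`unique_in_order` is a hypothetical helper, not from the question):

```python
def unique_in_order(lst):
    """Return each string once, keeping first-occurrence order."""
    seen = set()
    result = []
    for x in lst:
        if x not in seen:    # keep only the first occurrence
            seen.add(x)
            result.append(x)
    return result

items = unique_in_order(["row", "row", "your", "boat"])
print(items)
```

On Python 3.7+, `list(dict.fromkeys(lst))` achieves the same thing in one expression, since dicts preserve insertion order.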
I tried installing Hadoop following this document: http://hadoop.apache.org/common/docs/stable/single_node_setup.html. When I tried executing
bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
I am getting the following exception:
java.lang.OutOfMemoryError: Java heap space
Please suggest a solution so that I can try out the example. I am new to Hadoop, so I might have done something dumb. The entire exception is listed below. Any suggestion will be highly appreciated.
anuj@anuj-VPCEA13EN:~/hadoop$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
11/12/11 17:38:22 INFO util.NativeCodeLoader: Loaded the native-hadoop library
11/12/11 17:38:22 INFO mapred.FileInputFormat: Total input paths to process : 7
11/12/11 17:38:22 INFO mapred.JobClient: Running job: job_local_0001
11/12/11 17:38:22 INFO util.ProcessTree: setsid exited with exit code 0
11/12/11 17:38:22 INFO mapred.Task: Using ResourceCalculatorPlugin : ...
I tried to start Spark 1.6.0 (spark-1.6.0-bin-hadoop2.4) on Mac OS Yosemite 10.10.5 using
"./bin/spark-shell".
It fails with the error below. I also tried to install different versions of Spark, but all have the same error. This is the second time I'm running Spark; my previous run worked fine.
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties
To adjust logging level use sc.setLogLevel("INFO")
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.0
      /_/
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
16/01/04 13:49:40 WARN Utils: Service...
I am trying to learn deep learning, specifically using convolutional neural networks. I'd like to apply a simple network to some audio data. As far as I understand, CNNs are often used for image and object recognition, so with audio people often use the spectrogram (specifically the mel-spectrogram) instead of the time-domain signal. My question is: is it better to use an image (i.e. RGB or greyscale values) of the spectrogram as the input to the network, or should I use the 2D magnitude values of the spectrogram directly? Does it even make a difference?
Thank you.
I recently started to learn neural networks, and I would like to know the difference between Convolutional Deep Belief Networks and Convolutional Networks. There is a similar question here, but there is no exact answer to it. We know that Convolutional Deep Belief Networks are CNNs + DBNs. I am going to do object recognition, and I want to know which one is better than the other, and their complexity. I searched but couldn't find anything; maybe I am doing something wrong.
I am trying to run a simple NaiveBayesClassifier using Hadoop, and I am getting this error:
Exception in thread "main" java.io.IOException: No FileSystem for scheme: file
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1375)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:180)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
at org.apache.mahout.classifier.naivebayes.NaiveBayesModel.materialize(NaiveBayesModel.java:100)
Code:
Configuration configuration = new Configuration();
NaiveBayesModel model = NaiveBayesModel.materialize(new Path(modelPath), configuration); // error in this line
modelPath is pointing to the NaiveBayes.bin file, and the configuration object is printing...
Reading through the documentation of Tableau Server, I was not able to determine if the following works:
I have set-up Tableau Server 2020.4.0 along with the PostgreSQL driver
I added a connection to an internal, i.e. non-public, PostgreSQL DB via Tableau Server
I can access the PostgreSQL via logging in to Tableau Server just fine
I am also able to connect to Tableau Server through Tableau Desktop, BUT I cannot connect to the PostgreSQL DB as it is not directly accessible from the client machine running Tableau Desktop.
Is there a way to access this non-public PostgreSQL database connected to Tableau Server from Tableau Desktop, through Tableau Server?