I am a test automation engineer and recently got the opportunity to explore the RPA tool Blue Prism. After exploring it, I found it similar to UI automation tools that support various technologies. Can anyone tell me what value RPA adds compared to traditional tools? I was interested to see how it can use 'intelligence', but couldn't find any such feature.
Can an expert in this forum help me understand what RPA can do that traditional tools cannot?
I was wondering if I could set up an AWS Lambda function triggered whenever a new text file is uploaded into an S3 bucket. In the function, I would like to get the contents of the text file and process it somehow. Is this possible?
For example, if I upload foo.txt with contents foobarbaz, I would like to somehow get foobarbaz in my Lambda function so I can do stuff with it. I know I can get metadata from getObject or a similar method.
Thanks!
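Yes, this is possible. A minimal sketch of such a handler, assuming an S3 ObjectCreated trigger is configured on the bucket (the helper and handler names here are illustrative; only the event shape and the standard get_object call are taken from AWS):

```python
# Sketch of a Lambda handler that reads the body of a newly uploaded text file.
# The event-parsing helper is kept pure so it can be tried locally without AWS.
import urllib.parse

def extract_bucket_key(event):
    """Pull the bucket name and URL-decoded object key out of an S3 event."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    return bucket, key

def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime
    bucket, key = extract_bucket_key(event)
    obj = boto3.client("s3").get_object(Bucket=bucket, Key=key)
    body = obj["Body"].read().decode("utf-8")  # e.g. "foobarbaz" for foo.txt
    # ... process `body` here ...
    return body
```

The trigger itself is set up as an S3 event notification (or a Lambda trigger in the console) for ObjectCreated events on the bucket.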
Is there any way to change the VM Size of an Azure Cloud Service without having to rebuild the package?
The vmsize parameter is defined in the .csdef file rather than in the .cscfg file that is uploaded to Azure, and it doesn't appear in the other XML files included in the package.
Please note that we're not looking to change the instance count (scale out) but the size type (i.e. from Extra Small (A0) to Medium (A2)).
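For reference, the size lives on the role element in ServiceDefinition.csdef (the service and role names below are placeholders), which is why it is baked into the package rather than the uploaded configuration:

```xml
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="MyWorkerRole" vmsize="Medium">
    <!-- vmsize is compiled into the .cspkg, so changing it normally
         means rebuilding and redeploying the package -->
  </WorkerRole>
</ServiceDefinition>
```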
I have gotten into a bad state with my ASP.NET MVC 5 project, using Code-First Entity Framework. I don't care about losing data; I just want to be able to start fresh, recreate the database, and start using Code-First migrations.
Currently, every attempt to run Update-Database results in an exception or an error message, and the website can't access the database correctly. How can I wipe all migrations, recreate the database, and start from scratch without having to create a new project? In other words, I want to keep my code but drop the database.
Later I will also want to get the deployment database (SQL Server on Azure) in sync. Again, I don't mind dropping all the data - I just want to get it working.
Please provide any how-to steps to get back to a clean state. Much appreciated.
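For EF6 in the Package Manager Console, one common reset sequence is the following sketch (the migration name "Initial" is a placeholder): delete the Migrations folder from the project and drop the database, then run:

```powershell
# after deleting the Migrations folder and dropping the database:
Enable-Migrations        # recreates the Migrations folder and Configuration.cs
Add-Migration Initial    # scaffolds a migration matching the current model
Update-Database          # creates the database from scratch
```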
I am using the ReadFromExcel component under the Data Reader category. I already installed the Microsoft.ACE.OLEDB.12.0 provider and retried, but it gives the same error. Can someone please help me with this?
I have seen Microsoft Cognitive Services such as Text Analytics and some other services. I have gone through RPA, and now I need to integrate RPA with some kind of cognitive service, just for demo purposes. Is there any reference I can use to understand more about this?
If I need to use Azure Cognitive Services with RPA tools, which would be the best among UiPath, Automation Anywhere, and Blue Prism?
I'm trying to build a Gmail task in Automation Anywhere that deletes particular messages using Object Cloning and looping. The loop acts only on the first message, even after putting $counter$ in the captured path where it has to select multiple emails in the Gmail inbox. Please have a look at the code in the attached screenshot. Any inputs are appreciated. Thank you!
1. Open Browser
2. Object Cloning: Get Property 'HTML Inner Text' of static text "" from window 'Inbox*'; assign to variable $Prompt-Assignment$; Source: Window; Play Type: Object
3. Start Loop "$Prompt-Assignment$" times
4. Object Cloning: Get Property 'HTML Inner Text' of static text "Indeed" from window 'Inbox*'; assign to variable $Vsubject$; Source: Window; Play Type: Object
5. If $Vsubject$ Equal To (=) "Indeed" Then
6. Object Cloning: Click on pane in window 'Inbox*'; Click Type: Click; Source: Window; Play Type: Object
I wonder if it is possible to use Cloud Armor with GAE Flex. Cloud Armor's documentation says that you have to use an HTTPS load balancer, and since GAE Flex doesn't have one, how can we use Cloud Armor with GAE Flex? We need a WAF to prevent DDoS attacks. Is it possible to use Cloud Armor with GAE Flex through an HTTPS load balancer? If so, can you explain how I can achieve this goal?
Thank you.
I noticed that there doesn't seem to be an option to download an entire S3 bucket from the AWS Management Console.
Is there an easy way to grab everything in one of my buckets? I was thinking about making the root folder public, using wget to grab it all, and then making it private again, but I don't know if there's an easier way.
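The AWS CLI can do this in one command (`aws s3 sync s3://my-bucket .`). If you would rather script it, here is a boto3 sketch; the bucket name is a placeholder and AWS credentials are assumed to be configured:

```python
# Sketch: download every object in a bucket with boto3.
import os

def key_to_local_path(key, dest_root):
    """Map an S3 key like 'a/b/c.txt' to a local path under dest_root."""
    return os.path.join(dest_root, *key.split("/"))

def download_bucket(bucket_name, dest_root="."):
    import boto3  # imported here so the path helper works without boto3 installed
    bucket = boto3.resource("s3").Bucket(bucket_name)
    for obj in bucket.objects.all():  # iterates over all keys, paginating as needed
        local_path = key_to_local_path(obj.key, dest_root)
        os.makedirs(os.path.dirname(local_path) or ".", exist_ok=True)
        bucket.download_file(obj.key, local_path)
```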
I am writing my own code for a decision tree and need to decide when to terminate the tree-building process. I could limit the height of the tree, but that seems trivial. Could anyone give me a better idea of how to implement the termination function?
Here is my tree-building algorithm.
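A termination check usually combines several criteria rather than depth alone: node purity, a minimum sample count, and a minimum impurity decrease. A minimal sketch (all threshold names and default values here are illustrative, not taken from the question's code):

```python
# Sketch of common stopping criteria for growing a decision-tree node.
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def should_stop(labels, depth, best_split_gain,
                max_depth=10, min_samples=5, min_gain=1e-3):
    """Return True if the node should become a leaf instead of splitting."""
    if depth >= max_depth:            # tree is deep enough
        return True
    if len(labels) < min_samples:     # too few samples to split reliably
        return True
    if gini(labels) == 0.0:           # node is already pure
        return True
    if best_split_gain < min_gain:    # best split barely reduces impurity
        return True
    return False
```

Tuning these thresholds (or pruning after growing a full tree) is how most implementations trade off fit against overfitting.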
I don't know if this is the right place to ask this question, but a community dedicated to Data Science should be the most appropriate place, in my opinion.
I have just started with Data Science and Machine Learning. I am looking for long-term project ideas that I can work on for around 8 months.
A mix of Data Science and Machine learning would be great.
A project big enough to help me understand the core concepts and also implement them at the same time would be very beneficial.
I have an Arduino Duemilanove with an ATmega328. I am working on Ubuntu 12.04, and the Arduino IDE version is 1.0. Recently, I tried to upload a few of the sample sketches, such as Blink, but none of my attempts work; each one fails with the same error: avrdude: stk500_recv(): programmer is not responding.
I have enabled '/dev/ttyUSB0' under Tools -> Serial Port. I have also selected the correct board (Duemilanove with Atmega 328) from the list. Yet, I am not able to resolve the issue. I have searched online as well and none of the other responses for this problem seem to be working for me. Could someone tell me why I am encountering this issue and help me resolve it?
Update: I tried taking out the onboard ATmega and fitting it in the other direction. Now I encounter no problems uploading, but nothing happens afterwards; the onboard LED does not seem to be blinking either.
I am making a vector of "waypoints" on the Arduino; each waypoint is an object. The Arduino will obviously need to store multiple waypoints for waypoint navigation. But instead of storing these waypoints in a standard preprogrammed array, the user will need to be able to add and remove waypoints and move them around. Unfortunately, the Arduino does not offer a vector type as a built-in library.
I am currently contemplating two options:
In "Container for objects like C++ 'vector'?", someone posted a general-purpose library. It does not contain any index-deletion or move operations, but it does contain some memory-management strategies.
I have used malloc(), calloc(), and free() in the past, but I do not like that option at all, especially with classes. Is this a better option in my scenario?
I want to implement machine learning on a hardware platform that can learn by itself. Is there any way to make machine learning on hardware work seamlessly?
I was under the impression that the Raspberry Pi's ARM processor, although having an armhf microarchitecture, still followed the von Neumann architecture (principally, sharing main memory between instructions and data).
However I came across this single line in a Computer Science textbook (A Level Computer Science for AQA Unit 2, Kevin R Bond 2016, pg265)
The Raspberry Pi computer is based on the Harvard architecture
Having searched online, I can't find any solid sources that either prove or disprove this statement. Is this in error? I would appreciate a source given in an answer.
(I'm aware the Raspberry Pi SE exists, but given that the tag does not exist there, I thought it more appropriate to post here.)
When you connect a data source, Tableau automatically infers the type of each column of your data: Number (decimal), Number (whole), String, Boolean, etc. A few questions about this:
1) Has Tableau ever misclassified one (or more) columns of your data with its automatic labeling? If so, would you please give details?
2) Do you feel that these data type options can be improved? For example, Tableau doesn't seem to be distinguishing between unordered categorical data and ordinal data.
Thanks!
My organization does not have a QlikView Workbench license. My question is: what limitations will I run into as I start using IIS with QlikView instead of the QlikView web server?
Is it necessary to have Workbench installed, with a license, to be able to develop a web application in Visual Studio that displays QlikView files?
Currently we have the QlikView web server (a non-IIS install). If I migrate to an IIS install, I just want to know whether I may get stuck without QlikView Workbench.
I have searched a lot on the net for this information, but in vain, so please give some details. I am well versed in JavaScript, Ajax, HTML, and so on, but have not yet used them with QlikView.
Hi all,
We have count data, but it appears to be overdispersed, so the assumed Poisson distribution should be replaced by a quasi-Poisson or a negative binomial. Although there is some literature on this topic (for instance, see http://fisher.utstat.toronto.edu/reid/sta2201s/QUASI-POISSON.pdf), it is rather technical, and we were wondering whether there is a pragmatic approach in R to determine whether to use Poisson, quasi-Poisson, or negative binomial as the underlying distribution of the response data.
Thanks in advance!
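A pragmatic check is to fit the Poisson model and compare the Pearson statistic to its residual degrees of freedom; a ratio well above 1 suggests overdispersion. In R, `AER::dispersiontest()` on a fitted `glm(..., family = poisson)` formalizes this. The helper below just shows the arithmetic of the ratio (the function and argument names are illustrative):

```python
# Dispersion-check sketch: Pearson chi-square over residual degrees of freedom.
# `observed` are the counts, `fitted` the model's predicted means, and
# `n_params` the number of estimated coefficients.
def dispersion_ratio(observed, fitted, n_params):
    pearson = sum((o - m) ** 2 / m for o, m in zip(observed, fitted))
    df = len(observed) - n_params
    return pearson / df
```

A ratio near 1 is consistent with Poisson; a ratio much greater than 1 points toward quasi-Poisson or negative binomial.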
I am a relatively new user to Hadoop (using version 2.4.1). I installed Hadoop on my first node without a hitch, but I can't seem to get the ResourceManager to start on my second node. I cleared up some "shared library" problems by adding this to yarn-env.sh and hadoop-env.sh:

export HADOOP_HOME="/usr/local/hadoop"
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

I also added this to hadoop-env.sh:

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native

based on the advice of this Hortonworks post: http://hortonworks.com/community/forums/topic/hdfs-tmp-dir-issue/

That cleared up all of my error messages. When I run /sbin/start-yarn.sh I get this:

starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-HdNode.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-HdNode.out

The only problem is, jps says that the ResourceManager isn't running. What's going on here?
Can anybody explain Apache Flume to me in plain language? I'd appreciate an explanation with a practical example instead of abstract theoretical definitions, so I can understand it better.
What is it used for? At which stage of a BigData analysis is it used?
And what are prerequisites for learning it?
Please explain it as you would to a non-technical person.
I'm new to Spark and I'm trying to read CSV data from a file. Here's what I am doing:

sc.textFile('file.csv')
  .map(lambda line: (line.split(',')[0], line.split(',')[1]))
  .collect()

I would expect this call to give me a list of the first two columns of my file, but I'm getting this error:

File "", line 1, in
IndexError: list index out of range

although my CSV file has more than one column.
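An IndexError like this usually comes from blank or single-field lines reaching the indexing lambda. A sketch of a guarded version (the helper name is illustrative; `sc` and 'file.csv' are from the question):

```python
# Split each CSV line once and skip lines that are too short to index.
def first_two_columns(line):
    """Return the first two comma-separated fields, or None for short lines."""
    parts = line.split(",")
    return (parts[0], parts[1]) if len(parts) >= 2 else None

# With a live SparkContext `sc`:
# pairs = (sc.textFile("file.csv")
#            .map(first_two_columns)
#            .filter(lambda p: p is not None)
#            .collect())
```

Note this naive split also breaks on quoted fields containing commas; a real CSV parser (e.g. the csv module, or Spark's DataFrame CSV reader in later versions) handles those cases.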
Hello!
I want to fit a dataset with a sum of two distributions: Gaussian + Poisson.
The dataset can have up to 3000 numbers, which should be enough for reasonable fitting. Is there any convenient way to do it without programming? For example, with Origin software, or RStudio?
I'm curious if anyone can point to some successful extract, transform, load (ETL) automation libraries, papers, or use cases for somewhat inhomogeneous data?
I would be interested to see any existing libraries dealing with scalable ETL solutions. Ideally these would be capable of ingesting 1-5 petabytes of data containing 50 billion records from 100 inhomogeneous data sets in tens or hundreds of hours, running on 4196 cores (256 i2.8xlarge AWS machines). I really do mean ideally, as I would be interested to hear about a system with 10% of this functionality to help reduce our team's ETL load.
Otherwise, I would be interested to see any books or review articles on the subject or high quality research papers. I have done a literature review and have only found lower quality conference proceedings with dubious claims.
I've seen a few commercial products advertised, but again, these make dubious claims without much evidence of their efficacy.
The datasets are rectangular and can take the form of fixed...
I am using CDH 5.2 and am able to use spark-shell to run commands. How can I run a file (file.spark) containing Spark commands? Is there any way to run/compile Scala programs in CDH 5.2 without sbt? Thanks in advance.
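A sketch of two common options, no sbt needed (file.spark is the file name from the question; both `-i` and `:load` come from the Scala REPL that spark-shell wraps):

```shell
# run the commands in the file when the shell starts
spark-shell -i file.spark

# or, from inside an already-running spark-shell session:
#   scala> :load file.spark
```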