I need to detect the language of many short texts in R. I am using the textcat package, which determines which of many (about 30) European languages each text is written in. However, I know my texts are either French or English (or, more generally, a small subset of the languages handled by textcat).
How can I add this knowledge when calling the textcat functions?
Thanks,
You can restrict textcat to a subset of its built-in profiles by subsetting TC_byte_profiles and passing the result as the p argument:

my.profiles <- TC_byte_profiles[names(TC_byte_profiles) %in% c("english", "french")]
my.profiles

my.text <- c("This is an English sentence.",
             "Das ist ein deutscher Satz.",
             "Il s'agit d'une phrase française.",
             "Esta es una frase en español.")

textcat(my.text, p = my.profiles)
# [1] "english" "english" "french" "french"
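If you are unsure which profile names are available to subset, you can inspect the names of the TC_byte_profiles object used above (a short sketch; only names() and %in% from base R are used):

```r
library(textcat)

# List the languages covered by the built-in byte profiles,
# so you know which names are valid when subsetting
names(TC_byte_profiles)

# Check that the two languages we want are really among them
c("english", "french") %in% names(TC_byte_profiles)
# [1] TRUE TRUE
```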
For comparison, here is one of the package's own examples, run against the full set of default profiles:

library("textcat")
textcat(c("This is an English sentence.",
          "Das ist ein deutscher Satz.",
          "Esta es una frase en español."))
# [1] "english" "german" "spanish"
Try the cldr package (http://cran.r-project.org/web/packages/cldr/), which brings Google Chrome's language detection to R. It is no longer on CRAN's main index, so install it from the archive:
# Install from the CRAN archive
url <- "http://cran.us.r-project.org/src/contrib/Archive/cldr/cldr_1.1.0.tar.gz"
pkgFile <- "cldr_1.1.0.tar.gz"
download.file(url = url, destfile = pkgFile)
install.packages(pkgs = pkgFile, type = "source", repos = NULL)
unlink(pkgFile)

# Alternatively: devtools::install_version("cldr", version = "1.1.0")
# Usage
library(cldr)
demo(cldr)
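Beyond the demo, the package's main entry point is detectLanguage(). A minimal sketch, assuming the cldr 1.1.0 API in which detectLanguage() takes a character vector and returns a data frame whose detectedLanguage column holds the best guess per input:

```r
library(cldr)

texts <- c("This is an English sentence.",
           "Il s'agit d'une phrase française.")

# detectLanguage() returns one row per input text; the
# detectedLanguage column is the top-ranked language name
res <- detectLanguage(texts)
res$detectedLanguage
```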