R/utils.R
tokenize_transform_vec_docs.Rd
String tokenization and transformation (vector of documents)
tokenize_transform_vec_docs(
  object = NULL, as_token = FALSE, to_lower = FALSE, to_upper = FALSE,
  utf_locale = "", remove_char = "", remove_punctuation_string = FALSE,
  remove_punctuation_vector = FALSE, remove_numbers = FALSE, trim_token = FALSE,
  split_string = FALSE, split_separator = " \r\n\t.,;:()?!//",
  remove_stopwords = FALSE, language = "english", min_num_char = 1,
  max_num_char = Inf, stemmer = NULL, min_n_gram = 1, max_n_gram = 1,
  skip_n_gram = 1, skip_distance = 0, n_gram_delimiter = " ",
  concat_delimiter = NULL, path_2folder = "", threads = 1,
  vocabulary_path_file = NULL, verbose = FALSE
)
object | a character string vector of documents |
---|---|
as_token | if TRUE then the output of the function is a list of (split) tokens. Otherwise it is a vector of character strings (sentences) |
to_lower | either TRUE or FALSE. If TRUE the character string will be converted to lower case |
to_upper | either TRUE or FALSE. If TRUE the character string will be converted to upper case |
utf_locale | the language-specific locale to use in case either the to_lower or the to_upper parameter is TRUE and the language of the text is other than English. For instance, if the language of a text file is Greek, then the utf_locale parameter should be 'el_GR.UTF-8' (language_country.encoding). A wrong utf-locale does not raise an error; however, it increases the runtime of the function. |
remove_char | a character string with specific characters that should be removed from the text. If remove_char is "" then no removal of characters takes place |
remove_punctuation_string | either TRUE or FALSE. If TRUE then the punctuation of the character string will be removed (applies before the split function) |
remove_punctuation_vector | either TRUE or FALSE. If TRUE then the punctuation of the vector of the character strings will be removed (after the string split has taken place) |
remove_numbers | either TRUE or FALSE. If TRUE then any numbers in the character string will be removed |
trim_token | either TRUE or FALSE. If TRUE then the string will be trimmed (left and/or right) |
split_string | either TRUE or FALSE. If TRUE then the character string will be split using the split_separator as delimiter. The user can also specify multiple delimiters. |
split_separator | a character string specifying the character delimiter(s) |
remove_stopwords | either TRUE, FALSE or a character vector of user-defined stop words. If TRUE, the stop-words vector that corresponds to the language parameter will be loaded. |
language | a character string which defaults to english. If the remove_stopwords parameter is TRUE then the corresponding stop-words vector will be loaded. Available languages are afrikaans, arabic, armenian, basque, bengali, breton, bulgarian, catalan, croatian, czech, danish, dutch, english, estonian, finnish, french, galician, german, greek, hausa, hebrew, hindi, hungarian, indonesian, irish, italian, latvian, marathi, norwegian, persian, polish, portuguese, romanian, russian, slovak, slovenian, somalia, spanish, swahili, swedish, turkish, yoruba, zulu |
min_num_char | an integer specifying the minimum number of characters to keep. If min_num_char is greater than 1, only character strings with at least that many characters will be returned |
max_num_char | an integer specifying the maximum number of characters to keep. The max_num_char should be less than or equal to Inf (in this function the Inf value translates to a word-length of 1000000000) |
stemmer | a character string specifying the stemming method. Available method is the porter2_stemmer. See details for more information. |
min_n_gram | an integer specifying the minimum number of n-grams. The minimum number of min_n_gram is 1. |
max_n_gram | an integer specifying the maximum number of n-grams. The minimum number of max_n_gram is 1. |
skip_n_gram | an integer specifying the number of skip-n-grams. The minimum number of skip_n_gram is 1. The skip_n_gram gives the (max.) n-grams using the skip_distance parameter. If skip_n_gram is greater than 1 then both min_n_gram and max_n_gram should be set to 1. |
skip_distance | an integer specifying the skip distance between the words. The minimum value for the skip distance is 0, in which case simple n-grams will be returned. |
n_gram_delimiter | a character string specifying the n-gram delimiter (applies to both n-gram and skip-n-gram cases) |
concat_delimiter | either NULL or a character string specifying the delimiter to use in order to concatenate the end-vector of character strings to a single character string (recommended in case that the end-vector should be saved to a file) |
path_2folder | a character string specifying the path to the folder where the file(s) will be saved |
threads | an integer specifying the number of cores to run in parallel |
vocabulary_path_file | either NULL or a character string specifying the output path to a file where the vocabulary should be saved once the text is tokenized |
verbose | either TRUE or FALSE. If TRUE then information will be printed out |
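As a quick illustration of the n-gram parameters in the table above, the following sketch tokenizes two short documents and builds 2- and 3-grams. It assumes the textTinyR package is installed; the input sentences are made up for the example.

```r
library(textTinyR)

docs = c("the quick brown fox", "jumps over the lazy dog")

# tokenize each document, lower-case it and build 2- and 3-grams,
# joining the terms of each n-gram with an underscore
res_ngram = tokenize_transform_vec_docs(object = docs, as_token = TRUE,
                                        to_lower = TRUE, split_string = TRUE,
                                        min_n_gram = 2, max_n_gram = 3,
                                        n_gram_delimiter = "_")
```

With as_token = TRUE the result contains the split tokens (one set of n-grams per input document), as described in the as_token entry above.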
a character vector
It is more memory efficient to provide a path_2folder when a big file should be saved, rather than returning the vector of all character strings in the R session.
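A minimal sketch of that pattern (the output folder name, the trailing path separator and the newline concat_delimiter are assumptions made for the example):

```r
library(textTinyR)

docs = c("a first document", "a second document")

out_dir = file.path(tempdir(), "tokenized")
dir.create(out_dir, showWarnings = FALSE)

# concat_delimiter joins the processed strings to a single character
# string, which is written to a file inside path_2folder instead of
# being returned in the R session
tokenize_transform_vec_docs(object = docs, to_lower = TRUE,
                            split_string = TRUE,
                            concat_delimiter = "\n",
                            path_2folder = paste0(out_dir, "/"))
```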
The skip-grams are a generalization of n-grams in which the components (typically words) need not be consecutive in the text under consideration, but may leave gaps that are skipped over. They provide one way of overcoming the data-sparsity problem found with conventional n-gram analysis.
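For instance, a skip-gram sketch following the constraint noted in the skip_n_gram entry of the table (both min_n_gram and max_n_gram set to 1); the input sentence is invented for the example:

```r
library(textTinyR)

docs = c("the quick brown fox jumps")

# 2-term skip-grams, allowing a gap of up to one word between the terms
res_skip = tokenize_transform_vec_docs(object = docs, as_token = TRUE,
                                       split_string = TRUE,
                                       min_n_gram = 1, max_n_gram = 1,
                                       skip_n_gram = 2, skip_distance = 1)
```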
Many character-string pre-processing functions (such as the utf-locale or the split-string functionality) are based on the Boost library (https://www.boost.org/).
Stemming of the English language is done using the porter2-stemmer; for details see https://github.com/smassung/porter2_stemmer
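A short sketch of the stemmer parameter, using the porter2_stemmer value documented above (the input words are invented for the example):

```r
library(textTinyR)

# split the string first, then apply porter2 stemming to each token,
# so that words such as "running" are reduced to their stems
res_stem = tokenize_transform_vec_docs(object = c("running runners easily"),
                                       as_token = TRUE, to_lower = TRUE,
                                       split_string = TRUE,
                                       stemmer = "porter2_stemmer")
```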
The lists of stop words for the available languages were downloaded from https://github.com/6/stopwords-json
library(textTinyR)

token_doc_vec = c("CONVERT to lower", "remove.. punctuation11234", "trim token and split ")

res = tokenize_transform_vec_docs(object = token_doc_vec, to_lower = TRUE,
                                  split_string = TRUE)