Formula interface for elm_train: transforms a data frame and formula into the inputs required by elm_train and automatically calls onehot_encode for classification. A rough direct-call equivalent is sketched below the usage block.

elm(
  formula,
  data,
  nhid,
  actfun,
  init_weights = "normal_gaussian",
  bias = FALSE,
  moorep_pseudoinv_tol = 0.01,
  leaky_relu_alpha = 0,
  seed = 1,
  verbose = FALSE
)
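
Internally, the formula interface transforms the data and then calls elm_train. For intuition, a rough direct-call equivalent of the iris classification example below (a sketch assuming the elmNNRcpp package, which provides elm_train and onehot_encode; model.matrix and the label encoding are illustrative, and the actual preprocessing inside elm may differ):

x <- model.matrix(Species ~ . - 1, data = iris)   # predictor matrix, intercept dropped
y <- onehot_encode(as.integer(iris$Species) - 1)  # onehot_encode expects integer labels starting at 0
fit <- elm_train(x, y, nhid = 20, actfun = "sig", seed = 1)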

Arguments

formula

a formula specifying the regression or classification model.

data

a data.frame containing the data

nhid

a numeric value specifying the number of hidden neurons. Must be >= 1

actfun

a character string specifying the type of activation function. It should be one of the following: 'sig' (sigmoid), 'sin' (sine), 'radbas' (radial basis), 'hardlim' (hard limit), 'hardlims' (symmetric hard limit), 'satlins' (symmetric saturating linear), 'tansig' (tan-sigmoid), 'tribas' (triangular basis), 'relu' (rectified linear unit) or 'purelin' (linear)

init_weights

a character string specifying the distribution from which the input weights and the bias should be initialized. It should be one of the following: 'normal_gaussian' (normal / Gaussian distribution with zero mean and unit variance), 'uniform_positive' (uniform in the range [0,1]) or 'uniform_negative' (uniform in the range [-1,1]). The three options are illustrated in the sketch after this argument list

bias

either TRUE or FALSE. If TRUE then bias weights will be added to the hidden layer

moorep_pseudoinv_tol

a numeric value. See the web link in the references for more details on the Moore-Penrose pseudo-inverse and specifically on the pseudo-inverse tolerance value

leaky_relu_alpha

a numeric value between 0.0 and 1.0. If 0.0, a plain relu ( f(x) = 0 for x < 0, f(x) = x for x >= 0 ) activation function is used; otherwise a leaky relu ( f(x) = alpha * x for x < 0, f(x) = x for x >= 0 ). Applicable only if actfun equals 'relu'; both variants are illustrated in the sketch after this argument list

seed

a numeric value specifying the random seed. Defaults to 1

verbose

a boolean. If TRUE then information is printed to the console
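
For intuition, the init_weights distributions and the two relu variants described above correspond roughly to the following (illustration only, not the package's internal code; nhid stands in for the number of weights drawn):

w_normal <- rnorm(nhid)                     # 'normal_gaussian': zero mean, unit variance
w_upos   <- runif(nhid, min = 0, max = 1)   # 'uniform_positive': range [0,1]
w_uneg   <- runif(nhid, min = -1, max = 1)  # 'uniform_negative': range [-1,1]

relu       <- function(x) pmax(x, 0)                          # leaky_relu_alpha = 0
leaky_relu <- function(x, alpha) ifelse(x < 0, alpha * x, x)  # leaky_relu_alpha > 0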

Value

An elm object, which can be used with the predict, residuals and fitted methods.

Examples

elm(Species ~ ., data = iris, nhid = 20, actfun = "sig")
#> Extreme learning model, elm (classification):
#> 
#> call:  elm(Species ~ ., data = iris, nhid = 20, actfun = "sig") 
#> hidden units       : 20 
#> activation function: sig 
#> accuracy           : 0.9733333 
#> 
#> confusion matrix :
#>             predicted
#> observed     setosa versicolor virginica
#>   setosa         50          0         0
#>   versicolor      0         48         2
#>   virginica       0          2        48

mod_elm <- elm(Species ~ ., data = iris, nhid = 20, actfun = "sig")

# predict classes
predict(mod_elm, newdata = iris[1:3, -5])
#> [1] setosa setosa setosa
#> Levels: setosa versicolor virginica

# predict probabilities
predict(mod_elm, newdata = iris[1:3, -5], type = "prob")
#>         setosa versicolor   virginica
#> [1,] 0.9775201 0.02021516 0.002264738
#> [2,] 0.9519374 0.02871766 0.019344988
#> [3,] 0.9620877 0.02854287 0.009369432

# predict elm output
predict(mod_elm, newdata = iris[1:3, -5], type = "raw")
#>        setosa  versicolor   virginica
#> [1,] 1.018682 -0.02106638 0.002360102
#> [2,] 1.009991 -0.03046901 0.020524740
#> [3,] 1.020310 -0.03027020 0.009936443

data("Boston")
elm(medv ~ ., data = Boston, nhid = 40, actfun = "relu")
#> Extreme learning model, elm (regression):
#> 
#> call:  elm(medv ~ ., data = Boston, nhid = 40, actfun = "relu") 
#> hidden units       : 40 
#> activation function: relu 
#> mse                : 20.20491 
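
# as noted in the Value section, the returned object also works with
# fitted() and residuals(); a short sketch on the regression fit above:
mod_reg <- elm(medv ~ ., data = Boston, nhid = 40, actfun = "relu")
head(fitted(mod_reg))     # fitted values for the training data
head(residuals(mod_reg))  # training residuals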

data("ionosphere")
elm(class ~ ., data = ionosphere, nhid = 20, actfun = "relu")
#> Extreme learning model, elm (classification):
#> 
#> call:  elm(class ~ ., data = ionosphere, nhid = 20, actfun = "relu") 
#> hidden units       : 20 
#> activation function: relu 
#> accuracy           : 0.8603989 
#> 
#> confusion matrix :
#>         predicted
#> observed   b   g
#>        b  89  37
#>        g  12 213