parallel processing - Spark - Add Worker from Local Machine (standalone Spark cluster manager)?


When running Spark 1.4.0 on a single machine, I can add a worker using the command "./bin/spark-class org.apache.spark.deploy.worker.Worker myhostname:7077". The official documentation also points out another way: add "myhostname:7077" to the "conf/slaves" file and then execute the command "sbin/start-all.sh" to invoke the master and all the workers listed in the conf/slaves file. However, this latter method doesn't work for me (it fails with a time-out error). Can anyone help me with this?

Here is my conf/slaves file (assume the master URL is myhostname:700):

myhostname:700
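For reference, a minimal sketch of the manual approach described above (assuming the master was started on myhostname and listens on the default standalone port 7077; the spark:// URL scheme is what the standalone master expects):

# start the standalone master on the local machine
./sbin/start-master.sh

# start a worker process and register it with the master
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://myhostname:7077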

The conf/slaves file should just be a list of hostnames; you don't need to include the port # that Spark runs on (I think that if you do, ssh tries to connect on that port, which is where the timeout comes from).
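A sketch of what that might look like (assuming the master runs on myhostname and the standalone master keeps its default port 7077):

# conf/slaves - one worker hostname per line, no port number
myhostname

# then launch the master and every worker listed in conf/slaves
./sbin/start-all.sh

Note that start-all.sh reaches each host in conf/slaves over ssh, so passwordless ssh access to myhostname also needs to be in place; with a ":7077" suffix on the line, ssh cannot reach the host, which would explain the time-out.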

