parallel processing - Spark - Add Worker from Local Machine (standalone Spark cluster manager)?


When running Spark 1.4.0 on a single machine, I can add a worker using the command "./bin/spark-class org.apache.spark.deploy.worker.Worker myhostname:7077". The official documentation points out another way: add "myhostname:7077" to the "conf/slaves" file and then execute the command "sbin/start-all.sh" to invoke the master and all of the workers listed in conf/slaves. However, the latter method doesn't work for me (it fails with a time-out error). Can anyone help me with this?

Here is my conf/slaves file (assume the master URL is myhostname:700):

myhostname:700
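
For reference, a minimal sketch of the manual approach in full form (the spark:// scheme is the standard master URL format for standalone mode, and the default master port 7077 is assumed here):

# start the standalone master on this machine
./sbin/start-master.sh
# attach a worker to it by hand, using the full master URL
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://myhostname:7077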

The conf/slaves file should be a list of hostnames only; you don't need to include the port # that Spark runs on (I think that if you include it, ssh tries to connect on that port, which is where the timeout comes from).
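
So, assuming the master runs on myhostname, a corrected conf/slaves would contain only the hostname; sbin/start-all.sh then connects to each listed host over plain SSH (default port 22) and launches a worker there:

# conf/slaves -- one hostname per line, no port
myhostname

# then, on the master machine:
./sbin/start-all.sh

If your SSH daemon listens on a non-standard port, that is (if I recall correctly) configured through the SPARK_SSH_OPTS environment variable picked up by the launch scripts (e.g. SPARK_SSH_OPTS="-p 2222"), not by appending a port in conf/slaves.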

