java - Submit a Storm topology to a multi-node cluster
I am running the Hortonworks topology that simulates truck events. It works on a single node (localhost); now I want to run it on multiple nodes.
The event producer works, and I can read the topic without any problem.
But when I run the Storm topology, it crashes.
My modifications in event_topology.properties:
kafka.zookeeper.host.port=10.0.0.24:2181
#kafka topic to consume.
kafka.topic=vehicleevent
#location in ZK for the Kafka spout to store state.
kafka.zkRoot=/vehicle_event_spout
#kafka spout executors.
spout.thread.count=1
#hdfs bolt settings
hdfs.path=/vehicle-events
hdfs.url=hdfs://10.0.0.24:8020
hdfs.file.prefix=vehicleevents
#data is moved from hdfs to the hive partition
#on the first write after the 5th minute.
hdfs.file.rotation.time.minutes=5
#hbase bolt settings
hbase.persist.all.events=true
#hive settings
hive.metastore.url=thrift://10.0.0.23:9083
hive.staging.table.name=vehicle_events_text_partition
hive.database.name=default
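(Aside: since ZooKeeper may run on more than one of the nodes, the spout's connect string can list the whole ensemble, comma-separated. This is only a sketch, assuming the topology passes this value straight through as a standard ZooKeeper connect string; the second host is illustrative, taken from the snippet above.)

```properties
# Comma-separated ZooKeeper ensemble for the Kafka spout (illustrative hosts).
kafka.zookeeper.host.port=10.0.0.24:2181,10.0.0.23:2181
```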
After several tries, I tested this modification in Topology.java:
final Config conf = new Config();
conf.setDebug(true);
conf.put(Config.NIMBUS_HOST, "10.0.0.23");
conf.put(Config.NIMBUS_THRIFT_PORT, 6627);
conf.put(Config.STORM_ZOOKEEPER_PORT, 2181);
conf.put(Config.STORM_ZOOKEEPER_SERVERS, Arrays.asList("10.0.0.24", "10.0.0.23"));

// StormSubmitter.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());

final LocalCluster cluster = new LocalCluster();
cluster.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());
Utils.waitForSeconds(10);
cluster.killTopology(TOPOLOGY_NAME);
cluster.shutdown();
}
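(One thing worth noting about the snippet above: `LocalCluster.submitTopology` runs the topology in-process on the machine executing `main`, so the Nimbus and ZooKeeper settings are never contacted; submitting to the real multi-node cluster goes through the commented-out `StormSubmitter` line, typically launched with the `storm jar` command. For reference, the `Config` constants used above resolve to plain string keys. Here is a stand-alone sketch, with no Storm dependency, showing the same cluster settings as ordinary key/value pairs; hosts and ports are the ones from the question.)

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Sketch: Storm's Config is a Map, and the constants used in the snippet
// above resolve to the string keys shown in the comments below.
public class ClusterConfigSketch {
    static Map<String, Object> clusterConf() {
        Map<String, Object> conf = new HashMap<>();
        conf.put("nimbus.host", "10.0.0.23");        // Config.NIMBUS_HOST
        conf.put("nimbus.thrift.port", 6627);        // Config.NIMBUS_THRIFT_PORT
        conf.put("storm.zookeeper.port", 2181);      // Config.STORM_ZOOKEEPER_PORT
        conf.put("storm.zookeeper.servers",          // Config.STORM_ZOOKEEPER_SERVERS
                 Arrays.asList("10.0.0.24", "10.0.0.23"));
        return conf;
    }

    public static void main(String[] args) {
        Map<String, Object> conf = clusterConf();
        // Print the Nimbus host the topology would be submitted to.
        System.out.println(conf.get("nimbus.host"));
    }
}
```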
If you have any suggestions, thanks a lot :)