java - How to keep data with each channel on NIO Server -


I have a Java NIO server that receives data from clients.

When a channel is ready for reading, i.e. key.isReadable() returns true, read(key) is called to read the data.

Currently I am using a single read buffer for all channels. In the read() method I clear the buffer, read into it and copy it into a byte array, assuming the data arrives in one shot.

But suppose the data does not arrive completely in one shot (I use special characters at the end of the data to detect the end of a message).

Problem:

How do I keep the partial data with each channel, or how do I deal with the partial read problem in general?

I have read somewhere that attachments are not good.

Take a look at the Reactor pattern. Here is a link to a basic implementation by Professor Doug Lea:

http://gee.cs.oswego.edu/dl/cpjslides/nio.pdf

The idea is to have a single reactor thread that blocks on the Selector call. Once there are I/O events ready, the reactor thread dispatches the events to the appropriate handlers. In the PDF above there is an inner class Acceptor within the Reactor that accepts new connections.

The author uses a single handler for both read and write events and maintains the state of that handler. I prefer to have separate handlers for reads and writes, but this is not as easy to work with as a 'state machine'. Since there can be only one attachment per key, some kind of injection mechanism is needed to switch between the read and write handlers.
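The Reactor sketch below assumes a small callback interface for all attachments (the name EventHandler and the single processEvent() method are assumptions taken from that code, not from the PDF). A minimal version could be:

    // Minimal callback interface assumed by the Reactor sketch below.
    // The Acceptor, the read handler and the write handler all implement it,
    // so the reactor can dispatch any attachment the same way.
    public interface EventHandler {
        void processEvent();
    }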

To maintain state between subsequent reads/writes you have to do a couple of things:

  • Introduce a custom protocol that tells you when a message has been fully read
  • Have a timeout or cleanup mechanism for stale connections
  • Maintain client-specific sessions

So, you can do something like this:

public class Reactor implements Runnable {

    Selector selector;
    ServerSocketChannel serverSocketChannel;

    public Reactor(int port) throws IOException {
        selector = Selector.open();
        serverSocketChannel = ServerSocketChannel.open();
        serverSocketChannel.socket().bind(new InetSocketAddress(port));
        serverSocketChannel.configureBlocking(false);

        // Let the reactor handle new connection events
        registerAcceptor();
    }

    /**
     * Registers the acceptor handler for new client connections.
     *
     * @throws ClosedChannelException
     */
    private void registerAcceptor() throws ClosedChannelException {
        SelectionKey selectionKey0 = serverSocketChannel.register(selector, SelectionKey.OP_ACCEPT);
        selectionKey0.attach(new Acceptor());
    }

    @Override
    public void run() {
        while (!Thread.interrupted()) {
            startReactorLoop();
        }
    }

    private void startReactorLoop() {
        try {
            // Wait for new events for each registered or new client
            selector.select();

            // Selection keys with pending events
            Set<SelectionKey> selectedKeys = selector.selectedKeys();
            Iterator<SelectionKey> selectedKeysIterator = selectedKeys.iterator();

            while (selectedKeysIterator.hasNext()) {
                // Dispatch the handler for the given key
                dispatch(selectedKeysIterator.next());

                // Remove the dispatched key from the collection
                selectedKeysIterator.remove();
            }
        } catch (IOException e) {
            // TODO add handling of the exception
            e.printStackTrace();
        }
    }

    private void dispatch(SelectionKey interestedEvent) {
        if (interestedEvent.attachment() != null) {
            EventHandler handler = (EventHandler) interestedEvent.attachment();
            handler.processEvent();
        }
    }

    private class Acceptor implements EventHandler {

        @Override
        public void processEvent() {
            try {
                SocketChannel clientConnection = serverSocketChannel.accept();
                if (clientConnection != null) {
                    registerChannel(clientConnection);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        /**
         * Save the channel/key association - in a map, perhaps.
         * Required for subsequent/partial reads and writes.
         */
        private void registerChannel(SocketChannel clientChannel) {
            // Notify the injection mechanism of the new connection (so it can activate the read handler)
        }
    }
}
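Starting it is then just a matter of running the reactor loop on its own thread. A possible usage (the class name ReactorMain and the port number are arbitrary choices, not part of the pattern):

    import java.io.IOException;

    public class ReactorMain {
        public static void main(String[] args) throws IOException {
            // Bind the reactor and run its event loop on a dedicated thread
            new Thread(new Reactor(9090), "reactor-loop").start();
        }
    }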

Once a read event has been handled, notify the injection mechanism so that the write handler can be injected.

New instances of the read and write handlers are created by the injection mechanism only once, when a new connection becomes available. The injection mechanism then switches the handlers as needed. The lookup of handlers for each channel is done through a map that is filled in the connection acceptance method `registerChannel()`.

The read and write handlers have their own ByteBuffer instances, and since each socket channel has its own pair of handlers, you can maintain state between partial reads and writes.
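As a rough sketch of the read side (the class name ReadHandler, the 1 KB buffer size and the '@' end-of-message marker from the question are assumptions), a per-channel read handler can accumulate partial reads in its own buffer until the delimiter shows up:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    class ReadHandler implements EventHandler {

        private static final byte DELIMITER = '@';   // end-of-message marker from the question

        private final SocketChannel channel;
        private final ByteBuffer readBuffer = ByteBuffer.allocate(1024);
        private final ByteArrayOutputStream message = new ByteArrayOutputStream();

        ReadHandler(SocketChannel channel) {
            this.channel = channel;
        }

        @Override
        public void processEvent() {
            try {
                int read = channel.read(readBuffer);
                if (read == -1) {                     // client closed the connection
                    channel.close();
                    return;
                }
                readBuffer.flip();
                while (readBuffer.hasRemaining()) {
                    byte b = readBuffer.get();
                    if (b == DELIMITER) {
                        handleCompleteMessage(message.toByteArray());
                        message.reset();              // start collecting the next message
                    } else {
                        message.write(b);             // still a partial message, keep it
                    }
                }
                readBuffer.clear();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        private void handleCompleteMessage(byte[] data) {
            // Application-specific processing goes here
        }
    }

Because each channel gets its own ReadHandler, the bytes collected so far survive between selector wakeups, which is exactly what the single shared read buffer in the question cannot do.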

Two tips to improve performance:

  • Try a first read as soon as the connection is accepted. If you don't read enough data as defined by the header of your custom protocol, then register the channel for interest in read events.

  • Try to write first without registering interest in write events, and only if you don't manage to write all the data, register interest in write.

This will reduce the number of selector wakeups.

Something like this:

SocketChannel socketChannel;

byte[] outData;

final static int MAX_OUTPUT = 1024;

ByteBuffer output = ByteBuffer.allocate(MAX_OUTPUT);

// If the message was not fully written
if (socketChannel.write(output) < messageSize()) {

    // Register interest in a write event
    SelectionKey selectionKey = socketChannel.register(selector, SelectionKey.OP_WRITE);
    selectionKey.attach(writeHandler);
    selector.wakeup();
}
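For the first tip (the optimistic read on accept), a similar sketch could look like this; isCompleteMessage() and readHandler are hypothetical helpers standing in for your protocol check and your injected handler:

    SocketChannel clientChannel = serverSocketChannel.accept();
    clientChannel.configureBlocking(false);

    ByteBuffer buffer = ByteBuffer.allocate(1024);

    // Optimistic first read: the data may already be available
    int read = clientChannel.read(buffer);

    if (!isCompleteMessage(buffer, read)) {
        // Not enough data yet according to the protocol, fall back to the selector
        SelectionKey key = clientChannel.register(selector, SelectionKey.OP_READ);
        key.attach(readHandler);
    }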

Finally, there should be a timed task that checks whether connections are still alive and whether SelectionKeys have been cancelled. If a client breaks the TCP connection, the server will not know about it. As a result, a number of event handlers will stay in memory, bound as attachments to stale connections, which results in a memory leak.

This may be the reason why you read that attachments are not good, but the issue can be dealt with.

To deal with it, here are two simple ways:

  • Enable TCP keep-alive on the accepted sockets

  • Run a periodic task that checks the timestamp of the last activity on a given channel. If it has been idle for too long, the server should terminate the connection; a sketch of such a task follows.
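A minimal sketch of the second option, assuming a lastActivity map that the read/write handlers update on every event and an arbitrary 30-second idle limit:

    // Map updated by the handlers on every read/write: channel -> last activity timestamp
    final Map<SocketChannel, Long> lastActivity = new ConcurrentHashMap<>();
    final long idleLimitMillis = 30_000;   // assumption: 30 s idle limit

    ScheduledExecutorService cleaner = Executors.newSingleThreadScheduledExecutor();
    cleaner.scheduleAtFixedRate(() -> {
        long now = System.currentTimeMillis();
        for (Map.Entry<SocketChannel, Long> entry : lastActivity.entrySet()) {
            if (now - entry.getValue() > idleLimitMillis) {
                try {
                    // Closing the channel cancels its key and frees the attachment
                    entry.getKey().close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                lastActivity.remove(entry.getKey());
            }
        }
    }, idleLimitMillis, idleLimitMillis, TimeUnit.MILLISECONDS);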

