java - How to read all records from a table (> 10 million records) and serve each record as a chunked response?


I am trying to fetch records from the database using Hibernate's ScrollableResults, following this GitHub project as a reference, and to send each record as part of a chunked response.

Controller:

@Transactional(readOnly = true)
public Result fetchAll() {
    try {
        final Iterator<String> sourceIterator = Summary.fetchAll();
        response().setHeader("Content-Disposition", "attachment; filename=summary.csv");

        Source<String, ?> s = Source.from(() -> sourceIterator);
        return ok().chunked(s.via(Flow.of(String.class).map(i -> ByteString.fromString(i + "\n"))))
                   .as(Http.MimeTypes.TEXT);
    } catch (Exception e) {
        return badRequest(e.getMessage());
    }
}
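To narrow the problem down, one option (not in the original post) is to run the same chunked pipeline over an in-memory iterator of known size: if all 1007 lines arrive, the loss is on the Hibernate side rather than in the streaming layer. A minimal sketch, assuming the same Play 2.5 / Akka Streams setup as the controller above; fetchAllInMemory and the row contents are made up purely for the test:

// Hypothetical diagnostic action (not from the original post): streams 1007
// in-memory rows through the same chunked pipeline, to check whether the
// loss happens in the Akka/Play streaming layer or in the Hibernate cursor.
public Result fetchAllInMemory() {
    final List<String> rows = new ArrayList<>();
    for (int i = 1; i <= 1007; i++) {
        rows.add("row-" + i);
    }
    final Iterator<String> sourceIterator = rows.iterator();

    response().setHeader("Content-Disposition", "attachment; filename=summary-test.csv");
    Source<String, ?> s = Source.from(() -> sourceIterator);
    return ok().chunked(s.via(Flow.of(String.class).map(i -> ByteString.fromString(i + "\n"))))
               .as(Http.MimeTypes.TEXT);
}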

Service:

public static Iterator<String> fetchAll() {
    StatelessSession session = ((Session) JPA.em().getDelegate()).getSessionFactory().openStatelessSession();
    org.hibernate.Query query = session.createQuery("select l.id from Summary l")
            .setFetchSize(Integer.MIN_VALUE).setCacheable(false).setReadOnly(true);
    ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);

    return new models.ScrollableResultIterator<>(results, String.class);
}
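Note that the service opens a StatelessSession and a ScrollableResults but never closes either, so each export holds on to a connection and a JDBC cursor. Below is a minimal sketch of one way to make the cleanup explicit once streaming ends; the CursorHandle wrapper and its names are my own and not part of the original code:

import java.util.Iterator;

import org.hibernate.ScrollableResults;
import org.hibernate.StatelessSession;

// Hypothetical helper (names are illustrative): bundles the cursor with the
// resources that must be released once the chunked response has completed.
public class CursorHandle implements AutoCloseable {
    private final StatelessSession session;
    private final ScrollableResults results;

    public CursorHandle(StatelessSession session, ScrollableResults results) {
        this.session = session;
        this.results = results;
    }

    public Iterator<String> iterator() {
        return new models.ScrollableResultIterator<>(results, String.class);
    }

    @Override
    public void close() {
        results.close();   // releases the underlying JDBC result set/cursor
        session.close();   // returns the connection held by the stateless session
    }
}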

Iterator:

public class ScrollableResultIterator<T> implements Iterator<T> {
    private final ScrollableResults results;
    private final Class<T> type;

    public ScrollableResultIterator(ScrollableResults results, Class<T> type) {
        this.results = results;
        this.type = type;
    }

    @Override
    public boolean hasNext() {
        return results.next();
    }

    @Override
    public T next() {
        return type.cast(results.get(0));
    }
}
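One detail worth flagging in this iterator: hasNext() advances the Hibernate cursor, so it is not idempotent the way the java.util.Iterator contract expects; if the consumer ever calls hasNext() more than once between calls to next(), a row is silently skipped. Whether that explains the missing records here depends on how the stream consumes the iterator, but a variant that buffers one row ahead keeps hasNext() side-effect free. This is my own sketch, untested against the Play/Akka pipeline:

import java.util.Iterator;
import java.util.NoSuchElementException;

import org.hibernate.ScrollableResults;

// Sketch: an iterator whose hasNext() can be called any number of times
// without moving the cursor; it reads one row ahead and hands it out on next().
public class BufferedScrollableResultIterator<T> implements Iterator<T> {
    private final ScrollableResults results;
    private final Class<T> type;
    private T nextRow;          // the row read ahead, if any
    private boolean buffered;   // true while nextRow holds an unconsumed row

    public BufferedScrollableResultIterator(ScrollableResults results, Class<T> type) {
        this.results = results;
        this.type = type;
    }

    @Override
    public boolean hasNext() {
        if (!buffered) {
            if (!results.next()) {
                return false;
            }
            nextRow = type.cast(results.get(0));
            buffered = true;
        }
        return true;
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        buffered = false;
        return nextRow;
    }
}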

For testing purposes there are 1007 records in the table, but whenever I call the endpoint it returns only 503 records.

I enabled Akka debug logging and tried again. The logs show the following line 1007 times:

2016-07-25 19:55:38 +0530 [DEBUG] org.hibernate.loader.Loader in application-akka.actor.default-dispatcher-73 - Result row:

The log confirms that everything is being fetched, but I couldn't work out where the remaining records got lost.

I ran the same query in Workbench, exported the result to a local file, and compared it with the file generated by the endpoint (LHS: records generated by the endpoint, RHS: file exported from Workbench). The first row matched, the second and third didn't, and after that only alternate records matched until the end.

[Screenshot comparing the endpoint output (left) with the Workbench export (right)]

Please correct me if I am doing something wrong, and suggest the correct approach for generating a CSV from a large number of DB records.

For the sake of testing, I removed the CSV conversion logic in the above snippet.

// Controller code
// Prepare a chunked text stream
ExportAsChuncked eac = new ExportAsChuncked();
response().setHeader("Content-Disposition", "attachment; filename=results.csv");
Chunks<String> chunks = new StringChunks() {

    // Called when the stream is ready
    public void onReady(Chunks.Out<String> out) {
        try {
            eac.exportData(scrollableIterator, out);
        } catch (XOException e) {
            Logger.error(ERROR_WHILE_DOWNLOADING_RESPONSE, e);
        }
        out.close();
    }

};
// Serves this stream with a 200 OK
return ok(chunks);


// Export chunk logic
class ExportAsChuncked {

    void exportData(Iterator<String> data, Chunks.Out<String> out) {
        while (data.hasNext()) {
            out.write(data.next());
        }
    }
}
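For reference, the removed CSV conversion step is not shown anywhere above; a conversion hook inside exportData might look roughly like the following sketch, where toCsvRow and its quoting rules are hypothetical and only illustrate where such logic would sit:

// Hypothetical CSV conversion (not from the original post): quotes each value,
// doubles embedded quotes, and writes one CSV line per record.
class ExportAsCsvChunked {

    void exportData(Iterator<String> data, Chunks.Out<String> out) {
        while (data.hasNext()) {
            out.write(toCsvRow(data.next()) + "\n");
        }
    }

    // Minimal quoting: wrap the value in quotes and escape embedded quotes.
    private String toCsvRow(String value) {
        return "\"" + value.replace("\"", "\"\"") + "\"";
    }
}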
