domain-driven design - Stream Version in Event Sourcing


In event sourcing, we store the individual domain events that have happened to one aggregate instance, known as an event stream. Along with the event stream, we store a stream version.

Should the version be related to each domain event, or should it be related to the transactional changes (aka commands)?


Example:

The current state of our event store is:

aggregate_id | version | event
-------------|---------|------
1            | 1       | e1
1            | 2       | e2

A new command is executed on aggregate 1. The command produces 2 new events, e3 and e4.

Approach 1:

aggregate_id | version | event
-------------|---------|------
1            | 1       | e1
1            | 2       | e2
1            | 3       | e3
1            | 4       | e4

With this approach, optimistic concurrency can be handled by the storage mechanism using a unique index, but replaying events only up to version 3 leaves the aggregate/system in an inconsistent state.
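A minimal sketch of that unique-index guard, using an in-memory SQLite table (the table and column names are illustrative, not from the question):

```python
import sqlite3

# The unique index on (aggregate_id, version) is the concurrency guard.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE events (
    aggregate_id INTEGER,
    version      INTEGER,
    event        TEXT,
    UNIQUE (aggregate_id, version))""")

def append(aggregate_id, expected_version, events):
    """Write events at consecutive versions; fail if another writer got there first."""
    try:
        with db:  # one transaction for the whole batch
            db.executemany(
                "INSERT INTO events VALUES (?, ?, ?)",
                [(aggregate_id, expected_version + i + 1, e)
                 for i, e in enumerate(events)])
        return True
    except sqlite3.IntegrityError:
        return False  # optimistic concurrency conflict

append(1, 0, ["e1", "e2"])          # versions 1, 2
print(append(1, 2, ["e3", "e4"]))   # True: versions 3, 4
print(append(1, 2, ["e3'"]))        # False: version 3 already taken
```

Because each event gets its own version, a stale writer's first insert collides with the unique index and the whole batch rolls back.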

Approach 2:

aggregate_id | version | event
-------------|---------|------
1            | 1       | e1
1            | 2       | e2
1            | 3       | e3
1            | 3       | e4

Replaying events up to version 3 leaves the aggregate/system in a consistent state.

Thanks!

Short answer: #1.

The write of events e3 and e4 should be part of the same transaction.

Notice that the two approaches don't differ in the case you are concerned about. If a read in the first case can miss e4, it can miss it in the second case too. In your use case you are loading the aggregate in order to write; loading the first 3 events tells you that the next version should be #4.

In the case of approach #1, attempting to write version 4 produces a unique constraint conflict; the command handler won't be able to tell whether the problem was a bad load of the data or an optimistic concurrency failure, but in either case the result is no write, and the book of record is still in a consistent state.

In the case of approach #2, attempting to write version 4 doesn't conflict with anything. The write succeeds, and now you have an e5 that is not consistent with e4. Bleah.
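To make the failure mode concrete, here is a hedged sketch of approach #2. Because a version now labels a command rather than an event, you cannot put a unique index on (aggregate_id, version), so two stale writers both succeed (schema and names are illustrative):

```python
import sqlite3

# Approach #2: several events may legitimately share a version,
# so no unique index on (aggregate_id, version) is possible.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (aggregate_id INTEGER, version INTEGER, event TEXT)")

def append(aggregate_id, version, events):
    with db:
        db.executemany("INSERT INTO events VALUES (?, ?, ?)",
                       [(aggregate_id, version, e) for e in events])

append(1, 1, ["e1"])
append(1, 2, ["e2"])
# Two concurrent command handlers both think the next version is 3:
append(1, 3, ["e3", "e4"])
append(1, 3, ["e5"])  # no conflict raised; the stream is now ambiguous
print(db.execute("SELECT COUNT(*) FROM events WHERE version = 3").fetchone()[0])  # 3
```

Nothing in the storage layer rejected the second write, so the history silently contains events from two conflicting commands.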

For references on schemas for event stores, you might consider reviewing:

My preferred schema, assuming you are compelled to roll your own, separates the stream from the events.

stream_id    | sequence | event_id
-------------|----------|---------
1            | 1        | e1
1            | 2        | e2

The stream gives you a filter (stream id) to identify the events you want, and an order (sequence) to ensure that the events are read in the same order they were written. Beyond that, it's kind of an artificial thing, a side effect of the way we happened to choose our aggregate boundaries. Its role should be pretty limited.

The actual event data lives somewhere else.

event_id | data | meta_data | ...
---------|------|-----------|----
e1       | ...  | ...       | ...
e2       | ...  | ...       | ...
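The two tables above can be sketched together; replaying a stream filters by stream id, orders by sequence, and joins to the event bodies (SQLite sketch, names illustrative):

```python
import sqlite3

# Stream table only orders event ids; event bodies live separately.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE streams (stream_id INTEGER, sequence INTEGER, event_id TEXT,
                      UNIQUE (stream_id, sequence));
CREATE TABLE events (event_id TEXT PRIMARY KEY, data TEXT, meta_data TEXT);
""")
db.executemany("INSERT INTO streams VALUES (?, ?, ?)",
               [(1, 1, "e1"), (1, 2, "e2")])
db.executemany("INSERT INTO events VALUES (?, ?, ?)",
               [("e1", '{"x": 1}', "{}"), ("e2", '{"x": 2}', "{}")])

# Replay: filter by stream_id, order by sequence, join for the data.
rows = db.execute("""
    SELECT e.event_id, e.data
    FROM streams s JOIN events e ON e.event_id = s.event_id
    WHERE s.stream_id = ? ORDER BY s.sequence""", (1,)).fetchall()
print(rows)  # [('e1', '{"x": 1}'), ('e2', '{"x": 2}')]
```

Note the unique (stream_id, sequence) constraint still provides the optimistic concurrency guard from approach #1.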

If you need to be able to identify the events associated with a particular command, that's part of the event meta-data, not part of the stream history (see: correlationId, causationId).
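A minimal sketch of such an event envelope; the field names follow the common correlationId/causationId convention, and the envelope shape itself is an assumption for illustration:

```python
import uuid

def envelope(event_type, data, command_id, correlation_id):
    """Wrap event data with meta-data linking it back to its command."""
    return {
        "event_id": str(uuid.uuid4()),
        "type": event_type,
        "data": data,
        "meta_data": {
            "causationId": command_id,       # the command that produced this event
            "correlationId": correlation_id, # the wider business conversation
        },
    }

command_id = str(uuid.uuid4())
e3 = envelope("ItemAdded", {"sku": "A1"}, command_id, correlation_id="order-42")
e4 = envelope("TotalRecalculated", {"total": 10}, command_id, correlation_id="order-42")

# Both events from one command share a causationId; the stream itself
# stays a plain per-event sequence.
print(e3["meta_data"]["causationId"] == e4["meta_data"]["causationId"])  # True
```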

