Enhancement Description
There should be a proper way to identify attempts to reuse IDs of event-sourced aggregates that have been marked as deleted.
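For context, a minimal sketch of what "marked as deleted" means here (the aggregate, command and event names are hypothetical, and the referenced command/event classes are assumed to exist elsewhere; this is just a typical Axon event-sourced aggregate):

import static org.axonframework.modelling.command.AggregateLifecycle.apply;
import static org.axonframework.modelling.command.AggregateLifecycle.markDeleted;

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;

public class Order {

    @AggregateIdentifier
    private String orderId;

    protected Order() {
        // no-arg constructor required by Axon
    }

    public Order(String orderId) {
        apply(new OrderCreatedEvent(orderId));
    }

    @CommandHandler
    public void handle(DeleteOrderCommand command) {
        apply(new OrderDeletedEvent(orderId));
    }

    @EventSourcingHandler
    public void on(OrderCreatedEvent event) {
        this.orderId = event.getOrderId();
    }

    @EventSourcingHandler
    public void on(OrderDeletedEvent event) {
        // From this point on, loading the aggregate throws AggregateDeletedException.
        markDeleted();
    }
}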
Current Behaviour
LockingRepository tries to load an aggregate and creates a new one when it is not found.
public abstract class LockingRepository<T, A extends Aggregate<T>>
        extends AbstractRepository<T, LockAwareAggregate<T, A>> {
    ...
    @Override
    protected LockAwareAggregate<T, A> doLoadOrCreate(String aggregateIdentifier,
                                                      Callable<T> factoryMethod) throws Exception {
        Lock lock = spanFactory.createObtainLockSpan(aggregateIdentifier)
                               .runSupplier(() -> lockFactory.obtainLock(aggregateIdentifier));
        try {
            final A aggregate = doLoadWithLock(aggregateIdentifier, null);
            CurrentUnitOfWork.get().onCleanup(u -> lock.release());
            return new LockAwareAggregate<>(aggregate, lock);
        } catch (AggregateNotFoundException ex) {
            final A aggregate = doCreateNewForLock(factoryMethod);
            CurrentUnitOfWork.get().onCleanup(u -> lock.release());
            return new LockAwareAggregate<>(aggregate, lock);
        } catch (Throwable ex) {
            logger.debug("Exception occurred while trying to load/create an aggregate. Releasing lock.", ex);
            lock.release();
            throw ex;
        }
    }
    ...
}
In turn, EventSourcingRepository, when loading a deleted aggregate, throws AggregateDeletedException, which extends AggregateNotFoundException.
public class EventSourcingRepository<T> extends LockingRepository<T, EventSourcedAggregate<T>> {
    ...
    @Override
    protected EventSourcedAggregate<T> doLoadWithLock(String aggregateIdentifier, Long expectedVersion) {
        SnapshotTrigger trigger = snapshotTriggerDefinition.prepareTrigger(aggregateFactory.getAggregateType());
        DomainEventStream eventStream = readEvents(aggregateIdentifier);
        if (!eventStream.hasNext()) {
            throw new AggregateNotFoundException(aggregateIdentifier, "The aggregate was not found in the event store");
        }
        AggregateModel<T> model = aggregateModel();
        EventSourcedAggregate<T> aggregate = spanFactory
                .createInitializeStateSpan(model.type(), aggregateIdentifier)
                .runSupplier(() -> doLoadAggregate(aggregateIdentifier, trigger, eventStream, model));
        if (aggregate.isDeleted()) {
            throw new AggregateDeletedException(aggregateIdentifier);
        }
        return aggregate;
    }
    ...
}
Because LockingRepository catches AggregateNotFoundException (and AggregateDeletedException is a subtype of it), it silently creates a new aggregate under the already used ID, and the problem only surfaces on transaction commit.
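To illustrate, a sketch of how that plays out from the calling side, reusing the hypothetical Order aggregate sketched above (assumed to run inside a command handler with an active unit of work):

import org.axonframework.modelling.command.Aggregate;
import org.axonframework.modelling.command.Repository;

public class ReuseDeletedIdScenario {

    private final Repository<Order> repository;

    public ReuseDeletedIdScenario(Repository<Order> repository) {
        this.repository = repository;
    }

    public void reuseDeletedId(String orderId) throws Exception {
        // orderId belongs to an Order that was previously marked as deleted.
        // doLoadWithLock throws AggregateDeletedException, LockingRepository
        // catches it as an AggregateNotFoundException and falls back to the
        // factory, so a brand-new aggregate is silently started under the old
        // id. The reuse is only detected when the unit of work commits the events.
        Aggregate<Order> aggregate =
                repository.loadOrCreate(orderId, () -> new Order(orderId));
        aggregate.execute(order -> { /* apply the command as if the id were fresh */ });
    }
}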
Wanted Behaviour
LockingRepository should handle AggregateDeletedException explicitly, along these lines:
@Override
protected LockAwareAggregate<T, A> doLoadOrCreate(String aggregateIdentifier,
                                                  Callable<T> factoryMethod) throws Exception {
    Lock lock = spanFactory.createObtainLockSpan(aggregateIdentifier)
                           .runSupplier(() -> lockFactory.obtainLock(aggregateIdentifier));
    try {
        try {
            final A aggregate = doLoadWithLock(aggregateIdentifier, null);
            CurrentUnitOfWork.get().onCleanup(u -> lock.release());
            return new LockAwareAggregate<>(aggregate, lock);
        } catch (AggregateDeletedException ex) {
            throw ex;
        } catch (AggregateNotFoundException ex) {
            final A aggregate = doCreateNewForLock(factoryMethod);
            CurrentUnitOfWork.get().onCleanup(u -> lock.release());
            return new LockAwareAggregate<>(aggregate, lock);
        }
    } catch (Throwable ex) {
        logger.debug("Exception occurred while trying to load/create an aggregate. Releasing lock.", ex);
        lock.release();
        throw ex;
    }
}
Please note the two nested try-catch blocks. With the current single-block implementation, the lock is released on load errors only: if doCreateNewForLock throws inside the AggregateNotFoundException handler, the sibling Throwable handler of the same try does not run and the lock is never released. The nested structure keeps creation failures covered by the outer Throwable handler while still rethrowing AggregateDeletedException.
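Once AggregateDeletedException propagates out of loadOrCreate, calling code could react to the reuse attempt explicitly. A sketch (the command, handler and exception-translation choices are hypothetical):

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.AggregateDeletedException;
import org.axonframework.modelling.command.Repository;

public class CreateOrUpdateOrderHandler {

    private final Repository<Order> repository;

    public CreateOrUpdateOrderHandler(Repository<Order> repository) {
        this.repository = repository;
    }

    @CommandHandler
    public void handle(CreateOrUpdateOrderCommand command) throws Exception {
        try {
            repository.loadOrCreate(command.getOrderId(), () -> new Order(command.getOrderId()))
                      .execute(order -> { /* apply the update to the existing aggregate */ });
        } catch (AggregateDeletedException e) {
            // The id belongs to a deleted aggregate: reject the command instead
            // of silently recreating the aggregate and failing on commit.
            throw new IllegalStateException(
                    "Order id " + command.getOrderId() + " was already used by a deleted aggregate", e);
        }
    }
}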
Possible Workarounds
There is no good way. I had to copy LockingRepository into my codebase and patch it there so that it rethrows AggregateDeletedException, which calling code can then handle.