DbEngine2.java

package com.renomad.minum.database;

import com.renomad.minum.state.Context;
import com.renomad.minum.utils.CryptoUtils;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;
import java.util.stream.Stream;

import static com.renomad.minum.database.ChecksumUtility.generateChecksumErrorMessage;
import static com.renomad.minum.database.ChecksumUtility.getMessageDigest;
import static com.renomad.minum.utils.Invariants.mustBeFalse;
import static com.renomad.minum.utils.Invariants.mustBeTrue;
/**
 * A memory-based, disk-persisted database class.
 *
 * <p>
 *     Engine 2 is a database engine that improves on the performance of the first
 *     database provided by Minum. It does this by using different strategies for disk persistence.
 * </p>
 * <p>
 *     The mental model of the previous Minum database is an in-memory data
 *     structure in which every change is eventually written to its own file on disk for
 *     persistence.  Data changes affect only their relevant files.  The benefit of this approach is
 *     extreme simplicity: it requires very little code, relying as it does on the operating system's file capabilities.
 * </p>
 * <p>
 *     However, there are two performance problems with this approach.  The first appears when
 *     data changes arrive at a high rate.  In that situation, the in-memory portion stays up to date,
 *     but the disk portion may lag by minutes.  The second problem is start-up time.  When
 *     the database starts, it reads files into memory.  The database can read about 6,000
 *     files a second in the best case.  If there are a million data items, it would take
 *     about 160 seconds to load them into memory, which is far too long.
 * </p>
 * <p>
 *      The new approach to disk persistence is to append each change to a file.  Append-only file
 *      changes can be very fast.  These append files are eventually consolidated into files
 *      partitioned by their index - data with indexes between 1 and 1000 go into one file, between
 *      1001 and 2000 into another, and so on.
 *  </p>
 *  <p>
 *      Startup is orders of magnitude faster with this approach.  What took the previous database 160 seconds
 *      to load requires only 2 seconds. Writes to disk are also faster. What would have taken
 *      several minutes to write should only take a few seconds now.
 *  </p>
 *  <p>
 *      This new approach uses a different file structure than the previous one. If it is
 *      desired to use the new engine on existing data, it is possible to convert the old
 *      data format to the new.  Construct an instance of the new engine, pointing
 *      at the same name as the previous, and it will convert the data.  If the previous
 *      call looked like this:
 *  </p>
 *  {@code
 *  Db<Photograph> photoDb = context.getDb("photos", Photograph.EMPTY);
 *  }
 *  <p>
 *  Then converting to the new database is just a matter of replacing it with the following
 *  line. <b>Please back up your database before this change.</b>
 *  </p>
 *  <p>
 * {@code
 *     DbEngine2<Photograph> photoDb = context.getDb2("photos", Photograph.EMPTY);
 * }
 *  </p>
 *  <p>
 *     Once the new engine starts up, it will notice the old file structure and convert it
 *     over.  The methods and behaviors are mostly the same between the old and new engines, so the
 *     update should be straightforward.
 * </p>
 * <p>
 *     (By the way, it *is* possible to convert back to the old file structure,
 *     by starting the database the old way again.  Just be aware that each time the
 *     files are converted, it takes longer than normal to start the database.)
 * </p>
 * <p>
 *     However, note that using the old database is still fine in many cases,
 *     particularly for prototypes or systems which do not contain large amounts of data. If
 *     your system is working fine, there is no need to change things.
 * </p>
 *
 * @param <T> the type of data we'll be persisting (must extend from {@link DbData})
 */
public final class DbEngine2<T extends DbData<?>> extends AbstractDb<T> {
93
94
    private final ReentrantLock loadDataLock;
95
    private final ReentrantLock consolidateLock;
96
    private final ReentrantLock writeLock;
97
    int maxLinesPerAppendFile;
98
    boolean hasLoadedData;
99
    final DatabaseAppender databaseAppender;
100
    final DatabaseConsolidator databaseConsolidator;
101
102
    /**
103
     * Here we track the number of appends we have made.  Once it hits
104
     * a certain number, we will kick off a consolidation in a thread
105
     */
106
    final AtomicInteger appendCount = new AtomicInteger(0);
107
108
    /**
109
     * Used to determine whether to kick off consolidation.  If it is
110
     * already running, we don't want to kick it off again. This would
111
     * only affect us if we are updating the database very fast.
112
     */
113
    boolean consolidationIsRunning;
114
    /**
     * Constructs an in-memory disk-persisted database.
     * Loading of data from disk happens at the first invocation of any command
     * changing or requesting data, such as {@link #write(DbData)}, {@link #delete(DbData)},
     * or {@link #values()}.  See {@link #loadData()} for details.
     * @param dbDirectory this uniquely names your database, and also sets the directory
     *                    name for this data.  The expected use case is to name this after
     *                    the data in question.  For example, "users", or "accounts".
     * @param context used to provide important state data to several components
     * @param instance an instance of the {@link DbData} object relevant for use in this database. Note
     *                 that each database (that is, each instance of this class) focuses on just one
     *                 data type, which must be an implementation of {@link DbData}.
     */
    public DbEngine2(Path dbDirectory, Context context, T instance) {
        super(dbDirectory, context, instance);

        this.databaseConsolidator = new DatabaseConsolidator(dbDirectory, context);
        try {
            this.databaseAppender = new DatabaseAppender(dbDirectory, context);
        } catch (IOException e) {
            throw new DbException("Error while initializing DatabaseAppender in DbEngine2", e);
        }
        this.loadDataLock = new ReentrantLock();
        this.consolidateLock = new ReentrantLock();
        this.writeLock = new ReentrantLock();
        this.maxLinesPerAppendFile = context.getConstants().maxAppendCount;
    }

    /**
     * Write data to the database.  Use an index of 0 to store new data, and a positive
     * non-zero value to update data.
     * <p><em>
     *     Example of adding new data to the database:
     * </em></p>
     * {@snippet :
     *          final var newSalt = StringUtils.generateSecureRandomString(10);
     *          final var hashedPassword = CryptoUtils.createPasswordHash(newPassword, newSalt);
     *          final var newUser = new User(0L, newUsername, hashedPassword, newSalt);
     *          userDb.write(newUser);
     * }
     * <p><em>
     *     Example of updating data:
     * </em></p>
     * {@snippet :
     *         // write the updated salted password to the database
     *         final var updatedUser = new User(
     *                 user().getIndex(),
     *                 user().getUsername(),
     *                 hashedPassword,
     *                 newSalt);
     *         userDb.write(updatedUser);
     * }
     *
     * @param newData the data we are writing
     * @return the data with its new index assigned
     * @throws DbException if there is a failure to write
     */
    @Override
    public T write(T newData) {
        if (newData.getIndex() < 0) throw new DbException("Negative indexes are disallowed");
        // load data if needed
        if (!hasLoadedData) loadData();

        writeLock.lock();
        try {
            boolean newElementCreated = processDataIndex(newData);
            writeToDisk(newData);
            writeToMemory(newData, newElementCreated);
        } catch (IOException ex) {
            throw new DbException("failed to write data " + newData, ex);
        } finally {
            writeLock.unlock();
        }

        // returning the data at this point is the most convenient
        // way users will have access to the new index of the data.
        return newData;
    }

    private void writeToDisk(T newData) throws IOException {
        logger.logTrace(() -> String.format("writing data to disk: %s", newData));
        String serializedData = newData.serialize();
        mustBeFalse(serializedData == null || serializedData.isBlank(),
                "the serialized form of data must not be blank. " +
                        "Is the serialization code written properly? Our datatype: " + emptyInstance);
        databaseAppender.appendToDatabase(DatabaseChangeAction.UPDATE, serializedData);
        appendCount.incrementAndGet();
        consolidateIfNecessary();
    }

    /**
     * If the append count is large enough, we will call the
     * consolidation method on the DatabaseConsolidator and
     * reset the append count to 0.
     * @return true if consolidation was triggered, false otherwise
     */
    boolean consolidateIfNecessary() {
        if (appendCount.get() > maxLinesPerAppendFile && !consolidationIsRunning) {
            consolidateLock.lock(); // block threads here if multiple are trying to get in - only one gets in at a time
            try {
                consolidateInnerCode();
            } finally {
                consolidateLock.unlock();
            }
            return true;
        }
        return false;
    }

    /**
     * This code is only called in production from {@link #consolidateIfNecessary()},
     * and is necessarily protected by mutex locks.  However, it is provided
     * here as its own method for ease of testing.
     */
    void consolidateInnerCode() {
        if (appendCount.get() > maxLinesPerAppendFile && !consolidationIsRunning) {
            context.getExecutorService().submit(() -> {
                try {
                    consolidationIsRunning = true;
                    databaseConsolidator.consolidate();
                    consolidationIsRunning = false;
                } catch (Exception e) {
                    logger.logAsyncError(() -> "Error during consolidation: " + e);
                }
            });
            appendCount.set(0);
        }
    }

    /**
     * Delete data
     * <p><em>Example:</em></p>
     * {@snippet :
     *      userDb.delete(user);
     * }
     * @param dataToDelete the data we are serializing and deleting
     * @throws DbException if there is a failure to delete
     */
    @Override
    public void delete(T dataToDelete) {
        // load data if needed
        if (!hasLoadedData) loadData();

        writeLock.lock();
        try {
            deleteFromDisk(dataToDelete);
            deleteFromMemory(dataToDelete);
        } catch (IOException ex) {
            throw new DbException("failed to delete data " + dataToDelete, ex);
        } finally {
            writeLock.unlock();
        }
    }

    private void deleteFromDisk(T dataToDelete) throws IOException {
        logger.logTrace(() -> String.format("deleting data from disk: %s", dataToDelete));
        databaseAppender.appendToDatabase(DatabaseChangeAction.DELETE, dataToDelete.serialize());
        appendCount.incrementAndGet();
        consolidateIfNecessary();
    }

    /**
     * Loads this database's data from disk into memory.  This happens lazily,
     * at the first invocation of a command that requires data (like {@link #write(DbData)},
     * {@link #delete(DbData)}, or {@link #values()}).  If an old-format database
     * is found on disk, it is converted to the new format first.
     */
    private void loadDataFromDisk() throws IOException {
        logger.logDebug(() -> "Loading data from disk. Db Engine2. Directory: " + dbDirectory);

        // if we find the "index.ddps" file, it means we are looking at an old
        // version of the database.  Update it to the new version, and then afterwards
        // remove the old version files.
        if (Files.exists(dbDirectory.resolve("index.ddps"))) {
            new DbFileConverter(context, dbDirectory).convertClassicFolderStructureToDbEngine2Form();
        }

        fileUtils.makeDirectory(dbDirectory);
        // if there are any remnant items in the current append-only file, move them
        // to a new file
        databaseAppender.saveOffCurrentDataToReadyFolder();
        databaseAppender.flush();

        // consolidate whatever files still exist in the append logs
        databaseConsolidator.consolidate();

        // load the data into memory
        walkAndLoad(dbDirectory);

        if (data.isEmpty()) {
            this.index = new AtomicLong(1);
        } else {
            var initialIndex = Collections.max(data.keySet()) + 1L;
            this.index = new AtomicLong(initialIndex);
        }
    }

    /**
     * Loops through each line of data in the consolidated data files,
     * converting each to its strongly-typed form and adding it to the database
     */
    void walkAndLoad(Path dbDirectory) {
        List<String> consolidatedFiles = new ArrayList<>(
                Arrays.stream(Objects.requireNonNull(
                        dbDirectory.resolve("consolidated_data").toFile().list()))
                        .filter(x -> !x.contains("checksum"))
                        .toList());

        // if there aren't any files, bail out
        if (consolidatedFiles.isEmpty()) return;

        // sort
        consolidatedFiles.sort(Comparator.comparingLong(DbEngine2::parseConsolidatedFileName));

        for (String fileName : consolidatedFiles) {
            logger.logDebug(() -> "Processing database file: " + fileName);
            Path consolidatedDataFile = dbDirectory.resolve("consolidated_data").resolve(fileName);
            Path checksumFilename = consolidatedDataFile.resolveSibling(consolidatedDataFile.getFileName() + ".checksum");

            // By using a lazy stream, we are able to read each item from the file into
            // memory without needing to read the whole file contents into memory at once,
            // thus avoiding requiring a great amount of memory.
            // Build a hash for this data as we read.
            MessageDigest messageDigestSha256 = getMessageDigest("SHA-256");

            try (Stream<String> fileStream = Files.lines(consolidatedDataFile, StandardCharsets.US_ASCII)) {
                fileStream.forEach(line -> {
                    messageDigestSha256.update(line.getBytes(StandardCharsets.US_ASCII));
                    readAndDeserialize(line, fileName);
                });

                // check against the checksum for what we read, if applicable
                if (Files.exists(checksumFilename)) {
                    String checksum = Files.readString(checksumFilename);
                    byte[] hashBytes = messageDigestSha256.digest();
                    String hashString = CryptoUtils.bytesToHex(hashBytes);
                    if (!hashString.equals(checksum)) {
                        String errorMessage = generateChecksumErrorMessage(consolidatedDataFile);
                        throw new DbChecksumException(errorMessage);
                    }
                }

            } catch (Exception e) {
                throw new DbException(e);
            }
        }
    }

    /**
     * Given a file name like 1_to_1000 or 1001_to_2000, extracts the
     * beginning index (i.e. 1, or 1001).
     */
    static long parseConsolidatedFileName(String file) {
        int index = file.indexOf("_to_");
        if (index == -1) {
            throw new DbException("Consolidated filename was invalid: " + file);
        }
        return Long.parseLong(file, 0, index, 10);
    }

    /**
     * Converts a serialized string to a strongly-typed data structure
     * and adds it to the database.
     */
    void readAndDeserialize(String lineOfData, String fileName) {
        try {
            @SuppressWarnings("unchecked")
            T deserializedData = (T) emptyInstance.deserialize(lineOfData);
            mustBeTrue(deserializedData != null, "deserialization of " + emptyInstance +
                    " resulted in a null value. Was the serialization method implemented properly?");

            // put the data into the in-memory data structure
            data.put(deserializedData.getIndex(), deserializedData);
            addToIndexes(deserializedData);

        } catch (Exception e) {
            throw new DbException("Failed to deserialize " + lineOfData + " with data (\"" + fileName + "\"). Caused by: " + e);
        }
    }

    /**
     * This is what loads the data from disk the
     * first time someone needs it.  Because it is
     * locked, only one thread can enter at
     * a time.  The first one in will load the data,
     * and the second will encounter a branch which skips loading.
     */
    @Override
    public AbstractDb<T> loadData() {
        loadDataLock.lock(); // block threads here if multiple are trying to get in - only one gets in at a time
        try {
            if (!hasLoadedData) {
                loadDataFromDisk();
            }
            hasLoadedData = true;
            return this;
        } catch (Exception ex) {
            throw new DbException("Failed to load data from disk.", ex);
        } finally {
            loadDataLock.unlock();
        }
    }

    /**
     * This method provides read capability for the values of a database.
     * <br>
     * The returned collection is a read-only view over the data, through {@link Collections#unmodifiableCollection(Collection)}
     *
     * <p><em>Example:</em></p>
     * {@snippet :
     * boolean doesUserAlreadyExist(String username) {
     *     return userDb.values().stream().anyMatch(x -> x.getUsername().equals(username));
     * }
     * }
     */
    @Override
    public Collection<T> values() {
        // load data if needed
        if (!hasLoadedData) loadData();

        return Collections.unmodifiableCollection(data.values());
    }

    @Override
    public AbstractDb<T> registerIndex(String indexName, Function<T, String> keyObtainingFunction) {
        if (hasLoadedData) {
            throw new DbException("This method must be run before the database loads data from disk.  Typically, " +
                    "it should be run immediately after the database is created.  See this method's documentation");
        }
        return super.registerIndex(indexName, keyObtainingFunction);
    }

    @Override
    public Collection<T> getIndexedData(String indexName, String key) {
        // load data if needed
        if (!hasLoadedData) loadData();
        return super.getIndexedData(indexName, key);
    }

    /**
     * This command calls {@link DatabaseAppender#flush()}, which will
     * force any in-memory-buffered data to be written to disk.  This is
     * not commonly necessary to call for business purposes, but tests
     * may require it if you want to be absolutely sure the data is written
     * to disk at a particular moment.
     */
    public void flush() {
        this.databaseAppender.flush();
    }

    /**
     * This is here to match the contract of {@link Db},
     * but all it does is tell the interior file writer
     * to write its data to disk.
     */
    @Override
    public void stop() {
        flush();
    }

    /**
     * No real difference from {@link #stop()}, but here
     * to keep a similar contract to {@link Db}
     */
    @Override
    public void stop(int count, int sleepTime) {
        flush();
    }
}
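As a side note on the consolidated-file layout described in the class Javadoc (indexes 1 through 1000 in one file, 1001 through 2000 in the next), the naming scheme and the parsing done by `parseConsolidatedFileName` can be sketched in isolation. The `PartitionNaming` class below is a hypothetical, stand-alone illustration — it is not part of Minum — showing how an index could map to a `1_to_1000`-style file name, and how the start index is read back out using the same `Long.parseLong(file, 0, index, 10)` call as above.

```java
// Illustrative sketch only: mirrors the partition naming described in the
// class Javadoc.  PartitionNaming is a hypothetical helper, not Minum code.
public class PartitionNaming {

    static final long PARTITION_SIZE = 1000L;

    /** Returns the consolidated file name that would hold the given index. */
    static String fileNameForIndex(long index) {
        // indexes 1..1000 -> "1_to_1000", 1001..2000 -> "1001_to_2000", etc.
        long start = ((index - 1) / PARTITION_SIZE) * PARTITION_SIZE + 1;
        long end = start + PARTITION_SIZE - 1;
        return start + "_to_" + end;
    }

    /** Extracts the starting index, as parseConsolidatedFileName does. */
    static long parseStartIndex(String file) {
        int idx = file.indexOf("_to_");
        if (idx == -1) {
            throw new IllegalArgumentException("Invalid consolidated filename: " + file);
        }
        // parse only the digits before "_to_", base 10
        return Long.parseLong(file, 0, idx, 10);
    }

    public static void main(String[] args) {
        System.out.println(fileNameForIndex(1));              // 1_to_1000
        System.out.println(fileNameForIndex(1001));           // 1001_to_2000
        System.out.println(parseStartIndex("1001_to_2000"));  // 1001
    }
}
```

Sorting file names by this parsed start index (rather than lexicographically) is what lets `walkAndLoad` replay the consolidated files in index order.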

negated conditional → KILLED

326

1.1
Location : walkAndLoad
Killed by : com.renomad.minum.database.DbEngine2Tests.test_Initialize_readAndDeserialize_NegativeCase(com.renomad.minum.database.DbEngine2Tests)
negated conditional → KILLED

329

1.1
Location : walkAndLoad
Killed by : none
removed call to java/util/List::sort → TIMED_OUT

345

1.1
Location : walkAndLoad
Killed by : com.renomad.minum.database.DbEngine2Tests.test_LoadingData_MultipleThreads(com.renomad.minum.database.DbEngine2Tests)
removed call to java/util/stream/Stream::forEach → KILLED

346

1.1
Location : lambda$walkAndLoad$7
Killed by : none
removed call to java/security/MessageDigest::update → TIMED_OUT

347

1.1
Location : lambda$walkAndLoad$7
Killed by : com.renomad.minum.database.DbEngine2Tests.test_LoadingData_MultipleThreads(com.renomad.minum.database.DbEngine2Tests)
removed call to com/renomad/minum/database/DbEngine2::readAndDeserialize → KILLED

351

1.1
Location : walkAndLoad
Killed by : com.renomad.minum.database.DbEngine2Tests.testChecksums_ChecksumsMissingAtLoad(com.renomad.minum.database.DbEngine2Tests)
negated conditional → KILLED

355

1.1
Location : walkAndLoad
Killed by : com.renomad.minum.database.DbEngine2Tests.test_LoadingData_MultipleThreads(com.renomad.minum.database.DbEngine2Tests)
negated conditional → KILLED

373

1.1
Location : parseConsolidatedFileName
Killed by : com.renomad.minum.database.DbEngine2Tests.test_parseConsolidatedFileName_NegativeCase(com.renomad.minum.database.DbEngine2Tests)
negated conditional → KILLED

376

1.1
Location : parseConsolidatedFileName
Killed by : none
replaced long return with 0 for com/renomad/minum/database/DbEngine2::parseConsolidatedFileName → TIMED_OUT

392

1.1
Location : readAndDeserialize
Killed by : com.renomad.minum.database.DbEngine2Tests.test_firstActionIsRequestingDataByIndex(com.renomad.minum.database.DbEngine2Tests)
removed call to com/renomad/minum/database/DbEngine2::addToIndexes → KILLED

409

1.1
Location : loadData
Killed by : com.renomad.minum.database.DbEngine2Tests.testWriteDeserializationComplaints(com.renomad.minum.database.DbEngine2Tests)
removed call to java/util/concurrent/locks/ReentrantLock::lock → KILLED

411

1.1
Location : loadData
Killed by : com.renomad.minum.database.DbEngine2Tests.testWriteDeserializationComplaints(com.renomad.minum.database.DbEngine2Tests)
negated conditional → KILLED

412

1.1
Location : loadData
Killed by : com.renomad.minum.database.DbEngine2Tests.testWriteDeserializationComplaints(com.renomad.minum.database.DbEngine2Tests)
removed call to com/renomad/minum/database/DbEngine2::loadDataFromDisk → KILLED

415

1.1
Location : loadData
Killed by : com.renomad.minum.database.DbEngine2Tests.testChecksums_ChecksumsMissingAtLoad(com.renomad.minum.database.DbEngine2Tests)
replaced return value with null for com/renomad/minum/database/DbEngine2::loadData → KILLED

419

1.1
Location : loadData
Killed by : none
removed call to java/util/concurrent/locks/ReentrantLock::unlock → TIMED_OUT

438

1.1
Location : values
Killed by : com.renomad.minum.web.FullSystemTests.testFullSystem_EdgeCase_InstantlyClosed(com.renomad.minum.web.FullSystemTests)
negated conditional → KILLED

440

1.1
Location : values
Killed by : com.renomad.minum.database.DbEngine2Tests.testIndexSpeedDifference(com.renomad.minum.database.DbEngine2Tests)
replaced return value with Collections.emptyList for com/renomad/minum/database/DbEngine2::values → KILLED

445

1.1
Location : registerIndex
Killed by : com.renomad.minum.database.DbEngine2Tests.testIndex_NegativeCase_PartitioningAlgorithmNull(com.renomad.minum.database.DbEngine2Tests)
negated conditional → KILLED

449

1.1
Location : registerIndex
Killed by : none
replaced return value with null for com/renomad/minum/database/DbEngine2::registerIndex → TIMED_OUT

456

1.1
Location : getIndexedData
Killed by : none
negated conditional → TIMED_OUT

457

1.1
Location : getIndexedData
Killed by : com.renomad.minum.database.DbEngine2Tests.testIndex_Update(com.renomad.minum.database.DbEngine2Tests)
replaced return value with Collections.emptyList for com/renomad/minum/database/DbEngine2::getIndexedData → KILLED

468

1.1
Location : flush
Killed by : none
removed call to com/renomad/minum/database/DatabaseAppender::flush → TIMED_OUT

478

1.1
Location : stop
Killed by : com.renomad.minum.database.DbEngine2Tests.test_LoadingData_MultipleThreads(com.renomad.minum.database.DbEngine2Tests)
removed call to com/renomad/minum/database/DbEngine2::flush → KILLED

487

1.1
Location : stop
Killed by : none
removed call to com/renomad/minum/database/DbEngine2::flush → TIMED_OUT

Report generated by PIT 1.17.0
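
The "negated conditional" mutants that dominate the listing above flip a single branch condition; PIT credits a test with the kill when its assertions distinguish the original code from the mutant. A minimal sketch of the idea, using hypothetical method names rather than Minum source:

```java
// Hypothetical illustration of a "negated conditional" mutant, the most
// common mutation in the listing above. Not Minum source code.
public class NegatedConditionalDemo {

    // Original logic: consolidation is triggered once the pending
    // count reaches the threshold.
    static boolean shouldConsolidate(int pendingCount, int threshold) {
        return pendingCount >= threshold;
    }

    // What PIT generates for "negated conditional": the branch is flipped.
    static boolean shouldConsolidateMutant(int pendingCount, int threshold) {
        return !(pendingCount >= threshold);
    }

    public static void main(String[] args) {
        // A test asserting on this input gets different answers from the
        // original and the mutant, so PIT reports the mutant as KILLED.
        System.out.println(shouldConsolidate(10, 5));       // original: true
        System.out.println(shouldConsolidateMutant(10, 5)); // mutant:   false
    }
}
```

The TIMED_OUT entries (for example, the removed call to ReentrantLock::unlock in delete) arise when no assertion fails but the mutant makes a test run hang past its time limit; PIT also treats that as a detected mutant.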