DbEngine2.java

package com.renomad.minum.database;

import com.renomad.minum.state.Context;
import com.renomad.minum.utils.CryptoUtils;
import com.renomad.minum.utils.FileUtils;
import com.renomad.minum.utils.IFileUtils;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.text.ParseException;
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;
import java.util.stream.Stream;

import static com.renomad.minum.database.ChecksumUtility.generateChecksumErrorMessage;
import static com.renomad.minum.database.ChecksumUtility.getMessageDigest;
import static com.renomad.minum.utils.Invariants.mustBeFalse;
import static com.renomad.minum.utils.Invariants.mustBeTrue;
/**
 * An in-memory, disk-persisted database class.
 *
 * <p>
 *     Engine 2 is a database engine that improves on the performance of the first
 *     database provided by Minum. It does this by using different strategies for disk persistence.
 * </p>
 * <p>
 *     The mental model of the previous Minum database is an in-memory data
 *     structure in which every change is eventually written to its own file on disk for
 *     persistence.  Data changes affect just their relevant files.  The benefit of this approach is
 *     extreme simplicity: it requires very little code, relying as it does on the operating system's file capabilities.
 * </p>
 * <p>
 *     However, there are two performance problems with this approach.  The first appears when
 *     data changes arrive at a high rate.  In that situation, the in-memory portion stays up to date,
 *     but the disk portion may lag by minutes.  The second problem is start-up time.  When
 *     the database starts, it reads files into memory.  The database can read about 6,000
 *     files a second in the best case.  If there are a million data items, it would take
 *     about 160 seconds to load them into memory, which is far too long.
 * </p>
 * <p>
 *      The new approach to disk persistence is to append each change to a file.  Append-only file
 *      changes can be very fast.  These append files are eventually consolidated into files
 *      partitioned by their index - data with indexes between 1 and 1000 go into one file, between
 *      1001 and 2000 into another, and so on.
 *  </p>
 *  <p>
 *      Startup is orders of magnitude faster with this approach.  What took the previous database 160 seconds
 *      to load requires only 2 seconds. Writes to disk are also faster. What would have taken
 *      several minutes to write should only take a few seconds now.
 *  </p>
 *  <p>
 *      This new approach uses a different file structure than the previous engine. If it is
 *      desired to use the new engine on existing data, it is possible to convert the old
 *      data format to the new.  Construct an instance of the new engine, pointing
 *      at the same name as the previous, and it will convert the data.  If the previous
 *      call looked like this:
 *  </p>
 *  {@code
 *  Db<Photograph> photoDb = context.getDb("photos", Photograph.EMPTY);
 *  }
 *  <p>
 *  Then converting to the new database is just a matter of replacing it with the following
 *  line. <b>Please back up your database before this change.</b>
 *  </p>
 *  <p>
 * {@code
 *     DbEngine2<Photograph> photoDb = context.getDb2("photos", Photograph.EMPTY);
 * }
 *  </p>
 *  <p>
 *     Once the new engine starts up, it will notice the old file structure and convert it
 *     over.  The methods and behaviors are mostly the same between the old and new engines, so the
 *     update should be straightforward.
 * </p>
 * <p>
 *     (By the way, it *is* possible to convert back to the old file structure,
 *     by starting the database the old way again.  Just be aware that each time the
 *     files are converted, it takes longer than normal to start the database.)
 * </p>
 * <p>
 *     Note, however, that using the old database is still fine in many cases,
 *     particularly for prototypes or systems which do not contain large amounts of data. If
 *     your system is working fine, there is no need to change things.
 * </p>
 *
 * @param <T> the type of data we'll be persisting (must extend from {@link DbData})
 */
public final class DbEngine2<T extends DbData<?>> extends AbstractDb<T> {

    private final ReentrantLock loadDataLock;
    private final ReentrantLock consolidateLock;
    private final ReentrantLock writeLock;
    int maxLinesPerAppendFile;
    boolean hasLoadedData;
    final DatabaseAppender databaseAppender;
    final DatabaseConsolidator databaseConsolidator;

    /**
     * Here we track the number of appends we have made.  Once it hits
     * a certain number, we will kick off a consolidation in a thread
     */
    final AtomicInteger appendCount = new AtomicInteger(0);

    /**
     * Used to determine whether to kick off consolidation.  If it is
     * already running, we don't want to kick it off again. This would
     * only affect us if we are updating the database very fast.
     */
    boolean consolidationIsRunning;
    /**
     * Constructs an in-memory disk-persisted database.
     * Loading of data from disk happens at the first invocation of any command
     * changing or requesting data, such as {@link #write(DbData)}, {@link #delete(DbData)},
     * or {@link #values()}.  See the private method loadData() for details.
     * @param dbDirectory this uniquely names your database, and also sets the directory
     *                    name for this data.  The expected use case is to name this after
     *                    the data in question.  For example, "users", or "accounts".
     * @param context used to provide important state data to several components
     * @param instance an instance of the {@link DbData} object relevant for use in this database. Note
     *                 that each database (that is, each instance of this class) focuses on just one
     *                 data type, which must be an implementation of {@link DbData}.
     */
    public DbEngine2(Path dbDirectory, Context context, T instance) {
        this(dbDirectory, context, instance, new FileUtils(context.getLogger(), context.getConstants()));
    }

    DbEngine2(Path dbDirectory, Context context, T instance, IFileUtils fileUtils) {
        super(dbDirectory, context, instance, fileUtils);

        try {
            this.databaseConsolidator = new DatabaseConsolidator(dbDirectory, context, fileUtils);
            this.databaseAppender = new DatabaseAppender(dbDirectory, context, fileUtils);
        } catch (IOException ex) {
            throw new DbException("Error in DbEngine2 constructor", ex);
        }
        this.loadDataLock = new ReentrantLock();
        this.consolidateLock = new ReentrantLock();
        this.writeLock = new ReentrantLock();
        this.maxLinesPerAppendFile = context.getConstants().maxAppendCount;
    }
    /**
     * Write data to the database.  Use an index of 0 to store new data, and a positive
     * non-zero value to update data.
     * <p><em>
     *     Example of adding new data to the database:
     * </em></p>
     * {@snippet :
     *          final var newSalt = StringUtils.generateSecureRandomString(10);
     *          final var hashedPassword = CryptoUtils.createPasswordHash(newPassword, newSalt);
     *          final var newUser = new User(0L, newUsername, hashedPassword, newSalt);
     *          userDb.write(newUser);
     * }
     * <p><em>
     *     Example of updating data:
     * </em></p>
     * {@snippet :
     *         // write the updated salted password to the database
     *         final var updatedUser = new User(
     *                 user().getIndex(),
     *                 user().getUsername(),
     *                 hashedPassword,
     *                 newSalt);
     *         userDb.write(updatedUser);
     * }
     *
     * @param newData the data we are writing
     * @return the data with its new index assigned.
     * @throws DbException if there is a failure to write
     */
    @Override
    public T write(T newData) {
        if (newData.getIndex() < 0) throw new DbException("Negative indexes are disallowed");
        // load data if needed
        if (!hasLoadedData) loadData();

        writeLock.lock();
        try {
            boolean newElementCreated = processDataIndex(newData);
            writeToDisk(newData);
            writeToMemory(newData, newElementCreated);
        } catch (Exception ex) {
            throw new DbException("failed to write data " + newData, ex);
        } finally {
            writeLock.unlock();
        }

        // returning the data at this point is the most convenient
        // way users will have access to the new index of the data.
        return newData;
    }
    private void writeToDisk(T newData) throws IOException {
        logger.logTrace(() -> String.format("writing data to disk: %s", newData));
        String serializedData = newData.serialize();
        mustBeFalse(serializedData == null || serializedData.isBlank(),
                "the serialized form of data must not be blank. " +
                        "Is the serialization code written properly? Our datatype: " + emptyInstance);
        databaseAppender.appendToDatabase(DatabaseChangeAction.UPDATE, serializedData);
        appendCount.incrementAndGet();
        consolidateIfNecessary();
    }
    /**
     * If the append count is large enough, we will call the
     * consolidation method on the DatabaseConsolidator and
     * reset the append count to 0.
     * @return true if consolidation was kicked off, false otherwise
     */
    boolean consolidateIfNecessary() {
        if (appendCount.get() > maxLinesPerAppendFile && !consolidationIsRunning) {
            consolidateLock.lock(); // block threads here if multiple are trying to get in - only one gets in at a time
            try {
                consolidateInnerCode();
            } finally {
                consolidateLock.unlock();
            }
            return true;
        }
        return false;
    }
    /**
     * This code is only called in production from {@link #consolidateIfNecessary()},
     * and is necessarily protected by mutex locks.  However, it is provided
     * here as its own method for ease of testing.
     */
    void consolidateInnerCode() {
        if (appendCount.get() > maxLinesPerAppendFile && !consolidationIsRunning) {
            context.getExecutorService().submit(() -> {
                try {
                    consolidationIsRunning = true;
                    databaseConsolidator.consolidate();
                } catch (Exception e) {
                    logger.logAsyncError(() -> "Error during consolidation: " + e);
                } finally {
                    // reset the flag even when consolidation fails, so that
                    // future consolidations are not blocked forever
                    consolidationIsRunning = false;
                }
            });
            appendCount.set(0);
        }
    }
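    // The consolidation trigger above can be summarized as follows.  This is an
    // illustrative sketch based only on the code in this class, not part of the
    // original source:
    //
    //   write()/delete()
    //     -> databaseAppender.appendToDatabase(...)   appends one line to the current append file
    //     -> appendCount.incrementAndGet()
    //     -> consolidateIfNecessary()                 once appendCount exceeds maxLinesPerAppendFile
    //                                                 and no consolidation is already running
    //          -> consolidateInnerCode()              submits databaseConsolidator.consolidate()
    //                                                 to the executor and resets appendCount to 0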
    /**
     * Delete data
     * <p><em>Example:</em></p>
     * {@snippet :
     *      userDb.delete(user);
     * }
     * @param dataToDelete the data we are serializing and deleting
     * @throws DbException if there is a failure to delete
     */
    @Override
    public void delete(T dataToDelete) {
        // load data if needed
        if (!hasLoadedData) loadData();

        writeLock.lock();
        try {
            deleteFromDisk(dataToDelete);
            deleteFromMemory(dataToDelete);
        } catch (Exception ex) {
            throw new DbException("failed to delete data " + dataToDelete, ex);
        } finally {
            writeLock.unlock();
        }
    }
    private void deleteFromDisk(T dataToDelete) throws IOException {
        logger.logTrace(() -> String.format("deleting data from disk: %s", dataToDelete));
        databaseAppender.appendToDatabase(DatabaseChangeAction.DELETE, dataToDelete.serialize());
        appendCount.incrementAndGet();
        consolidateIfNecessary();
    }
    private void loadDataFromDisk() throws IOException, ParseException {
        logger.logDebug(() -> "Loading data from disk. Db Engine2. Directory: " + dbDirectory);

        // if we find the "index.ddps" file, it means we are looking at an old
        // version of the database.  Update it to the new version, and then afterwards
        // remove the old version files.
        if (fileUtils.exists(dbDirectory.resolve("index.ddps"))) {
            new DbFileConverter(context, dbDirectory, fileUtils).convertClassicFolderStructureToDbEngine2Form();
        }

        // if there are any remaining items in the current append-only file, move them
        // to a new file
        databaseAppender.saveOffCurrentDataToReadyFolder();
        databaseAppender.flush();

        // consolidate whatever files still exist in the append logs
        databaseConsolidator.consolidate();

        // load the data into memory
        walkAndLoad(dbDirectory);

        if (data.isEmpty()) {
            this.index = new AtomicLong(1);
        } else {
            var initialIndex = Collections.max(data.keySet()) + 1L;
            this.index = new AtomicLong(initialIndex);
        }
    }
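    // For reference, a sketch of the on-disk layout implied by the loading code
    // in this class.  The consolidated file and checksum names come from this
    // file; the append-log details are managed by DatabaseAppender and are an
    // assumption, not taken from this source:
    //
    //   dbDirectory/
    //     consolidated_data/
    //       1_to_1000             one serialized item per line, partitioned by index
    //       1_to_1000.checksum    SHA-256 hex digest of that file's lines
    //       1001_to_2000
    //       ...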
    /**
     * Loops through each line of data in the consolidated data files,
     * converting each to its strongly-typed form and adding it to the database
     */
    void walkAndLoad(Path dbDirectory) {
        List<String> consolidatedFiles = new ArrayList<>(
                Arrays.stream(Objects.requireNonNull(
                        dbDirectory.resolve("consolidated_data").toFile().list()))
                        .filter(x -> !x.contains("checksum"))
                        .toList());

        // if there aren't any files, bail out
        if (consolidatedFiles.isEmpty()) return;

        // sort
        consolidatedFiles.sort(Comparator.comparingLong(DbEngine2::parseConsolidatedFileName));

        for (String fileName : consolidatedFiles) {
            logger.logDebug(() -> "Processing database file: " + fileName);
            Path consolidatedDataFile = dbDirectory.resolve("consolidated_data").resolve(fileName);
            Path checksumFilename = consolidatedDataFile.resolveSibling(consolidatedDataFile.getFileName() + ".checksum");

            // By using a lazy stream, we are able to read each item from the file into
            // memory without needing to read the whole file contents into memory at once,
            // thus avoiding a large memory requirement

            // build a hash for this data
            MessageDigest messageDigestSha256 = getMessageDigest("SHA-256");

            try (Stream<String> fileStream = fileUtils.lines(consolidatedDataFile, StandardCharsets.US_ASCII)) {
                fileStream.forEach(line -> {
                    messageDigestSha256.update(line.getBytes(StandardCharsets.US_ASCII));
                    readAndDeserialize(line, fileName);
                });

                // check against the checksum for what we read, if applicable
                if (fileUtils.exists(checksumFilename)) {
                    String checksum = fileUtils.readString(checksumFilename);
                    byte[] hashBytes = messageDigestSha256.digest();
                    String hashString = CryptoUtils.bytesToHex(hashBytes);
                    if (!hashString.equals(checksum)) {
                        String errorMessage = generateChecksumErrorMessage(consolidatedDataFile);
                        throw new DbChecksumException(errorMessage);
                    }
                }

            } catch (Exception e) {
                throw new DbException(e);
            }
        }
    }
    /**
     * Given a file name like 1_to_1000 or 1001_to_2000, extract the
     * beginning index (i.e. 1, or 1001).
     */
    static long parseConsolidatedFileName(String file) {
        int index = file.indexOf("_to_");
        if (index == -1) {
            throw new DbException("Consolidated filename was invalid: " + file);
        }
        return Long.parseLong(file, 0, index, 10);
    }
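    // Illustrative examples for parseConsolidatedFileName (not part of the
    // original source): Long.parseLong(file, 0, index, 10) parses only the
    // decimal digits before the "_to_" marker.
    //
    //   parseConsolidatedFileName("1_to_1000")     -> 1
    //   parseConsolidatedFileName("1001_to_2000")  -> 1001
    //   parseConsolidatedFileName("bogus")         -> throws DbException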
    /**
     * Converts a serialized string to a strongly-typed data structure
     * and adds it to the database.
     */
    void readAndDeserialize(String lineOfData, String fileName) {
        try {
            @SuppressWarnings("unchecked")
            T deserializedData = (T) emptyInstance.deserialize(lineOfData);
            mustBeTrue(deserializedData != null, "deserialization of " + emptyInstance +
                    " resulted in a null value. Was the serialization method implemented properly?");

            // put the data into the in-memory data structure
            data.put(deserializedData.getIndex(), deserializedData);
            addToIndexes(deserializedData);

        } catch (Exception e) {
            throw new DbException("Failed to deserialize " + lineOfData + " with data (\"" + fileName + "\"). Caused by: " + e);
        }
    }
    /**
     * This is what loads the data from disk the
     * first time someone needs it.  Because it is
     * locked, only one thread can enter at
     * a time.  The first one in will load the data,
     * and the second will encounter a branch which skips loading.
     */
    @Override
    public AbstractDb<T> loadData() {
        loadDataLock.lock(); // block threads here if multiple are trying to get in - only one gets in at a time
        try {
            if (!hasLoadedData) {
                loadDataFromDisk();
            }
            hasLoadedData = true;
            return this;
        } catch (Exception ex) {
            throw new DbException("Failed to load data from disk.", ex);
        } finally {
            loadDataLock.unlock();
        }
    }
    /**
     * This method provides read capability for the values of a database.
     * <br>
     * The returned collection is a read-only view over the data, through {@link Collections#unmodifiableCollection(Collection)}
     *
     * <p><em>Example:</em></p>
     * {@snippet :
     * boolean doesUserAlreadyExist(String username) {
     *     return userDb.values().stream().anyMatch(x -> x.getUsername().equals(username));
     * }
     * }
     */
    @Override
    public Collection<T> values() {
        // load data if needed
        if (!hasLoadedData) loadData();

        return Collections.unmodifiableCollection(data.values());
    }
    @Override
    public AbstractDb<T> registerIndex(String indexName, Function<T, String> keyObtainingFunction) {
        if (hasLoadedData) {
            throw new DbException("This method must be run before the database loads data from disk.  Typically, " +
                    "it should be run immediately after the database is created.  See this method's documentation");
        }
        return super.registerIndex(indexName, keyObtainingFunction);
    }
    @Override
    public Collection<T> getIndexedData(String indexName, String key) {
        // load data if needed
        if (!hasLoadedData) loadData();
        return super.getIndexedData(indexName, key);
    }
    /**
     * This is here to match the contract of {@link Db},
     * but all it does is tell the interior file writer
     * to write its data to disk.
     */
    @Override
    public void stop() throws IOException {
        this.databaseAppender.flush();
    }

    /**
     * No real difference from {@link #stop()}, but here
     * to have a similar contract to {@link Db}
     */
    @Override
    public void stop(int count, int sleepTime) throws IOException {
        this.stop();
    }
}
