Commit 59ad4229 authored by Paal Kvamme

Merge branch 'kvamme62/update-on-cloud' into 'master'

Add support for updating existing files also on the cloud

See merge request !77
parents 20154566 de722bc8
Pipeline #50201 passed with stages
in 13 minutes and 6 seconds
......@@ -88,6 +88,37 @@ Write requests can be parallelized with respect to copy-in, float to
int8/int16 conversion, decimation algorithm, compression, and upload
to cloud.
## <span style="color:blue">Limitations</span>
Writing and updating of compressed files has the following *restrictions*:
- Only brick aligned regions may be written.
- Each region can only be written once.
- Update is only allowed when the application knows which close will be the
  last, and uses FinalizeAction::Keep on every close except the last one,
  which must use FinalizeAction::BuildFull.
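A minimal sketch of this deferred-finalize pattern, assuming the public C++
writer API (an `IZgyWriter` with a `finalize()` taking a list of
`DecimationType` values and a `FinalizeAction`); the exact signature and
include path may differ:

```cpp
// Sketch only. "writer" is an OpenZGY writer opened for create or update.
// The finalize() signature is assumed, not copied from the API headers.
#include <array>
#include <memory>
#include <vector>
#include "openzgy/api.h"  // assumed include path for the public OpenZGY API

void close_one_session(std::shared_ptr<OpenZGY::IZgyWriter> writer,
                       bool last_session)
{
  const std::vector<OpenZGY::DecimationType> decimation
    {OpenZGY::DecimationType::LowPass,
     OpenZGY::DecimationType::WeightedAverage};
  // Keep: skip low resolution, statistics, and histogram for now.
  // BuildFull: compute everything; used only on the very last close.
  writer->finalize(decimation, /*progress=*/nullptr,
                   last_session ? OpenZGY::FinalizeAction::BuildFull
                                : OpenZGY::FinalizeAction::Keep);
  writer->close();
}
```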
Updating of uncompressed files has the following *recommendations*:
- If possible, defer finalize as with compressed files.
- Or, the default, finalize on every close.
  - This may lead to poor performance since all LOD levels get recomputed.
- Or, explicitly request FinalizeAction::BuildIncremental
  (see the sketch after this list).
  - Statistics and histogram for float data may see numerical inaccuracy.
  - In extreme cases, numerical inaccuracy can make histogram counts negative.
  - Statistics min/max will not shrink if spikes are overwritten.
  - Histogram range will not grow after updates.
- Additionally, updating on the cloud might leak disk space.
  This is due to how cloud storage works.
  Applications that frequently update files must be prepared to garbage
  collect by copying the files when the lost space exceeds a certain
  percentage. OpenZGY does not at this time provide an API to do this,
  but there is a command line tool.
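A sketch of the incremental finalize option mentioned above, under the same
assumptions about the writer API and includes as the previous sketch:

```cpp
// Sketch only, same API assumptions as the previous sketch.
void update_then_finalize_incremental(
    std::shared_ptr<OpenZGY::IZgyWriter> writer,
    const std::vector<OpenZGY::DecimationType>& decimation)
{
  // ... overwrite the bricks that changed ...
  // BuildIncremental: recompute only the derived data affected by the update,
  // subject to the statistics and histogram caveats listed above.
  writer->finalize(decimation, /*progress=*/nullptr,
                   OpenZGY::FinalizeAction::BuildIncremental);
  writer->close();
}
```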
All writes and updates have the following *recommendations*:
- Write data in an order similar to how it is expected to be read.
  For most applications this means writing with vertical changing fastest
  and (less importantly) inline slowest. See the sketch after this list.
- Prefer writing brick aligned regions.
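A sketch of a write loop following both recommendations, brick aligned and
with vertical varying fastest. The `write()` signature, the survey size, and
the 64^3 brick size are assumptions for illustration only:

```cpp
// Sketch only: the write() signature is assumed, not copied from the headers.
void write_whole_survey(std::shared_ptr<OpenZGY::IZgyWriter> writer,
                        const std::array<std::int64_t, 3>& surveysize)
{
  const std::array<std::int64_t, 3> brick{64, 64, 64};
  std::vector<float> data(brick[0] * brick[1] * brick[2]);
  for (std::int64_t ii = 0; ii < surveysize[0]; ii += brick[0])       // inline (slowest)
    for (std::int64_t jj = 0; jj < surveysize[1]; jj += brick[1])     // crossline
      for (std::int64_t kk = 0; kk < surveysize[2]; kk += brick[2]) { // vertical (fastest)
        // ... fill "data" with the samples for this brick aligned region ...
        writer->write({ii, jj, kk}, brick, data.data());
      }
}
```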
## <span style="color:blue">Building and testing the core parts</span>
### Building Linux core
......
......@@ -81,26 +81,13 @@ limitations under the License.
<dd>will replace all three of the above.</dd>
</dl>
<h2>Differences in the OpenZGY reference implementation compared to ZGY-Public:</h2>
<h2>Differences in the OpenZGY implementation compared to ZGY-Public:</h2>
<p>Some of the limitations listed here are already enforced by the existing
ZGY-Public API, so they only affect Petrel which uses the ZGY-Internal
API instead.</p>
<ul>
<li><p>
The brick size must be 64x64x64. Granted, the file format does
allow specifying a different brick size. But this has never been
tested. And even if it did work, many users of the ZGY library
make hard-coded assumptions about this brick size.
<br/>[Enforced by ZGY-Public][Ok for Petrel]
</p></li>
<li><p>
Opening a previously written file for update will not be supported.
<br/>[Enforced by ZGY-Public][This is a problem for Petrel]
</p></li>
<li><p>
Writing alpha tiles will not be supported.
<br/>[Enforced by ZGY-Public][Ok for Petrel]
......@@ -325,14 +312,12 @@ limitations under the License.
<li><p>
Computing statistics, histogram, and low resolution bricks
will be done in a separate pass after all full resolution data
is written. The main drawback with this is that if an
application shows a progress bar for the writes, the bar might
be stationary while low resolution data is being computed.
Another problem is that if a file is opened for update (which
might not be supported anyway) a full scan might be needed
even when just a small part was changed. The existing
ZGY-Public goes to great lengths to instead compute this
information along the way.
is written. A mechanism is in place to allow the application
to display a progress bar. If the actual write needs a progress
bar (managed by the application) then the finalize step (which
generates the low resolution bricks) will probably need one as
well. The writes and the finalize usually take the same amount
of time.
</p></li>
<li><p>
......@@ -383,16 +368,6 @@ limitations under the License.
bricks are contiguous. It also simplifies the code quite a lot.
</p></li>
<li><p>
Writing low resolution data interleaved (as the existing code
does) would have allowed the application to show a more
accurate progress bar that won't stop for a while when the
file is almost fully written. It is questionable whether this
feature is worth the extra complexity. Also, OpenZGY partly
mitigates the issue by allowing the caller to set a
progress callback for the low resolution generation.
</p></li>
<li><p>
Writing low resolution data interleaved might also be somewhat
more efficient, especially if the application writes 128<sup>3</sup>
......@@ -450,12 +425,6 @@ limitations under the License.
</ul>
<h2>Challenges for writing files with lossy compression.</h2>
<ul>
<li><p>
The old ZGY accessor can compress an uncompressed file that already
exists on disk. It cannot write directly to a compressed file.
OpenZGY might need to have the same limitation. At least initially.
</p></li>
<li><p>
Generating low resolution data is tricky because this involves
reading already written tiles, decompressing, computing a
......@@ -554,14 +523,13 @@ limitations under the License.
<h2>Details of the seismic store access:</h2>
<ul>
<li><p>
This can be done a lot simpler than today, at the cost of not
supporting the seismic server very well. The seismic server
depends heavily on how caching works in the old ZGY-Public and
ZGY-Cloud.
There will be no caching in the new plug-in.
</p></li>
<li><p>
There will be no caching in the new plug-in.
The experimental &quot;Seismic Server&quot; app is not
supported because it depends heavily on how caching works in
the old ZGY-Public and ZGY-Cloud.
</p></li>
<li><p>
......
......@@ -187,8 +187,10 @@ ifneq ($(strip $(ZFP_LIBRARY)),)
/bin/cp -a -t $(BIN_DIR) $(strip $(ZFP_LIBRARY))*
endif
# SD_LIBRARY is only needed because of test/sdutils.cpp doing direct access
# to SDAPI and bypassing OpenZGY entirely.
$(BIN_DIR)/test_all: $(TEST_OBJ) $(LIBDSO) $(SD_SENTINEL) $(ZFP_SENTINEL)
$(CXX) -o $@ $(CXXFLAGS) $(ORIGIN) $(TEST_OBJ) $(LIBDSO)
$(CXX) -o $@ $(CXXFLAGS) $(ORIGIN) $(TEST_OBJ) $(LIBDSO) $(SD_LIBRARY)
#$(BIN_DIR)/zgycopyc: $(OBJ_DIR)/tools/zgycopyc.o $(OBJ_DIR)/test/mock.o $(LIBDSO)
# $(CXX) -o $@ $(CXXFLAGS) $(ORIGIN) $^ -fopenmp
......
......@@ -362,10 +362,12 @@ public:
case BrickStatus::Constant: result._alpha_constant_count += 1; break;
case BrickStatus::Normal:
result._alpha_normal_count += 1;
result._data_start = std::max(result._data_start, info.offset_in_file);
break;
case BrickStatus::Compressed:
result._alpha_compressed_count += 1;
result._alpha_compressed_size += info.size_in_file;
result._data_start = std::max(result._data_start, info.offset_in_file);
break;
}
}
......@@ -376,10 +378,13 @@ public:
case BrickStatus::Missing: result._brick_missing_count += 1; break;
case BrickStatus::Constant: result._brick_constant_count += 1; break;
case BrickStatus::Normal:
result._brick_normal_count += 1; break;
result._brick_normal_count += 1;
result._data_start = std::max(result._data_start, info.offset_in_file);
break;
case BrickStatus::Compressed:
result._brick_compressed_count += 1;
result._brick_compressed_size += info.size_in_file;
result._data_start = std::max(result._data_start, info.offset_in_file);
break;
}
}
......@@ -712,6 +717,7 @@ public:
(new FileStatistics(*filestats_nocache()));
// The base class has no _fd member so I need to set the size here.
result->_file_size = _fd->xx_eof();
result->_segment_sizes = _fd->xx_segments(false);
{
// Too bad there is no proper atomic_shared_ptr yet.
std::lock_guard<std::mutex> lk(_filestats_mutex);
......@@ -876,6 +882,36 @@ public:
const InternalZGY::IHistHeaderAccess& hh = this->_meta->hh();
if (hh.samplecount() == 0 || hh.minvalue() > hh.maxvalue())
this->_dirty = true;
// Consistency checks: Only files uploaded by OpenZGY can be updated.
// See also the consistency checks in the SeismicStoreFileDelayedWrite
// constructor regarding the segment size.
if (_fd->xx_iscloud()) {
const std::int64_t headersize = this->_meta_rw->flushMeta(nullptr);
const std::shared_ptr<const FileStatistics> fs = filestats();
const std::vector<std::int64_t> segsizes = fs->segmentSizes();
if (segsizes.size() != 0) {
if (fs->dataStart() >= 0 && fs->dataStart() < segsizes[0]) {
// One or more bricks or tiles were found in the first segment.
// Most likely the file was uploaded by sdutil in a single chunk,
// or it may have been written by the old ZGY-Cloud.
// Distinguishing those two is not always possible because
// ZGY-Cloud can also put everything in the same segment.
throw Errors::ZgyUpdateRules
("Only files uploaded by OpenZGY can be updated.");
}
if (headersize != segsizes[0]) {
// Even when there is no data in the header area, the header
// segment must be exactly the expected size. Most likely
// this is a file containing no data and uploaded by sdutil
// or the old ZGY-Cloud. If there is more than one segment
// or eof is > headersize then there is something weird going on.
// Probably not useful to report on that case though.
throw Errors::ZgyUpdateRules
("Only files uploaded by OpenZGY can be updated. Bad Header size.");
}
}
}
}
/**
......@@ -1516,6 +1552,11 @@ public:
*/
void close()
{
// TODO-@@@: If the file has never been written to and the error
// flag is set then discard everything and do NOT write any data.
// This can in some cases avoid corrupting a file that was opened
// for write and then has an error thrown.
// The same logic may be needed in _close_internal.
if (_fd) {
finalize(std::vector<DecimationType>
{DecimationType::LowPass, DecimationType::WeightedAverage},
......@@ -1580,6 +1621,7 @@ public:
(new FileStatistics(*filestats_nocache()));
// The base class has no _fd member so I need to set the size here.
result->_file_size = _fd->xx_eof();
result->_segment_sizes = _fd->xx_segments(false);
return result;
}
......
......@@ -23,6 +23,7 @@
#include <memory>
#include <ostream>
#include <iostream>
#include <sstream>
#include <string>
#include <mutex>
......@@ -474,6 +475,8 @@ private:
std::int64_t _file_version;
std::int64_t _file_size;
std::int64_t _header_size;
std::int64_t _data_start;
std::vector<std::int64_t> _segment_sizes;
//std::int64_t _padding_size;
//std::int64_t _wasted_size;
std::int64_t _alpha_normal_count;
......@@ -499,6 +502,8 @@ FileStatistics()
: _file_version(0)
, _file_size(0)
, _header_size(0)
, _data_start(-1)
, _segment_sizes()
//, _padding_size(0)
//, _wasted_size(0)
, _alpha_normal_count(0)
......@@ -526,6 +531,10 @@ FileStatistics()
std::int64_t fileSize() const { return _file_size; }
/// Size of all headers.
std::int64_t headerSize() const { return _header_size; }
/// Lowest address of any brick or tile, or -1 if there are none.
std::int64_t dataStart() const { return _data_start; }
/// Used for cloud storage only.
const std::vector<std::int64_t>& segmentSizes() const {return _segment_sizes;}
// Wasted due to first brick alignment.
//std::int64_t paddingSize() const { return _padding_size; }
// Wasted due to other reasons.
......@@ -596,6 +605,9 @@ FileStatistics()
* For debugging. Output most of the information to the supplied ostream.
*/
void dump(std::ostream& out, const std::string& prefix = "") const {
std::stringstream segs;
for (std::int64_t it : _segment_sizes)
segs << " " << it;
out << prefix << "ZGY version " << _file_version
<< " file compressed to "
<< int(100.0 * _compression_factor) << "% of original\n"
......@@ -603,6 +615,8 @@ FileStatistics()
<< _file_size << " bytes of which "
<< _header_size << " are in headers and "
<< _file_size - _used_size << " wasted\n"
<< prefix << "Segments:" << segs.str() << ", "
<< "Data area starts at: " << _data_start << "\n"
<< prefix << "Alpha: "
<< _alpha_missing_count << " missing, "
<< _alpha_constant_count << " constant, "
......
......@@ -222,6 +222,28 @@ public:
*/
virtual std::int64_t xx_eof() const = 0;
/**
* Return the size of each segment of the file, if the backend
* has a concept of multiple segments in one file. Otherwise
* just return xx_eof() in the first and only slot.
*
* If complete=false return at most 3 numbers: The first, second,
* and last segment size. Currently all segments except the
* first and last are required to have the same size, so by
* combining the results of xx_segments() and xx_eof() it is
* possible to compute the rest of the information.
*
* If the file is open for write then the last number will be
* the in-memory buffer. That can be zero and it can also be
* larger than the preferred segment size.
*
* Currently the only reason this is needed (apart from debug)
* is a couple of consistency checks when opening a cloud file
* for update. This is unfortunate because I really wanted to
* keep this api as small as possible.
*/
virtual std::vector<std::int64_t> xx_segments(bool complete) const = 0;
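// Illustrative only, not part of the API contract: the abbreviated result
// of xx_segments(false) can be expanded to the full list by combining it
// with xx_eof(), since all segments except the first and last are required
// to have the same size.
//
//   std::vector<std::int64_t> abbrev = file->xx_segments(false);
//   std::vector<std::int64_t> all(abbrev);
//   if (abbrev.size() == 3) {
//     const std::int64_t middle = file->xx_eof() - abbrev[0] - abbrev[2];
//     all.assign(1, abbrev[0]);
//     for (std::int64_t done = 0; done < middle; done += abbrev[1])
//       all.push_back(abbrev[1]);
//     all.push_back(abbrev[2]);
//   }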
/**
* Return true if the file is on the cloud.
* This might trigger some optimizations.
......
......@@ -79,6 +79,7 @@ public:
static std::shared_ptr<FileADT> xx_make_instance(const std::string& filename, OpenMode mode, const OpenZGY::IOContext *iocontext);
virtual void xx_close() override;
virtual std::int64_t xx_eof() const override;
virtual std::vector<std::int64_t> xx_segments(bool complete) const override;
virtual bool xx_iscloud() const override;
virtual void xx_read(void *data, std::int64_t offset, std::int64_t size, UsageHint usagehint=UsageHint::Unknown) override;
virtual void xx_readv(const ReadList& requests, bool parallel_ok=false, bool immutable_ok=false, bool transient_ok=false, UsageHint usagehint=UsageHint::Unknown) override;
......@@ -210,6 +211,15 @@ LocalFileLinux::xx_eof() const
return this->_eof;
}
/**
* \details: Thread safety: Yes, called method is thread safe.
*/
std::vector<std::int64_t>
LocalFileLinux::xx_segments(bool /*complete*/) const
{
return std::vector<std::int64_t>{this->xx_eof()};
}
/**
* \details: Thread safety: Yes.
*/
......
......@@ -114,6 +114,12 @@ FileWithPerformanceLogger::xx_eof() const
return _relay->xx_eof();
}
std::vector<std::int64_t>
FileWithPerformanceLogger::xx_segments(bool complete) const
{
return _relay->xx_segments(complete);
}
bool
FileWithPerformanceLogger::xx_iscloud() const
{
......
......@@ -69,6 +69,7 @@ public:
virtual void xx_write(const void* data, std::int64_t offset, std::int64_t size, UsageHint usagehint) override;
virtual void xx_close() override;
virtual std::int64_t xx_eof() const override;
virtual std::vector<std::int64_t> xx_segments(bool complete) const override;
virtual bool xx_iscloud() const override;
public:
void add(const Timer& timer, std::int64_t blocksize);
......
......@@ -63,6 +63,10 @@ public:
virtual std::int64_t xx_eof() const override {
return _relay->xx_eof();
}
virtual std::vector<std::int64_t> xx_segments(bool complete) const override
{
return _relay->xx_segments(complete);
}
virtual bool xx_iscloud() const override {
return _relay->xx_iscloud();
}
......
......@@ -160,6 +160,7 @@ public:
virtual void xx_write(const void* data, std::int64_t offset, std::int64_t size, UsageHint usagehint=UsageHint::Unknown) override;
virtual void xx_close();
virtual std::int64_t xx_eof() const;
virtual std::vector<std::int64_t> xx_segments(bool complete) const override;
virtual bool xx_iscloud() const override;
// Functions from FileUtilsSeismicStore
virtual void deleteFile(const std::string& filename, bool missing_ok) const;
......@@ -168,15 +169,14 @@ private:
void do_write_one(const void* const data, const std::int64_t blocknum, const std::int64_t size, const bool overwrite);
void do_write_many(const void* const data, const std::int64_t blocknum, const std::int64_t size, const std::int64_t blobsize, const bool overwrite);
public:
// The raw SDGenericDataset is needed by SeismicStoreFileDelayedWrite
// when opening a file for update.
std::shared_ptr<SDGenericDatasetWrapper> datasetwrapper() const {return _dataset;}
// TODO-Low per-instance logging. This is tedious to implement
// because many of the helper classes will need to hold a logger
// instance as well, or need the logger passed in each call.
static bool _logger(int priority, const std::string& message = std::string());
static bool _logger(int priority, const std::ios& ss);
public:
// For use by debug_trace, allow SeismicStoreFileDelayedWrite() access
// to details of the file.
std::shared_ptr<const DatasetInformation> debug_info();
private:
/**
* This class is used by _split_by_segment to describe a request as seen by
......@@ -293,9 +293,11 @@ public:
virtual void xx_write(const void* data, std::int64_t offset, std::int64_t size, UsageHint usagehint=UsageHint::Unknown) override;
virtual void xx_close() override;
virtual std::int64_t xx_eof() const override;
virtual std::vector<std::int64_t> xx_segments(bool complete) const override;
virtual bool xx_iscloud() const override;
private:
void _reopen_last_segment();
void _flush_part(std::int64_t this_segsize);
void _flush(bool final_call);
};
......@@ -358,7 +360,7 @@ public:
public:
std::int64_t totalSize() const;
std::vector<std::int64_t> allSizes(std::int64_t open_size) const;
std::vector<std::int64_t> allSizes(bool complete) const;
void getLocalOffset(std::int64_t offset, std::int64_t size, std::int64_t *blocknum, std::int64_t *local_offset, std::int64_t *local_size) const;
void checkOnWrite(std::int64_t blocknum, std::int64_t blocksize) const;
void updateOnWrite(std::int64_t blocknum, std::int64_t blocksize);
......@@ -479,21 +481,27 @@ DatasetInformation::totalSize() const
}
/**
* Return the total file size broken down into segments, including
* the "open" segment whose size needs to be provided explicitly.
* This function is currently only used for debugging.
* Return the total file size broken down into segments, not including
* the "open" segment which DatasetInformation doesn't know about.
*/
std::vector<std::int64_t>
DatasetInformation::allSizes(std::int64_t open_size) const
DatasetInformation::allSizes(bool complete) const
{
std::vector<std::int64_t> result;
for (int ii = 0; ii < block_count_; ++ii)
result.push_back(ii == 0 ? block0_size_ :
ii == block_count_-1 ? last_block_size_ :
block1_size_);
if (open_size >= 0)
result.push_back(open_size);
return result;
switch (block_count_) {
case 0: return std::vector<std::int64_t>{};
case 1: return std::vector<std::int64_t>{block0_size_};
case 2: return std::vector<std::int64_t>{block0_size_, last_block_size_};
default: {
std::vector<std::int64_t> result;
result.push_back(block0_size_);
result.push_back(block1_size_);
if (complete)
for (int ii = 0; ii < block_count_ - 3; ++ii)
result.push_back(block1_size_);
result.push_back(last_block_size_);
return result;
}
}
}
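// Example with hypothetical sizes: for a dataset with five closed SDAPI
// blocks of {1, 64, 64, 64, 10} MB, allSizes(false) returns {1, 64, 10} MB
// while allSizes(true) returns all five entries.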
/**
......@@ -552,7 +560,7 @@ DatasetInformation::checkOnWrite(std::int64_t blocknum, std::int64_t blocksize)
* Update cached size information after data is successfully written.
* checkOnWrite() must have been called already.
*
* Thread safety: NOT thred safe.
* Thread safety: NOT thread safe.
* Do not invoke SDGenericDatasetWrapper::info()->updateOnWrite() directly.
* Call the thread safe SDGenericDatasetWrapper::updateOnWrite() instead.
* That one will make sure the smart pointer being updated is unique.
......@@ -1134,7 +1142,7 @@ SeismicStoreFile::xx_read(void *data, std::int64_t offset, std::int64_t size, Us
ReadRequest request(offset, size, nullptr);
RawList split = this->_split_by_segment(ReadList{request});
if (this->_config->_debug_trace)
this->_config->_debug_trace("read", /*need=*/size, /*want=*/size,/*parts*/ split.size(), this->_dataset->info()->allSizes(-1));
this->_config->_debug_trace("read", /*need=*/size, /*want=*/size,/*parts*/ split.size(), this->xx_segments(true));
for (const RawRequest& it : split) {
// TODO-Low: port _cached_read ?
SimpleTimerEx tt(*this->_rtimer);
......@@ -1222,7 +1230,7 @@ SeismicStoreFile::xx_readv(const ReadList& requests, bool parallel_ok, bool immu
std::shared_ptr<char> data(new char[realsize], std::default_delete<char[]>());
if (this->_config->_debug_trace)
this->_config->_debug_trace("readv", /*need=*/asked, /*want=*/realsize,/*parts*/ work.size(), this->_dataset->info()->allSizes(-1));
this->_config->_debug_trace("readv", /*need=*/asked, /*want=*/realsize,/*parts*/ work.size(), this->xx_segments(true));
// Do the actual reading of the consolidated chunks, possibly using
// multiple threads.
......@@ -1316,18 +1324,21 @@ SeismicStoreFile::xx_write(const void* data, std::int64_t offset, std::int64_t s
overwrite = true;
this->_dataset->info()->getLocalOffset
(offset, size, &blocknum, &local_offset, &local_size);
// Normally we get here to overwrite blob 0, and that is ok.
// TODO-Low: This code needs more work if/when allowing update.
// This test will fail in the parallel upload case
// because local_offset and local_size refers to SDAPI blocks and
// not the larger segments that we are asked to write. local_size
// will usually not be larger than one SDAPI block and will thus
// fail the size check. It is not an immediate concern because
// block 0 should work, and updating other blocks is only needed
// when re-opening a closed segment. Which is not yet implemented.
// Maybe check offset+N*(segsize/segsplit) (last SDAPI block).
// I am unsure whether it is only the test that is wrong or whether
// this case needs more special handling. Worry about that later.
// Normally we only get here to overwrite blob 0, and that is ok.
// Writing block 0 is not multi-threaded and does not resize.
// If opening an existing file for update it depends on how that
// is handled elsewhere. Hopefully we still won't get here.
// If that happens then there are several caveats:
// - May need to allow resizing the last brick, which in turn
// invalidates some assumptions about immutable information.
// - The test below will fail in the parallel upload case
// because local_offset and local_size refer to SDAPI blocks and
// not the larger segments that we are asked to write. local_size
// will usually not be larger than one SDAPI block and will thus
// fail the size check.
// - Maybe check offset+N*(segsize/segsplit) (last SDAPI block)?
// - I am unsure whether it is only the test that is wrong or whether
// this case needs more special handling.
if (local_offset != 0 || local_size != size)
throw OpenZGY::Errors::ZgyInternalError("Cannot write resized segment.");
}
......@@ -1358,7 +1369,7 @@ SeismicStoreFile::xx_write(const void* data, std::int64_t offset, std::int64_t s
if (this->_config->_debug_trace)
this->_config->_debug_trace
(offset == current_eof ? "append" : "write",
size, size, 1, this->_dataset->info()->allSizes(-1));
size, size, 1, this->xx_segments(true));
}
/**
......@@ -1538,6 +1549,22 @@ SeismicStoreFile::xx_eof() const
return _dataset->info()->totalSize();
}
/**
* \brief Return the size of each segment of the file.
* \details: Thread safety: Not if writes may be in progress. Could be fixed.
*
* If complete=false return at most 3 numbers: The first, second,
* and last segment size. Currently all segments except the
* first and last are required to have the same size, so by
* combining the results of xx_segments() and xx_eof() it is
* possible to compute the rest of the information.
*/
std::vector<std::int64_t>
SeismicStoreFile::xx_segments(bool complete) const
{
return this->_dataset->info()->allSizes(complete);
}
/**
* \details: Thread safety: Yes.
*/
......@@ -1614,15 +1641,6 @@ SeismicStoreFile::altUrl(const std::string& filename) const
return url;
}
/**
* Thread safety: Yes.
*/
std::shared_ptr<const DatasetInformation>
SeismicStoreFile::debug_info()
{
return this->_dataset->info();
}
/**
* Given one or more (offset, size, ...) tuples, convert these
* to (segment_number, offset_in_seg, size_in_seg, outpos).
......@@ -1683,12 +1701,6 @@ SeismicStoreFile::_cached_read(/*TODO-Low: seg, offset, view*/)
// FileADT -> SeismicStoreFile -> SeismicStoreFileDelayedWrite /////////
/////////////////////////////////////////////////////////////////////////////
OpenMode _mode;
std::shared_ptr<OpenZGY::IOContext> _config;
std::shared_ptr<SeismicStoreFile> _relay;
std::vector<char> _open_segment;
UsageHint _usage_hint;
SeismicStoreFileDelayedWrite::SeismicStoreFileDelayedWrite(const std::string& filename, OpenMode mode, const IOContext *iocontext)
: FileADT()
, _mode(mode)
......@@ -1706,6 +1718,9 @@ SeismicStoreFileDelayedWrite::SeismicStoreFileDelayedWrite(const std::string& fi
if (!context)
throw OpenZGY::Errors::ZgyUserError("Opening a file from seismic store requires a SeismicStoreIOContext");
this->_config.reset(new OpenZGY::SeismicStoreIOContext(*context));
if (mode == OpenMode::ReadWrite)
this->_reopen_last_segment();
}
SeismicStoreFileDelayedWrite::~SeismicStoreFileDelayedWrite()
......@@ -1882,7 +1897,7 @@ SeismicStoreFileDelayedWrite::xx_write(const void* data, std::int64_t offset, st
if (offset == 0 || this->_config->_segsize <= 0 || offset < committed) {
this->_relay->xx_write(data, offset, size, usagehint);
if (this->_config->_debug_trace)
this->_config->_debug_trace("flush", size, size, 1, this->_relay->debug_info()->allSizes(this->_open_segment.size()));
this->_config->_debug_trace("flush", size, size, 1, this->xx_segments(true));
return;
}
......@@ -1927,7 +1942,7 @@ SeismicStoreFileDelayedWrite::xx_write(const void* data, std::int64_t offset, st
this->_usage_hint = UsageHint::Mixed;
if (this->_config->_debug_trace)