Commit de722bc8 authored by Paal Kvamme

Documentation updates.

parent 623fd567
Pipeline #50196 passed with stages in 8 minutes and 54 seconds
......@@ -88,6 +88,37 @@ Write requests can be parallelized with respect to copy-in, float to
int8/int16 conversion, decimation algorithm, compression, and upload
to cloud.
## <span style="color:blue">Limitations</span>
Writing and updating compressed files has the following *restrictions*:

- Only write brick aligned regions.
- Each region can only be written once.
- Update is only allowed when the application knows which close is the
  last, and uses FinalizeAction::Keep on each close except the last
  one, which uses FinalizeAction::BuildFull.
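
The brick-alignment restriction above can be checked up front, before issuing a write. The helper below is an illustrative sketch, not part of the OpenZGY API; it assumes the fixed 64x64x64 ZGY brick size and assumes (as an interpretation of "brick aligned") that a region ending exactly at the survey edge counts as aligned even when its size is not a multiple of 64:

```python
# Hypothetical helper (not an OpenZGY function): verify that a write
# region is aligned to the 64^3 brick grid.

BRICK = 64  # ZGY brick size is fixed at 64x64x64.

def is_brick_aligned(start, size, survey_size):
    """Return True if the region starts on a brick boundary on every axis
    and either ends on a brick boundary or extends exactly to the survey
    edge (assumed to allow a partial last brick)."""
    for s0, n, total in zip(start, size, survey_size):
        if s0 % BRICK != 0:
            return False                      # start is not brick aligned
        end = s0 + n
        if end % BRICK != 0 and end != total:
            return False                      # end is neither aligned nor at the edge
    return True
```

For example, a region starting at vertical offset 32 fails the check, while a region reaching the survey edge passes even if its vertical size is only 100 samples.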
Updating uncompressed files has the following *recommendations*:

- If possible, defer finalize as with compressed files.
- Or, the default: finalize on every close.
  - This may lead to poor performance since all LOD levels get recomputed.
- Or, explicitly request FinalizeAction::BuildIncremental.
  - Statistics and histogram for float data may see numerical inaccuracy.
    - In extreme cases, numerical inaccuracy can make histogram counts negative.
  - Statistics min/max will not shrink if spikes are overwritten.
  - Histogram range will not grow after updates.
Additionally, updating files on the cloud might leak disk space; this
is due to how cloud storage works. Applications that frequently update
files must be prepared to garbage collect by copying the files when
the lost space exceeds a certain percentage. OpenZGY does not at this
time provide an API to do this, but there is a command line tool.
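
The multi-session update rule above (FinalizeAction::Keep on every close except the last, FinalizeAction::BuildFull on the final one) can be modeled as a small planning helper. This is a sketch, not the OpenZGY API; the enum simply mirrors the FinalizeAction names used in this text:

```python
# Pseudomodel of the update rule for compressed files: defer finalize
# (Keep) on every intermediate close, run the full finalize (BuildFull)
# only on the last one. Names mirror the text; not the real API.
from enum import Enum

class FinalizeAction(Enum):
    KEEP = "Keep"                          # defer: leave lowres/stats stale
    BUILD_INCREMENTAL = "BuildIncremental" # option for uncompressed files
    BUILD_FULL = "BuildFull"               # full recompute of lowres/stats

def plan_finalize_actions(num_closes):
    """Return the action to pass on each of num_closes closes."""
    if num_closes < 1:
        raise ValueError("need at least one close")
    return ([FinalizeAction.KEEP] * (num_closes - 1)
            + [FinalizeAction.BUILD_FULL])
```

A single-session write degenerates to one BuildFull close, which matches the default behavior.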
All writes and updates have the following *recommendations*:
- Write data in an order similar to how it is expected to be read.
For most applications this means writing with vertical changing fastest
and (less importantly) inline slowest.
- Prefer writing brick aligned regions.
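
The ordering recommendation above can be illustrated by enumerating brick-aligned region origins with the vertical axis varying fastest and inline slowest. This is a hypothetical sketch assuming 64x64x64 bricks and an (inline, crossline, vertical) axis order; `write_order` is not an OpenZGY function:

```python
# Illustration of the recommended write order: vertical (z) varies
# fastest, inline (il) slowest, matching the expected read pattern.

BRICK = 64  # ZGY brick size

def write_order(survey_size):
    """Yield (il, xl, z) origins of 64^3-aligned regions in the
    recommended order: inline slowest, vertical fastest."""
    ni, nx, nz = survey_size
    for il in range(0, ni, BRICK):          # slowest axis
        for xl in range(0, nx, BRICK):
            for z in range(0, nz, BRICK):   # fastest axis
                yield (il, xl, z)
```

For a 128x64x128 survey this yields (0,0,0), (0,0,64), (64,0,0), (64,0,64): both vertical bricks of a column are written before moving to the next inline.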
## <span style="color:blue">Building and testing the core parts</span>
### Building Linux core
......
......@@ -81,26 +81,13 @@ limitations under the License.
<dd>will replace all three of the above.</dd>
</dl>
<h2>Differences in the OpenZGY implementation compared to ZGY-Public:</h2>
<p>Some of the limitations listed here are already enforced by the existing
ZGY-Public API, so they only affect Petrel, which uses the ZGY-Internal
API instead.</p>
<ul>
<li><p>
The brick size must be 64x64x64. Granted, the file format does
allow specifying a different brick size, but this has never been
tested. And even if it did work, many users of the ZGY library
make hard coded assumptions about this brick size.
<br/>[Enforced by ZGY-Public][Ok for Petrel]
</p></li>
<li><p>
Opening a previously written file for update will not be supported.
<br/>[Enforced by ZGY-Public][This is a problem for Petrel]
</p></li>
<li><p>
Writing alpha tiles will not be supported.
<br/>[Enforced by ZGY-Public][Ok for Petrel]
......@@ -325,14 +312,12 @@ limitations under the License.
<li><p>
Computing statistics, histogram, and low resolution bricks
will be done in a separate pass after all full resolution data
is written. A mechanism is in place to allow the application
to display a progress bar. If the actual writes need a progress
bar (managed by the application) then the finalize step (which
generates the low resolution bricks) will probably need one as
well, since the writes and the finalize usually take about the
same amount of time.
</p></li>
<li><p>
......@@ -383,16 +368,6 @@ limitations under the License.
bricks are contiguous. It also simplifies the code quite a lot.
</p></li>
<li><p>
Writing low resolution data interleaved (as the existing code
does) would have allowed the application to show a more
accurate progress bar that won't stop for a while when the
file is almost fully written. It is questionable whether this
feature is worth the extra complexity. Also, OpenZGY partly
mitigates the issue by allowing the caller to set a
progress callback for the low resolution generation.
</p></li>
<li><p>
Writing low resolution data interleaved might also be somewhat
more efficient, especially if the application writes 128<sup>3</sup>
......@@ -450,12 +425,6 @@ limitations under the License.
</ul>
<h2>Challenges for writing files with lossy compression.</h2>
<ul>
<li><p>
The old ZGY accessor can compress an uncompressed file that already
exists on disk. It cannot write directly to a compressed file.
OpenZGY might need to have the same limitation, at least initially.
</p></li>
<li><p>
Generating low resolution data is tricky because this involves
reading already written tiles, decompressing, computing a
......@@ -554,14 +523,13 @@ limitations under the License.
<h2>Details of the seismic store access:</h2>
<ul>
<li><p>
There will be no caching in the new plug-in.
The experimental &quot;Seismic Server&quot; app is not
supported because it depends heavily on how caching works in
the old ZGY-Public and ZGY-Cloud.
</p></li>
<li><p>
......