Commit 472a2632 authored by Paal Kvamme's avatar Paal Kvamme

Merge branch 'kvamme62/doc-update' into 'master'

Documentation updates

See merge request !119
parents 19a30320 12f024c8
Pipeline #127529 passed with stages
in 14 minutes and 50 seconds
......@@ -3,6 +3,7 @@ __pycache__
......@@ -136,7 +136,7 @@ docker-build:
docker rmi -f $(TAG):old $(TAG):test || true
$(RM) cid.txt
docker run --cidfile cid.txt $(TAG) make -j 8 SDAPI_INTERNAL=$(SDAPI_INTERNAL) $(DOCKERTARGET)
docker cp $$(cat cid.txt):/home/me/oz/build/deploy - | gzip -9 > deploy.tgz
docker cp $$(cat cid.txt):/home/me/oz/build/deploy/. - | gzip -9 > openzgy-install.tgz
docker rm $$(cat cid.txt)
$(RM) cid.txt
docker build -t $(TAG):test -f $(DOCKERFILE)-test .
# OpenZGY library
## <span style="color:red">The build instructions are somewhat out of date</span>
## <span style="color:blue">What's in the box?</span>
......@@ -33,6 +31,58 @@ html/index.html. If you have
installed when building there will also be a single-document pdf
versions of those two next to the .tgz files.
## <span style="color:blue">Version numbers for OpenZGY</span>
The version of OpenZGY is of the form {major}.{minor}.{patch}.
The way the numbering is done (as of May 2022) leaves something to be
desired. In theory the minor number should be incremented on any
non-trivial change, and in particular on any change that is not both
forwards and backwards compatible, i.e. where it won't work to
hot-swap an existing so/dll with an older one. The major number should
be incremented on breaking changes; in that case it also won't work to
swap out a so/dll with a newer one.
Major number 0 means the above rules aren't necessarily followed. This
is where we are today, with the version being 0.2.*
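As a minimal sketch (hypothetical helper, not part of the build), the hot-swap rule above amounts to:

```shell
# Hypothetical sketch of the hot-swap rule described above: a replacement
# so/dll is compatible only when the major versions match and the
# replacement's minor version is the same or newer. (Major 0 is exempt.)
can_replace() {  # usage: can_replace NEW_MAJOR NEW_MINOR OLD_MAJOR OLD_MINOR
    [ "$1" -eq "$3" ] && [ "$2" -ge "$4" ]
}
can_replace 1 3 1 2 && echo compatible || echo incompatible   # compatible
can_replace 2 0 1 2 && echo compatible || echo incompatible   # incompatible
```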
The major/minor version number is hard coded in several files. This
allows e.g. the Python wrapper to use a different version than the
C++ code does. This probably isn't a good idea. For the pure Python
version it is appropriate, even desirable, because that deprecated
code really is out of date.
| File name | Version used for major/minor of: |
| ---------------------------- | -------------------------------------- |
| native/src/Makefile | Shared object name, others? |
| azure/templates/versions.yml | NuGet and some universal package ids. |
| wrapper/ | Python wrapper, and OSDU package name.\* |
| python/ | Deprecated pure Python version. |
| native/sdglue/               | Deprecated plug-in for the pure Python code. |
| (universal package)\*\* | Previous version incremented by 1 |
| + possibly others. | |
\*) The GitLab build in OSDU cannot use versions.yml. That file is for
Azure DevOps. And GitLab cannot easily parse any of the files that set
a version because the version number might be computed instead of
stored literally. The current approach is to build the Python wrapper,
then parse the produced PKG-INFO file.
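That parsing step might look roughly like the sketch below. The PKG-INFO content here is a stand-in; the real file is produced by the wrapper build and its exact fields may differ.

```shell
# Create a stand-in PKG-INFO (in a real CI job this file comes from
# building the Python wrapper), then pull out the Version: field.
cat > PKG-INFO <<'EOF'
Metadata-Version: 2.1
Name: openzgy
Version: 0.2.123
EOF
version=$(sed -n 's/^Version: //p' PKG-INFO)
echo "full version: ${version}, major.minor: ${version%.*}"
```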
\*\*) A few of the universal package ids just tell the Azure servers
to increment the patch number by 1 and keep the major/minor of the
last build. The patch number then becomes more readable with fewer
digits. The drawback is that bumping the major or minor version needs
to be done by submitting a package by hand. It might be better to
switch to using the build id in all cases.
The patch number is more reasonable. It uses an id provided by Azure
DevOps, $(Build.BuildId), or by GitLab, ${CI_PIPELINE_IID}. Internally
in the source code it is referred to as ${AZURE_BUILDID}, which is
misleading in the GitLab case. Manual builds use a fallback "dev0"
unless the id is passed to the "make" command line or is set in the
environment. As noted above, there are exceptions for some of the
universal packages.
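The fallback behavior can be sketched like this (illustrative helper only; the real logic lives in the Makefiles):

```shell
# Prefer the id handed over by the CI system; fall back to "dev0" for
# manual builds. The argument simulates $(Build.BuildId) on Azure DevOps
# or ${CI_PIPELINE_IID} on GitLab, either of which may be absent.
pick_buildid() {
    echo "${1:-dev0}"
}
pick_buildid 127529   # -> 127529
pick_buildid          # -> dev0
```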
## <span style="color:blue">Example ZGY files</span>
A ZGY file with real seismic (Volve) is available on
......@@ -165,25 +215,46 @@ All writes and updates have the following *recommendations*:
and (less importantly) inline slowest.
- Prefer writing brick aligned regions.
## <span style="color:blue">Building and testing the core parts</span>
### Building on Linux using docker
## <span style="color:blue">Building and testing</span>
This is the simplest approach.
### Building on Linux
git clone open-zgy
git clone --recursive open-zgy/seismic-store-cpp-lib
make -C sd-env final-${distro}
make -C sd-env testonly-${distro}
cd open-zgy
mkdir -p seismic-dms-sdapi pkg
make -C sd-env sdapi-${distro}
ln -s ../sdapi-pkgs/${distro}_sdapi_linux64.tar.gz pkg/sdapi_linux64.tar.gz
# A local build assumes your local machine uses ${distro}
# Result will be left in build/deploy.
make -j 5 build && make
# Alternatively use Docker also for OpenZGY core.
# Result will be left in openzgy-install.tgz.
make clobber && make LINUXDISTRO=${distro} docker-build && make LINUXDISTRO=${distro} docker-test
This will build SDAPI, build OpenZGY, and run the OpenZGY unit tests.
If you don't need Seismic Store then omit the second clone and the first make.
You might also need HAVE_SD="" as an argument to make.
The same applies if you have obtained the SDAPI devkit from somewhere else.
E.g. you can build it using the instructions in
Build os-seismic-store-cpp-lib according to the instructions in the
[Azure]( or [OSDU](
file in the Seismic Store repository.
Copy the result to pkg/sdapi_linux64.tar.gz so the build can find it.
Caveat: if building by hand the resulting tarball might not match exactly
what OpenZGY expects. You might need to unpack it, reorganize the files,
and tar it up again.
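A repack might look roughly like the sketch below, assuming the include/ plus lib/linux64 layout shown later in this document. All file names here are placeholders.

```shell
# Make a stand-in hand-built tarball (placeholders for real SDAPI files).
mkdir -p src tmp/include tmp/lib/linux64 pkg
touch src/sdapi.h src/libsdapi.so
tar zcf sdapi_hand_built.tar.gz -C src .
# Unpack, rearrange into include/ + lib/linux64, and tar it up again.
mkdir -p unpacked
tar xzf sdapi_hand_built.tar.gz -C unpacked
cp -a unpacked/*.h   tmp/include/
cp -a unpacked/lib*  tmp/lib/linux64/
tar zcf pkg/sdapi_linux64.tar.gz -C tmp .
tar tzf pkg/sdapi_linux64.tar.gz
```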
Currently supported values for the distro parameter are:
- centos7
- bionic
- focal
- buster
- bullseye
- omega
- centos8
......@@ -191,28 +262,17 @@ The resulting docker image can be used to experiment with both the
pure and the wrapped C++ Python versions. The image also contains a
tar file that can be copied out of the image for deploying OpenZGY.
You might want to use ```make -C sd-env PULL="" final-${distro}```
You might want to use ```make -C sd-env PULL="" sdapi-${distro}```
if you don't want to trigger a full rebuild, which can easily take
30 minutes, every time the base image is updated with some minor fix.
There can be such a thing as too much continuous integration.
For an explanation of the more obscure targets you might want to build,
see the [detailed readme file](sd-env/
### Building Linux core locally
The code should build pretty much out of the box as long as Seismic
Store access is not enabled. On Linux there is a top level Makefile.
"make" builds and tests everything, while "make build" only builds.
In both cases both C++ and Python versions are built / packaged.
See [Output folders](#output-folders) for where to find the output.
If you want support for Seismic Store then using the docker build is
recommended. Otherwise you need to build SDAPI first and then copy the
binaries by hand into the OpenZGY source tree.
For an explanation of the more obscure targets you might want to build,
see the [detailed readme file](sd-env/
### Building Windows core locally
### Building on Windows
Prerequisites are Visual Studio 2019 with Platform Toolset v142.
If building with cloud support you might also need NuGet,
......@@ -267,45 +327,11 @@ build\deploy\native\x64\Debug\OpenZGY.Tools.ZgyCopy.exe -i compressed.zgy -o rou
## <span style="color:blue">Building and testing with cloud access enabled</span>
### Source code for SDAPI
There are two git repositories where the SDAPI source code can be downloaded.
The repository where active development is taking place is only accessible inside Schlumberger at
The repository that is publicly accessible but may lag behind the internal version is at
So when the instructions below mention downloading the cloud library you may download SDAPI from
[Azure]( or [OSDU](
### Building Linux cloud locally
This is somewhat tricky. Consider using the
[docker build](#building-on-linux-using-docker) instead.
You will need to download and build the
Seismic Store SDK a.k.a. SDAPI. And then package the binaries so they
can be picked up by OpenZGY. The Linux build expects it to be
available as a tarball in the source tree. So if you want cloud
support this will need some manual tweaking.
See [Building the Seismic Store SDK on Linux](#building-the-seismic-store-sdk-on-linux)
for more information.
In the C++ code, cloud support and compression support will
automatically be included if the respective kits are available in the
source tree, and quietly left out if the respective tar files are missing.
Alternatively you can explicitly add HAVE_SD=yes and HAVE_ZFP=yes to
make sure those get built and tested, with error messages if they are
not found. Or set to an empty string (not "no"!) if you don't want to
build them even if present.
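The three states ("yes", empty string, unset) behave roughly as sketched below (hypothetical helper; the authoritative logic is in the Makefile):

```shell
# "yes" forces the feature on (error if missing), an empty string forces
# it off, and leaving the variable unset means auto-detect.
feature_state() {
    if [ "${1+set}" != set ]; then echo auto
    elif [ -z "$1" ]; then echo off
    else echo on
    fi
}
feature_state yes   # on  (as in: make HAVE_SD=yes)
feature_state ""    # off (as in: make HAVE_SD=)
feature_state       # auto
```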
In the Python code the cloud support gets compiled into a C++ Python
extension which needs to be installed (using pip) next to OpenZGY.
### Building Windows cloud
(This section might be out of date)
By default, integration with Seismic Store is disabled for the Windows
build. To enable it you need to explicitly define HAVE_SD=yes and edit
the project OpenZGY.vcxproj to link with the Seismic Store SDK (a.k.a.
......@@ -316,16 +342,6 @@ On Windows the Visual Studio solution expects the SDAPI headers and
binaries to be available as a NuGet package. You will need to download
and build the Seismic Store SDK and push the binaries to NuGet.
To enable reading and writing seismic store files using the pure
Python implementation you will need to build a small binary Python
extension module that wraps the C++ SDAPI. This produces a wheel that
you can subsequently install. Currently this module only builds on Linux.
The C++ Python extension that wraps the entire OpenZGY/C++
implementation also does not build yet. So for cloud access you are
currently limited to C++.
@rem ... Download
@rem ... Download Visual Studio 2019 and Platform Toolset v142 from Microsoft
......@@ -345,16 +361,6 @@ OPENZGY_SDTESTSINK to two locations in the cloud. The first one a
read-only folder of test data and the second one an empty folder for
tests to write to.
## <span style="color:blue">Feature matrix</span>
|Package |linux|windows|read|write|update|seisstore|zfp compress|old compress|
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|OpenZGY/C++ |y|y|y|y|y|y|y|N/A|
|OpenZGY/C++ Python wrapper |y|-|y|y|y|y|y|N/A|
|OpenZGY/Python |y|y|y|y|N/A?|linux|y|N/A|
|ZGY-Public, ZGY-Cloud |y|y|y|y|y|y|N/A|y|
|Old Python wrapper |y|y|y|y|y|y|N/A|y|
## <span style="color:blue">Output folders</span>
On Linux, running "make build" or "make" at the top level produces the deliverables listed below. On Windows the output folder structure is similar.
......@@ -375,7 +381,7 @@ On Linux, running "make build" or "make" at the top level produces the deliverab
| <span style="color:#FF6622; font-size: 14pt;">python</span> |
| ---------- |
| This is a pure python implementation of the core OpenZGY library. Binary packages are needed for optional compression and seismic store access. |
| This is a DEPRECATED pure python implementation of the core OpenZGY library. Binary packages are needed for optional compression and seismic store access. |
......@@ -384,119 +390,15 @@ On Linux, running "make build" or "make" at the top level produces the deliverab
| <span style="color:#FF6622; font-size: 14pt;">legacy C++<br>legacy ZGY-Public</span>|
| ---------- |
| This is the binary SDK for the closed source ZGY-Public library and ZGY-Cloud plug-in with a Python wrapper. This software is deprecated. |
| This is the binary SDK for the closed source ZGY-Public library and ZGY-Cloud plug-in with a Python wrapper. This software is deprecated. It is only available in the git history. |
## <span style="color:blue">Building the Seismic Store SDK on Linux</span>
The Seismic Store SDK (a.k.a. SDAPI) source code can be downloaded from
[Azure]( or [OSDU](
The simplest approach is to clone this into the top level of the OpenZGY folder.
Important: You need to use **git clone --recursive**.
Also create an empty folder named seismic-service-bin to hold the binaries
built from os-seismic-store-cpp-lib. So your directory structure should be
something like:
```
+-- native/...
+-- python/...
+-- (etc)
+-- os-seismic-store-cpp-lib/...
+-- seismic-service-bin/...
```
Build os-seismic-store-cpp-lib according to the instructions in the file in
[Azure]( or [OSDU](
Or use the scripts in sd-env.
If cloning from the OSDU repo you might still want to clone into
os-seismic-store-cpp-lib instead of seismic-store-cpp-lib as it is
called there, so the instructions in this file match exactly.
The software requires gcc 4.9.2 or later. This means that if building
on CentOS/RedHat 7 the compilers need to be upgraded. This can cause
some annoying ripple effects.
Bundle the required headers and the compiled binaries into a gzipped
tar file, like so:
```
mkdir -p tmp/include tmp/lib/linux64
cp -a -t tmp/include ${SRC}/src/core/*.h ${SRC}/src/lib/accessors/*.h
cp -a -t tmp/lib/linux64 libjsoncpp* libsdapi*
tar zcf sdapi_linux64_osdu.tar.gz -C tmp .
```
Move the tar file to
replacing gccNNN with the version of the compiler being used. So gcc
8.3.1 (currently the default in CentOS/RedHat 8) would be Lin64_gcc831.
Installing the dependencies needed to build SDAPI can be a bit
tedious. Especially if you want to include Azure support. And
especially if you are worried about polluting your regular Linux
installation with a lot of extra packages. There exists a set of
docker files in sd-env/* that might help. Or they might confuse the
issues further. The docker setup allows building code for multiple
Linux distros on the same server. It also tries to do more work on
version handling. This complicates the setup. See sd-env/Makefile for details.
Note: The Lin64_gccNNN versioning scheme is a holdover from some very
old system and doesn't make much sense. But you shouldn't have any
problems with it unless you are building multiple targets from the
same source folder. The problem is that (a) only the compiler, not the
Linux distribution is included in the name and (b) there really is no
need to include the compiler's patch number in the folder name. If two
Linux distributions happen to use the exact same compiler version then
there is a name clash. <!-- TODO-Low: Fix versioning scheme -->
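For illustration, the folder name can be derived from the compiler version like this (hypothetical helper; the real scheme simply drops the dots):

```shell
# Derive the Lin64_gccNNN folder name from a gcc version string,
# e.g. 8.3.1 -> Lin64_gcc831. Note that the distro is not encoded,
# which is exactly the name-clash problem described above.
gcc_folder() {
    echo "Lin64_gcc$(printf '%s' "$1" | tr -d '.')"
}
gcc_folder 8.3.1   # Lin64_gcc831
gcc_folder 5.4.0   # Lin64_gcc540
```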
Note: The top level Makefile accepts a SDAPI_INTERNAL=yes argument. All
this does is to switch the name of the SDAPI tar file from
sdapi_linux64_osdu.tar.gz to sdapi_linux64_local.tar.gz. This is only
useful when building multiple versions of SDAPI.
### Complete example - Linux
Here I am running on Ubuntu focal and I want to build the SDAPI
library inside a docker container and then build OpenZGY itself
outside docker.
```
git clone oz
cd oz
git clone --recursive
cd sd-env
make clobber
make build-focal
make run-focal
cd ..
docker start -ai sd-focal
mkdir -p seismic-service-bin/Lin64_gcc540
docker cp 47dfde70a3d5:/home/me/sdapi_linux64.tar.gz seismic-service-bin/Lin64_gcc540/sdapi_linux64_osdu.tar.gz
make clobber
make build
```
## <span style="color:blue">Building the Seismic Store SDK on Windows</span>
Build os-seismic-store-cpp-lib according to the instructions in the file in
[Azure]( or [OSDU](
TODO-Doc: Add more details. Especially in how to package the result
(e.g. in NuGet) for consumption by the OpenZGY build.
### Complete example - Windows
## <span style="color:blue">Docker and Azure DevOps</span>
......@@ -505,11 +407,7 @@ up automated builds of multiple versions of OpenZGY. You don't need
these and you don't need docker if you just want to build a single
version of the code that will run on your current architecture.
Similarly, the files in sd-env/ might help building the Seismic Store
library but they are entirely optional. See "Integration with Seismic
Store". The file in Seismic Store explains how to build the
software by hand. If you decide to use sd-env/ you may need to
customize those files.
That being said, the docker files might help setting up the
prerequisites correctly. And using docker avoids polluting your
regular Linux installation with extra packages.
## <span style="color:red">This file is very much out of date</span>
## Dockerfiles
See also sd-env/
There are now two sets of docker files to build OpenZGY,
in addition to the option of just using "make" without docker.
Which to choose depends on what you want to achieve.
### To build locally on developer machine
Use the top level Makefile.
#### Drawbacks:
- Difficult to have reproducible builds.
- Difficult to integrate with Seismic Store, see next item.
### To build OpenZGY only
In ```scripts/Dockerfile-{distro}``` there is a setup to build OpenZGY only.
In the same folder there is also ```scripts/Dockerfile-{distro}-test```
showing the minimal environment needed to install OpenZGY. The latter
is also used for running unit tests.
It is also possible to use the dockerfiles in sd-env for OpenZGY-only
builds, see [](../sd-env/ in sd-env
for details. Due to limitations in the current version of docker this
is not recommended. Stages that don't affect the final result may still
get built.
#### Drawbacks:
- It is possible to integrate Seismic Store with this build but this
requires some manual steps. The seismic store SDK must be built and
placed in
If you don't need Seismic Store integration this issue is moot.
- Maintaining two sets of dockerfiles (three if you count the -test
file) is tedious and error prone.
### To build both Seismic Store and OpenZGY
This is intended to be a single-click build of the complete system. In
```sd-env/Dockerfile-{distro}``` there is a setup to build Seismic
Store prerequisites, seismic store itself, and OpenZGY.
#### Drawbacks:
- Both Seismic Store and OpenZGY will be rebuilt each time. This in
itself is not a big issue since building Seismic Store is fairly fast.
- Due to how Docker works, the system will occasionally also rebuild
the Seismic Store prerequisites such as Azure Storage, even when not
needed. This can easily take several times longer than just Seismic
Store and OpenZGY. Irritating but harmless. A partial mitigation is
to tag some of the intermediate builds.
See the pipeline definitions in azure/templates and the
[Makefile](../sd-env/Makefile) in sd-env.
- The pipelines in Azure Devops don't handle pulling from multiple
repositories very well. Internally in SLB this becomes somewhat
awkward. For public builds where all inputs are public it probably
works better.
- The pipelines in Azure Devops have problems detecting when third
party libraries need to be rebuilt. Especially those managed by
vcpkg. They do get built from scratch if the base image (e.g.
centos:centos8) is updated but that means the time for rebuild will
appear to be random. One partial mitigation is to clone vcpkg using
a hard coded hash. That approach comes with its own problems.
Somebody must remember to manually upgrade the vcpkg version every so often.
- The all-in-one dockerfiles are complex and difficult to maintain.
See below for a diagram.
#### Dockerfile layout for Seismic Store plus OpenZGY
![Visualize Dockerfile layout](images/dockerstages-fig1.png)
digraph "Docker stages for OpenZGY builds" {
graph [overlap=false rankdir="TB"];
edge [minlen=1]
node [shape=box]
intro [label="Dockerfile stages\lAll distros mostly look like this,\lbut there are some variations.\l", color="invis"]
"devtoolset" [shape=record label="{Devtoolset|Compiler upgrade\l(if required)\l}"]
"sdapi-base" [shape=record label="{sdapi-base|Install some\limportant packages\l}"]
"sdapi-vcpkg" [shape=record label="{sdapi-vcpkg|Add Azure Sorage\land other vcpkg\ldependencies\lMay take a long time!\l}"]
"sdapi-manual" [shape=record label="{sdapi-manual|Add more dependencies.\lIf none, do vcpkg\lhere instead\l|TAG: \{distro\}:manual}"]
"sdapi-build" [shape=record label="{sdapi-build|Copies source from\lhost, builds SDAPI\l|TAG: \{distro\}:sdbuild}"]
"openzgy-source-vanilla" [shape=record label="{openzgy-source-vanilla|Install prerequisites\lCopies OpenZGY\lsources from host\l}"]
"openzgy-source-cloud" [shape=record label="{openzgy-source-cloud|Add SDAPI binaries\lfrom earlier stage\l}"]
"openzgy-build" [shape=record label="{openzgy-build|Builds OpenZGY\l|TAG: \{distro\}:ozbuild}"]
"openzgy-minimal" [label="openzgy-minimal\nExample of how\lto consume OpenZGY\l"]
"openzgy-testenv" [shape=record label="{openzgy-testenv|Add unit tests to\lthe minimal image\lAlso as a convenience\lexport SDAPI SDK\lfrom here\l|TAG: \{distro\}:oztests\l}"]
"openzgy-test" [shape=record label="{openzgy-test|Run unit tests\lNote, might be\lbetter to do this\lin a 'docker run'\l}"]
"deliverables" [label="Both OpenZGY\ldeliverables and\lSDAPI (for use\lin ZGY-Cloud)\lcan be extracted\lfrom this image.\l", color=invis]
"Linux distro" -> "devtoolset"
"devtoolset" -> "sdapi-base"
"sdapi-base" -> "sdapi-vcpkg"
"sdapi-vcpkg" -> "sdapi-manual"
"sdapi-manual" -> "sdapi-build"
"devtoolset" -> "openzgy-source-vanilla";
"openzgy-source-vanilla" -> "openzgy-source-cloud"
"openzgy-source-vanilla" -> "openzgy-build" [label="disable\lseismic\lstore\l"]
"openzgy-source-cloud" -> "openzgy-build";
"devtoolset" -> "openzgy-minimal";
"openzgy-minimal" -> "openzgy-testenv";
"openzgy-testenv" -> "openzgy-test";
"sdapi-build" -> "openzgy-source-cloud" [style=dashed, constraint=false];
"openzgy-build" -> "openzgy-minimal" [style=dashed, constraint=false];
"sdapi-build" -> "openzgy-testenv" [style=dashed, constraint=false, label="convenience\lonly\l"];
"openzgy-testenv" -> "deliverables" [style=dashed]
......@@ -415,7 +415,7 @@ public:
static void TestExampleV1(Logger_t logger, progress_t progress, tokencb_t tokencb)
if (logger) {
bool ok_logger = logger(2, "Testing from example_1");
/*bool ok_logger = */logger(2, "Testing from example_1");
//std::cerr << "CAPI: Logger delegate returned " << ok_logger << std::endl;
......@@ -424,7 +424,7 @@ public:
if (progress)
bool ok_progress = progress(5, 42);
/*bool ok_progress = */progress(5, 42);
//std::cerr << "CAPI: progress delegate returned " << ok_progress << std::endl;
......@@ -243,7 +243,7 @@ void test_instanciate_handles()
// Hand written
ZgySuccessHandle success;
ZgyErrorHandle error("error", "oops!");
ZgyCleanupHandle cleanup();
ZgyCleanupHandle cleanup;
ZgyStringHandle string("hello!");
// From a single template
ZgyReaderHandle reader(nullptr);
......@@ -34,7 +34,7 @@ namespace InternalZGY {
* typically used when just a few methods are to be intercepted and
* it makes sense to have a default that just passes on the call.
class FileRelayBase : public FileADT
class OPENZGY_TEST_API FileRelayBase : public FileADT
std::shared_ptr<IFileBase> _relay;
......@@ -1292,7 +1292,7 @@ SeismicStoreFile::_set_backoff(ISDGenericDataset* sdgd)
* If dictated by the iocontext, turn on the read-only flag first.
SeismicStoreFile::_open_dataset_ro(const std::shared_ptr<seismicdrive::SDManager>& manager, const std::string& filename, const std::unordered_map<std::string, std::string>& extra, bool sd_ds_log, const SeismicStoreIOContext */*context*/)
SeismicStoreFile::_open_dataset_ro(const std::shared_ptr<seismicdrive::SDManager>& manager, const std::string& filename, const std::unordered_map<std::string, std::string>& extra, bool sd_ds_log, const SeismicStoreIOContext* /*context*/)
if (_logger(5, ""))
_sslogger(5, std::stringstream()
......@@ -1363,7 +1363,7 @@ SeismicStoreFile::_open_dataset_rw(const std::shared_ptr<seismicdrive::SDManager
dataset->open(disp, extra);
_logger(2, "Readonly flag already off for \"" + filename + "\"");
catch (const seismicdrive::SDException& ex) {
catch (const seismicdrive::SDException&) {
// TODO-Low: A specific SDAPI exception "read-only dataset"
// Currently a SDExceptionSDAccessorError is thrown, which is
// more about *where* the error occured and not *what* went wrong.
......@@ -20,6 +20,7 @@
#include <mutex>
#include <stdexcept>
#include <string>
#include "../declspec.h"
namespace InternalZGY {
#if 0
......@@ -31,7 +32,7 @@ class ISDGenericDataset;
* Maintain block level read- and write locks for an open file.
class Locker
struct Entry
......@@ -203,7 +203,7 @@ public:
std::shared_ptr<ISDGenericDataset> sgds,
std::int64_t highwater,
const LoggerFn& logger)
: tracker_(std::make_shared<Locker>(highwater, logger))
: tracker_(std::make_shared<Locker>((int)highwater, logger))
, relay_(sgds)
, logger_(logger)
......@@ -387,7 +387,7 @@ private:
bool check_and_overwrite)
try {
relay->writeBlock(blocknum, data.get(), nbytes, check_and_overwrite);
relay->writeBlock((int)blocknum, data.get(), nbytes, check_and_overwrite);
catch(...) {
......@@ -2277,7 +2277,6 @@ test_hammer()
static void
typedef OpenZGY::IZgyWriter::size3i_t size3i_t;
const std::string filename = cloud_synt2_name();
SeismicStoreIOContext context(*Test_Utils::default_sd_context());
......@@ -407,11 +407,12 @@ public:
// Disabled by default because on failure these tend to hang or
// crash. And there might be failures due to race conditions in
// the tests themselves.
#if 0
register_test("locker.simple", test_simple);
register_test("locker.mixed", test_mixed);