# Open VDS issues
Source: https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues

## Issue #1: vanilla checkout build fails on linux
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/1 | Jackie Li | 2019-12-20

Fails with:

```
open-vds/src/IO/IOManagerAzure.cpp:20:10: fatal error: was/common.h: No such file or directory
 #include <was/common.h>
          ^~~~~~~~~~~~~~
compilation terminated.
src/CMakeFiles/openvds_objects.dir/build.make:127: recipe for target 'src/CMakeFiles/openvds_objects.dir/IO/IOManagerAzure.cpp.o' failed
```

## Issue #2: Build Fixes for Fedora 31
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/2 | Andrew K | 2019-12-20

Adding this as an issue since I don't appear to have access to submit a merge request.
I ran into a few problems when trying to build OpenVDS on Fedora 31. I've fixed the issues locally and have submitted my patch file for the master branch to get rid of the problems. These should still be OK for other distributions as well.
[openvds_build.diff](/uploads/46da01da49937d8912060330583602de/openvds_build.diff)
- Set CMAKE_INSTALL_LIBDIR to lib${LIBSUFFIX} for the Azure Storage SDK
  * The Azure Storage SDK does not use the GNUInstallDirs CMake include to
    set the CMAKE_INSTALL_<dir> variables, resulting in invalid LIBDIR
    paths on distributions that use lib64. However, it provides a way to
    override this, which is what the change uses to set the correct path.
- Add WERROR=OFF for cpprestsdk
  * GCC 9 warns about deprecated-copy, which causes the build to fail
    when werror is enabled.
- Remove const from dimensionDistribution
  * uniform_real_distribution<float>'s call operator is not const, which
    causes an error when building with GCC 9.

## Issue #3: VolumeDataAccessor Clone() method is not implemented
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/3 | Morten Ofstad | 2020-01-03 | Milestone: Version 1.0 | Assignee: Morten Ofstad

This method is used to clone a VolumeDataAccessor, e.g. when used in a firstprivate() clause of an OpenMP parallel construct. It implements ref-counted shared access to the underlying VolumeDataPageAccessor while maintaining the pinned page state separately for each thread. A test needs to be added to ensure that this works as intended.

## Issue #4: Support for 2D seismic
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/4 | Morten Ofstad | 2022-01-12 | Milestone: Version 2.1

The SEG-Y import needs to be augmented with code to create the required metadata for 2D seismic (in the TraceCoordinates category, see KnownMetadata.h) and the SEG-Y export needs to be validated (it should work as-is).
An additional helper class is needed to do correct lookups in the metadata (similar to the IJKGridDefinition/VDSCoordinateTransformer classes).

## Issue #5: Support for prestack-migrated seismic
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/5 | Morten Ofstad | 2021-02-04 | Milestone: Version 2.0

The SEG-Y importer should create 4D volumes with a bin-grid for these data types.

## Issue #6: Support for unbinned prestack seismic
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/6 | Morten Ofstad | 2021-02-04 | Milestone: Version 2.0

This includes shot, receiver and CMP/CDP/CRP gathers. The axis information needs to be correct according to KnownMetadata.h; no bingrid metadata should be added.

## Issue #7: SEG-Y importer LOD generation
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/7 | Morten Ofstad | 2022-07-21 | Milestone: Version 2.1

The SEG-Y importer needs to be augmented with functionality to create LODs. Since we're writing to an object store, we don't want to base the LOD generation on re-downloading all objects in order to generate LODs, so we need to implement an in-memory cache of partially written LOD tiles.
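The shape of such a cache can be sketched as follows (hypothetical names; the real importer would key tiles by chunk index and LOD level, and would know how many contributions each tile needs):

```python
class LodTileCache:
    """Accumulate partial contributions to LOD tiles so nothing has to be
    re-downloaded; flush a tile as soon as all contributions have arrived."""

    def __init__(self, contributions_needed, flush):
        self.contributions_needed = contributions_needed
        self.flush = flush      # called with (tile_key, contributions) when a tile completes
        self.partial = {}       # tile_key -> list of contributions received so far

    def add(self, tile_key, contribution):
        parts = self.partial.setdefault(tile_key, [])
        parts.append(contribution)
        if len(parts) == self.contributions_needed:
            # Tile is complete: hand it off and drop it from memory.
            self.flush(tile_key, self.partial.pop(tile_key))
```

A tile only stays in memory until its last contribution arrives, so the cache is bounded by the number of tiles in flight rather than the dataset size.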
If possible this should be implemented as a re-usable utility class that can be used by other importers with similar needs.

## Issue #8: File backend
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/8 | Morten Ofstad | 2020-08-07 | Milestone: Version 2.0

A file backend based on the huebds library for manipulating data-store files should be added. This needs to have a compatibility layer so it can read files written with the commercial library (translating the serialized objects into JSON that is similar to the objects found in an object store version of VDS). The handling of chunk-metadata and metadata-pages is done quite differently for the file format, so there is some refactoring work to be done to make this backend possible.

## Issue #9: SEGYScan and SEGYUpload tools should be merged
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/9 | Morten Ofstad | 2020-01-27 | Milestone: Version 1.0

There is no need to have two separate tools for scanning and uploading SEG-Y files; the command line options of both should be available, and an additional --scan-only parameter can be added to tell the tool to only scan the file and write the results of the scanning to a JSON file.
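The merged interface could look roughly like this (sketch; only --scan-only comes from the issue, the other option names are placeholders):

```python
import argparse

def make_parser():
    # One tool exposing both the scanning and the uploading options.
    parser = argparse.ArgumentParser(prog="SEGYImport")
    parser.add_argument("input", help="SEG-Y file to scan and/or upload")
    parser.add_argument("--scan-only", action="store_true",
                        help="only scan the file and write the scan results to a JSON file")
    parser.add_argument("--scan-output", default="scan.json",
                        help="where to write the JSON scan results (placeholder name)")
    return parser
```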
The merged command-line tool should be named SEGYImport to be consistent with the naming of the SEGYExport tool.

## Issue #10: UploadRequestAWS gets destructed while FlushUploadQueue is waiting for it to finish
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/10 | Morten Ofstad | 2019-12-20 | Assignee: Jørgen Lind <jorgen.lind@3lc.ai>

We are seeing two types of mutex error appear randomly, and abort the process: unlocking an unowned mutex, and a mutex destroyed whilst busy.
The stack for unlocking unowned mutex is:
```
ucrtbased.dll!issue_debug_notification(const wchar_t * const message) Line 28 C++
ucrtbased.dll!__acrt_report_runtime_error(const wchar_t * message) Line 154 C++
ucrtbased.dll!abort() Line 61 C++
msvcp140d.dll!_Thrd_abort(const char * msg) Line 18 C++
msvcp140d.dll!_Mtx_unlock(_Mtx_internal_imp_t * mtx) Line 167 C++
> otter.dll!std::_Mutex_base::unlock() Line 67 C++
otter.dll!std::unique_lock<std::mutex>::~unique_lock<std::mutex>() Line 188 C++
otter.dll!OpenVDS::UploadRequestAWS::WaitForFinish() Line 302 C++
otter.dll!OpenVDS::VolumeDataAccessManagerImpl::FlushUploadQueue() Line 598 C++
otter.dll!OpenVDS::VolumeDataPageAccessorImpl::Commit() Line 504 C++
otter.dll!otter_core::VdsImage::writeToBlock(int block_index, void * buffer, unsigned __int64 buffer_size) Line 410 C++
```
The stack for the mutex destroyed whilst busy is:
```
ucrtbased.dll!issue_debug_notification(const wchar_t * const message) Line 28 C++
ucrtbased.dll!__acrt_report_runtime_error(const wchar_t * message) Line 154 C++
ucrtbased.dll!abort() Line 61 C++
msvcp140d.dll!_Thrd_abort(const char * msg) Line 18 C++
msvcp140d.dll!_Mtx_destroy_in_situ(_Mtx_internal_imp_t * mtx) Line 65 C++
> otter.dll!std::_Mutex_base::~_Mutex_base() Line 44 C++
otter.dll!std::mutex::~mutex() C++
otter.dll!OpenVDS::UploadRequestAWS::~UploadRequestAWS() C++
otter.dll!OpenVDS::UploadRequestAWS::`scalar deleting destructor'(unsigned int) C++
otter.dll!std::_Destroy_in_place<OpenVDS::UploadRequestAWS>(OpenVDS::UploadRequestAWS & _Obj) Line 242 C++
otter.dll!std::_Ref_count_obj2<OpenVDS::UploadRequestAWS>::_Destroy() Line 1504 C++
otter.dll!std::_Ref_count_base::_Decref() Line 651 C++
otter.dll!std::_Ptr_base<OpenVDS::UploadRequestAWS>::_Decref() Line 882 C++
otter.dll!std::shared_ptr<OpenVDS::UploadRequestAWS>::~shared_ptr<OpenVDS::UploadRequestAWS>() Line 1132 C++
otter.dll!OpenVDS::upload_callback(const Aws::S3::S3Client * client, const Aws::S3::Model::PutObjectRequest & putRequest, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> & outcome, std::weak_ptr<OpenVDS::UploadRequestAWS> weak_upload) Line 228 C++
otter.dll!OpenVDS::UploadRequestAWS::run::__l2::<lambda>(const Aws::S3::S3Client * client, const Aws::S3::Model::PutObjectRequest & putRequest, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> & outcome, const std::shared_ptr<Aws::Client::AsyncCallerContext const> & __formal) Line 293 C++
otter.dll!std::_Invoker_functor::_Call<void <lambda>(const Aws::S3::S3Client *, const Aws::S3::Model::PutObjectRequest &, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> &, const std::shared_ptr<Aws::Client::AsyncCallerContext const> &) &,Aws::S3::S3Client const *,Aws::S3::Model::PutObjectRequest const &,Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> const &,std::shared_ptr<Aws::Client::AsyncCallerContext const> const &>(OpenVDS::UploadRequestAWS::run::__l2::void <lambda>(const Aws::S3::S3Client *, const Aws::S3::Model::PutObjectRequest &, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> &, const std::shared_ptr<Aws::Client::AsyncCallerContext const> &) & _Obj, const Aws::S3::S3Client * && <_Args_0>, const Aws::S3::Model::PutObjectRequest & <_Args_1>, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> & <_Args_2>, const std::shared_ptr<Aws::Client::AsyncCallerContext const> & <_Args_3>) Line 1579 C++
otter.dll!std::invoke<void <lambda>(const Aws::S3::S3Client *, const Aws::S3::Model::PutObjectRequest &, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> &, const std::shared_ptr<Aws::Client::AsyncCallerContext const> &) &,Aws::S3::S3Client const *,Aws::S3::Model::PutObjectRequest const &,Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> const &,std::shared_ptr<Aws::Client::AsyncCallerContext const> const &>(OpenVDS::UploadRequestAWS::run::__l2::void <lambda>(const Aws::S3::S3Client *, const Aws::S3::Model::PutObjectRequest &, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> &, const std::shared_ptr<Aws::Client::AsyncCallerContext const> &) & _Obj, const Aws::S3::S3Client * && <_Args_0>, const Aws::S3::Model::PutObjectRequest & <_Args_1>, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> & <_Args_2>, const std::shared_ptr<Aws::Client::AsyncCallerContext const> & <_Args_3>) Line 1579 C++
otter.dll!std::_Invoker_ret<void,1>::_Call<void <lambda>(const Aws::S3::S3Client *, const Aws::S3::Model::PutObjectRequest &, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> &, const std::shared_ptr<Aws::Client::AsyncCallerContext const> &) &,Aws::S3::S3Client const *,Aws::S3::Model::PutObjectRequest const &,Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> const &,std::shared_ptr<Aws::Client::AsyncCallerContext const> const &>(OpenVDS::UploadRequestAWS::run::__l2::void <lambda>(const Aws::S3::S3Client *, const Aws::S3::Model::PutObjectRequest &, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> &, const std::shared_ptr<Aws::Client::AsyncCallerContext const> &) & <_Vals_0>, const Aws::S3::S3Client * && <_Vals_1>, const Aws::S3::Model::PutObjectRequest & <_Vals_2>, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> & <_Vals_3>, const std::shared_ptr<Aws::Client::AsyncCallerContext const> & <_Vals_4>) Line 1598 C++
otter.dll!std::_Func_impl_no_alloc<void <lambda>(const Aws::S3::S3Client *, const Aws::S3::Model::PutObjectRequest &, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> &, const std::shared_ptr<Aws::Client::AsyncCallerContext const> &),void,Aws::S3::S3Client const *,Aws::S3::Model::PutObjectRequest const &,Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> const &,std::shared_ptr<Aws::Client::AsyncCallerContext const> const &>::_Do_call(const Aws::S3::S3Client * && <_Args_0>, const Aws::S3::Model::PutObjectRequest & <_Args_1>, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> & <_Args_2>, const std::shared_ptr<Aws::Client::AsyncCallerContext const> & <_Args_3>) Line 927 C++
aws-cpp-sdk-s3.dll!std::_Func_class<void,Aws::S3::S3Client const *,Aws::S3::Model::PutObjectRequest const &,Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> const &,std::shared_ptr<Aws::Client::AsyncCallerContext const> const &>::operator()(const Aws::S3::S3Client * <_Args_0>, const Aws::S3::Model::PutObjectRequest & <_Args_1>, const Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> & <_Args_2>, const std::shared_ptr<Aws::Client::AsyncCallerContext const> & <_Args_3>) Line 970 C++
aws-cpp-sdk-s3.dll!Aws::S3::S3Client::PutObjectAsyncHelper(const Aws::S3::Model::PutObjectRequest & request, const std::function<void __cdecl(Aws::S3::S3Client const *,Aws::S3::Model::PutObjectRequest const &,Aws::Utils::Outcome<Aws::S3::Model::PutObjectResult,Aws::Client::AWSError<enum Aws::S3::S3Errors>> const &,std::shared_ptr<Aws::Client::AsyncCallerContext const> const &)> & handler, const std::shared_ptr<Aws::Client::AsyncCallerContext const> & context) Line 3316 C++
aws-cpp-sdk-s3.dll!Aws::S3::S3Client::PutObjectAsync::__l2::<lambda>() Line 3311 C++
aws-cpp-sdk-s3.dll!std::_Invoker_functor::_Call<void <lambda>(void) &>(Aws::S3::S3Client::PutObjectAsync::__l2::void <lambda>(void) & _Obj) Line 1579 C++
aws-cpp-sdk-s3.dll!std::invoke<void <lambda>(void) &>(Aws::S3::S3Client::PutObjectAsync::__l2::void <lambda>(void) & _Obj) Line 1579 C++
aws-cpp-sdk-s3.dll!std::_Invoker_ret<std::_Unforced,0>::_Call<void <lambda>(void) &>(Aws::S3::S3Client::PutObjectAsync::__l2::void <lambda>(void) & <_Vals_0>) Line 1615 C++
aws-cpp-sdk-s3.dll!std::_Call_binder<std::_Unforced,void <lambda>(void),std::tuple<>,std::tuple<>>(std::_Invoker_ret<std::_Unforced,0> __formal, std::integer_sequence<unsigned __int64> __formal, Aws::S3::S3Client::PutObjectAsync::__l2::void <lambda>(void) & _Obj, std::tuple<> & _Tpl, std::tuple<> && _Ut) Line 1402 C++
aws-cpp-sdk-s3.dll!std::_Binder<std::_Unforced,void <lambda>(void)>::operator()<>() Line 1442 C++
aws-cpp-sdk-s3.dll!std::_Invoker_functor::_Call<std::_Binder<std::_Unforced,void <lambda>(void)> &>(std::_Binder<std::_Unforced,void <lambda>(void)> & _Obj) Line 1579 C++
aws-cpp-sdk-s3.dll!std::invoke<std::_Binder<std::_Unforced,void <lambda>(void)> &>(std::_Binder<std::_Unforced,void <lambda>(void)> & _Obj) Line 1579 C++
aws-cpp-sdk-s3.dll!std::_Invoker_ret<void,1>::_Call<std::_Binder<std::_Unforced,void <lambda>(void)> &>(std::_Binder<std::_Unforced,void <lambda>(void)> & <_Vals_0>) Line 1598 C++
aws-cpp-sdk-s3.dll!std::_Func_impl_no_alloc<std::_Binder<std::_Unforced,void <lambda>(void)>,void>::_Do_call() Line 927 C++
aws-cpp-sdk-core.dll!std::_Func_class<void>::operator()() Line 970 C++
aws-cpp-sdk-core.dll!Aws::Utils::Threading::DefaultExecutor::SubmitToThread::__l2::<lambda>() Line 29 C++
aws-cpp-sdk-core.dll!std::_Invoker_functor::_Call<void <lambda>(void)>(Aws::Utils::Threading::DefaultExecutor::SubmitToThread::__l2::void <lambda>(void) && _Obj) Line 1579 C++
aws-cpp-sdk-core.dll!std::invoke<void <lambda>(void)>(Aws::Utils::Threading::DefaultExecutor::SubmitToThread::__l2::void <lambda>(void) && _Obj) Line 1579 C++
aws-cpp-sdk-core.dll!std::thread::_Invoke<std::tuple<void <lambda>(void)>,0>(void * _RawVals) Line 43 C++
ucrtbased.dll!thread_start<unsigned int (__cdecl*)(void *),1>(void * const parameter) Line 97 C++
```

Assignee: Jørgen Lind <jorgen.lind@3lc.ai>

## Issue #11: Need to be able to use Zip compression for SEGYTraceHeaders
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/11 | Morten Ofstad | 2021-06-16 | Assignee: Morten Ofstad

It needs to be possible to specify the compression method when creating a new volume. Sufficient control over the per-channel compression method needs to be added so we can have the Amplitudes uncompressed while the trace headers are Zipped.

## Issue #13: Create a slice server example
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/13 | Morten Ofstad | 2020-01-27 | Milestone: Version 1.0 | Assignee: Morten Ofstad

Create example code that serves PNG images of slices of seismic over HTTP using Python and Flask. This shows off the Python API and is a realistic use-case for the technology.

## Issue #14: Add IOManagerCurl
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/14 | Jørgen Lind <jorgen.lind@3lc.ai> | 2021-03-11 | Milestone: Version 2.0 | Assignee: Jørgen Lind

To maintain predictable performance between IOManagers, there should exist an IOManager utilising curl on both Windows and Linux that communicates with a server without any authentication.
This can then be referenced as a benchmark for the performance characteristics the other IOManagers should achieve. The IOManager should be written in such a way that it is "trivial" for other IOManagers to utilise the HTTP transfer code and add their own signing headers etc.

## Issue #15: Python will not error on openvds.open() when incorrect credentials
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/15 | Marius Storm-Olsen <marius@bluware.com> | 2020-02-04

The following code will give you a None handle without any error code/string.
The None handle will still give you a valid VolumeDataAccessManager, but the subsequent VolumeSubsetRequest will fail:
```python
import openvds
import numpy
AWS_BUCKET = 'bluware-vds-public-eu-north-1'
AWS_REGION = 'eu-north-1'
AWS_OBJECTID = '5790BB045F835E6B'
opt = openvds.AWSOpenOptions(bucket=AWS_BUCKET, region=AWS_REGION, key=AWS_OBJECTID)
err = openvds.Error()
try:
    ovds = openvds.open(opt, err)
except:
    print('Failed to Open VDS: ' + err.string)
if err.code != 0:
    print('Error code: ' + str(err.code))
if ovds is None:
    print('No valid handle')
print(ovds)
acc = openvds.VolumeDataAccessManager(ovds)
print(acc)
r = acc.requestVolumeSubset((0,0,0),(100,100,100))
print(r)
```
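Until openvds.open() raises on bad credentials itself, a small caller-side guard makes the failure explicit. This sketch only assumes the Error object's `code`/`string` attributes shown in the report; OpenVDSOpenError is a hypothetical name:

```python
class OpenVDSOpenError(RuntimeError):
    pass

def require_handle(handle, err):
    """Turn a silent None handle (or non-zero error code) into an exception."""
    if handle is None or err.code != 0:
        raise OpenVDSOpenError(f"open failed (code={err.code}): {err.string}")
    return handle

# With the report's code above, this would be used as:
#   ovds = require_handle(openvds.open(opt, err), err)
```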
![image](/uploads/05d9c8d026fdf5814b54179543a1f5fb/image.png)
With proper credentials, we get success:
![image](/uploads/8c644d2da0bcfe8ce46e9aafee00daf6/image.png)

## Issue #16: Convert asserts to std::runtime_error where it makes sense
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/16 | Morten Ofstad | 2021-06-16 | Milestone: Version 2.0 | Assignee: Jørgen Lind <jorgen.lind@3lc.ai>

Any calls to the public API with invalid parameters should result in a std::runtime_error, not in an assert triggering (which will only happen in debug builds). We will continue using error codes for errors that are not caused by an invalid program (e.g. IO errors), and will use asserts for checking preconditions/postconditions and class invariants.

## Issue #17: Add a benchmarking tool
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/17 | Morten Ofstad | 2020-03-10 | Assignee: Jørgen Lind <jorgen.lind@3lc.ai>

The VDSBenchmark should have options to read the whole dataset as slices (along any axis) and tiles, and to control how many requests are in the queue and the size of tiles. It should report detailed timings and the MB/s rate.
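The core measurement behind such a report is simple; a sketch (request_fn is a stand-in for issuing one batch of slice/tile requests):

```python
import time

def measure_throughput(request_fn, total_bytes):
    """Run one batch of requests and report elapsed seconds and MB/s."""
    start = time.perf_counter()
    request_fn()
    elapsed = time.perf_counter() - start
    return {"seconds": elapsed,
            "mb_per_s": (total_bytes / 1e6) / elapsed if elapsed > 0 else float("inf")}
```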
## Issue #18: Document Python API
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/18 | Morten Ofstad | 2020-06-03 | Milestone: Version 1.0 | Assignee: Morten Ofstad

Need to set up Sphinx autodoc to make separate sections for the C++ and Python APIs since they are not identical.

## Issue #19: Complete C++ API documentation
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/19 | Morten Ofstad | 2020-06-03 | Assignee: Morten Ofstad

Currently only the VolumeDataAccessManager methods are documented. The documentation needs to be complete and well formatted.

## Issue #20: Document tools
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/20 | Morten Ofstad | 2020-05-27 | Milestone: Version 1.0 | Assignee: Morten Ofstad

The SEGYImport and SEGYExport tools need to be documented in the Sphinx documentation (including usage examples and an explanation of the whole import/export process), not just through the --help option.

## Issue #21: Replace IOError with std::error_code
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/21 | Morten Ofstad | 2021-06-16 | Assignee: Jørgen Lind <jorgen.lind@3lc.ai>
Using std::error_code is more convenient, but we will need to make sure we internally get an (int, char *) pair from a TLS (via a getLastError() method when a method returns unsuccessfully) and then convert to std::error_code in inline methods, so we are not hit by ABI incompatibilities when mixing debug/release builds and/or compiler versions.

## Issue #22: Create a frontend for the SliceServer
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/22 | Morten Ofstad | 2021-06-16

The SliceServer example would be a lot cooler if it had an HTML frontend that let you select the slice dimension and drag sliders to change the inline/crossline/time position.

## Issue #23: SEGYExport should be able to create file/trace headers if the VDS wasn't imported from SEG-Y
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/23 | Morten Ofstad | 2021-06-16

For data scientists it is a valuable feature to be able to export their probability cubes or other similar datasets to SEG-Y in order to read them into software that isn't VDS capable.
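Synthesizing a minimal binary file header is feasible; a sketch (field offsets per the SEG-Y standard: sample interval at header bytes 17-18, samples per trace at 21-22, data sample format code at 25-26; everything else left zero, and the function name is hypothetical):

```python
import struct

def make_binary_header(sample_interval_us, samples_per_trace, format_code=5):
    """Build a 400-byte SEG-Y binary file header with just the fields a
    generic exporter can fill in (format code 5 = IEEE float)."""
    header = bytearray(400)
    struct.pack_into(">H", header, 16, sample_interval_us)   # bytes 17-18
    struct.pack_into(">H", header, 20, samples_per_trace)    # bytes 21-22
    struct.pack_into(">H", header, 24, format_code)          # bytes 25-26
    return bytes(header)
```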
In order for that to work, the SEGYExport tool needs to be able to create its own headers if none are present in the input VDS.

## Issue #24: ZGY import/export
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/24 | Morten Ofstad | 2024-03-07

It would be desirable to be able to import/export the ZGY format from Ocean/Petrel; this is dependent on the ZGY libraries being open-sourced.

## Issue #25: SEGYImport should create ImportInformation metadata
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/25 | Morten Ofstad | 2020-04-08 | Milestone: Version 1.0 | Assignee: Morten Ofstad

The ImportInformation metadata category was recently added to include more information about the original file name and time of import; SEGYImport should generate this metadata.

## Issue #26: Make IO tests for Error situations
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/26 | Jørgen Lind <jorgen.lind@3lc.ai> | 2020-03-10 | Milestone: Version 1.0 | Assignee: Jørgen Lind

We need to be able to handle errors in the IO stack more gracefully. This needs some unit tests.
Make a facade IOManager to be able to fake IO errors and then create tests to verify behaviour for scenarios that include:
- Invalid Meta data Page data
- Invalid Meta data Page http request
- Invalid Data chunk data
- Invalid Data chunk http download
- Invalid Meta data Page upload
- Invalid Data chunk upload
- Invalid VolumeDataPage upload
- Invalid VolumeDataPage download
- Invalid TraceHeaders upload
- Invalid TraceHeaders downloads

## Issue #27: Metadata Page download race
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/27 | Jørgen Lind <jorgen.lind@3lc.ai> | 2020-02-19 | Milestone: Version 1.0 | Assignee: Jørgen Lind

When receiving an invalid metadata page there is a race leading to a deadlock.

## Issue #29: SEGYExport won't export datasets where Dimensions_012 is unavailable
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/29 | Morten Ofstad | 2020-03-27 | Assignees: Morten Ofstad, Jørgen Lind <jorgen.lind@3lc.ai>

This prevents SEGYExport from working correctly on 2D datasets or in other cases where Dimensions_01 is available but Dimensions_012 is not. It also won't work with crossline-oriented prestack or poststack seismic where the primary key corresponds to dimension 1 instead of dimension 2. We should identify which dimension corresponds to the sample dimension and always loop over the primary dimension.
We should also read either from the 3D dimension group or the 2D dimension group that contains the first two dimensions of the 3D dimension group.Morten OfstadJørgen Lindjorgen.lind@3lc.aiMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/30SEGYExport doesn't work with other formats than IBM-float2020-03-27T12:41:37ZMorten OfstadSEGYExport doesn't work with other formats than IBM-floatThere are no checks to see what DataSampleFormat the original file used, and there are no checks for the format of the input VDS. The default should be to export as integer data when the VDS is U8 or U16 with bias corresponding to signed...There are no checks to see what DataSampleFormat the original file used, and there are no checks for the format of the input VDS. The default should be to export as integer data when the VDS is U8 or U16 with bias corresponding to signed integer, and to honor any DataSampleFormatCode metadata in the SEGY category and failing that we can peek at the original binary header to see if that indicates using IEEE.Morten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/31AWS authentication fails with assumed roles2020-03-10T14:40:43ZMorten OfstadAWS authentication fails with assumed rolesThis is actually caused by an issue in the AWS C++ SDK where configurations with assumed roles are ignored by the DefaultCredentialsProviderChain: https://github.com/aws/aws-sdk-cpp/issues/150
However, we need to find a workaround as this was promised to be fixed "incredibly soon" back in 2016 and still isn't fixed.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/32SEGYImport should be able to import SEG-Y from S3 or Azure Blob Storage2020-03-10T10:26:24ZMorten OfstadSEGYImport should be able to import SEG-Y from S3 or Azure Blob StorageThis requires making it possible to instantiate and use an IOManager from a client of the OpenVDS API (in this case the SEGYImport command-line tool). It also requires adding command line options to provide the required OpenOptions to th...This requires making it possible to instantiate and use an IOManager from a client of the OpenVDS API (in this case the SEGYImport command-line tool). It also requires adding command line options to provide the required OpenOptions to that IOManager to the SEGYImport command and matching the input files to a pattern so s3:// is recognized as a file in an S3 Blob and getting some advice from Microsoft on how to refer to data on Azure Blob Storage.Version 1.0https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/33Parallel write support and example2024-03-07T09:17:38ZMorten OfstadParallel write support and exampleFor R3, the HPC group is interested in developing an example of how to do parallel write (e.g. using MPI). This probably requires adding some support functions to make it easier to send metadata to a central coordinator and write the chu...For R3, the HPC group is interested in developing an example of how to do parallel write (e.g. using MPI). 
This probably requires adding some support functions to make it easier to send metadata to a central coordinator and write the chunk-metadata pages separately.Morten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/34zlib use intrinsics on windows2020-04-16T13:03:29ZJørgen Lindjorgen.lind@3lc.aizlib use intrinsics on windowsNeed to pick up the cpu architecture and set the correct cmake flagsNeed to pick up the cpu architecture and set the correct cmake flagsJørgen Lindjorgen.lind@3lc.aiJørgen Lindjorgen.lind@3lc.aihttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/35SEGYImport should split network get requests into chunks of max 8 MB2020-04-16T13:01:05ZJørgen Lindjorgen.lind@3lc.aiSEGYImport should split network get requests into chunks of max 8 MBJørgen Lindjorgen.lind@3lc.aiJørgen Lindjorgen.lind@3lc.aihttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/36Missing GetMappedValueCount(int channel) in VolumeDataLayout.h2021-06-16T22:19:45ZCamille PerinMissing GetMappedValueCount(int channel) in VolumeDataLayout.hThere is no `virtual int GetMappedValueCount(int channel) const = 0;` in `VolumeDataLayout.h`
to retrieve the number of values to store per trace (when using per trace mapping)
Admittedly, users can already do this with `GetChannelDescriptor(int channel).GetMappedValueCount()`, but it seems like a small inconsistency.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/37Bug when using WaitForCompletion(request, 1000)2020-04-23T11:43:08ZCamille PerinBug when using WaitForCompletion(request, 1000)The bug occurs when I try to display some progress information while requesting data.
I use :
```
while (!accessManager->WaitForCompletion(request, 1000)) {
if (accessManager->IsCanceled(request)) {
fmt::print(stdout, "Request canceled\n");
break;
}
fmt::print(stdout, "Progress : {} %\n", accessManager->GetCompletionFactor(request) * 100.);
}
```
instead of
```
bool finished = accessManager->WaitForCompletion(request);
```
I get this error :
```
/data/openSDU/openVDS/open-vds/cmake-build-debug/examples/SliceDump/slicedump --bucket openvds-test-int --region eu-west-3 --object vds/alwynDepth_w_IL_XL_IEEEFloat /tmp/slice.bmp --axis 2,1,0 --position 500 --progress
Using axis mapping [2, 1, 0]
Found data set with sample count [1163, 849, 1101]
Launch request ... OK
slicedump: /data/openSDU/openVDS/open-vds/src/OpenVDS/VDS/VolumeDataRequestProcessor.cpp:161: void OpenVDS::SetErrorForJob(OpenVDS::Job*): Assertion `job->cancelled' failed.
Process finished with exit code 6
```Jørgen Lindjorgen.lind@3lc.aiJørgen Lindjorgen.lind@3lc.aihttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/38API to constrain the library by provided CPU and Memory resources2021-06-16T22:19:44ZCamille PerinAPI to constrain the library by provided CPU and Memory resourcesCommercial VDS solution provides global memory and thread management methods :
```
ConfigMemoryManagement.getInstance().setProcessingCPUCacheMax(cacheSize);
ConfigMemoryManagement.getInstance().setEnableProcessingThreadX(true);
```
Could similar helper methods be implemented in OpenVDS?https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/39Azure make sure id uses the blob key2020-05-28T07:30:40ZJørgen Lindjorgen.lind@3lc.aiAzure make sure id uses the blob keyVersion 1.0https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/40SEGYImport is generating faster than network2020-05-29T12:02:39ZJørgen Lindjorgen.lind@3lc.aiSEGYImport is generating faster than networkVersion 1.0Jørgen Lindjorgen.lind@3lc.aiJørgen Lindjorgen.lind@3lc.aihttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/41Add library path instructions to readme2021-06-16T22:19:43ZJørgen Lindjorgen.lind@3lc.aiAdd library path instructions to readmehttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/42update readme to point to the new documentation location2020-05-25T11:25:09ZJørgen Lindjorgen.lind@3lc.aiupdate readme to point to the new documentation locationhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/43CI should fail on missing new python api2021-06-16T22:19:42ZJørgen Lindjorgen.lind@3lc.aiCI should fail on missing new python apiWhen adding functionality that should update the python bindings, then ci should failWhen adding functionality that should update the python bindings, then ci should failhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/44dont use std::atomic<std::chrono....> in requestprocessor2020-05-08T15:13:22ZJørgen Lindjorgen.lind@3lc.aidont use std::atomic<std::chrono....> in requestprocessorhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/45Compilation fails on centos 7 with recent devtoolset2020-09-01T09:12:54ZMorten OfstadCompilation fails on centos 7 with recent devtoolset> 
/usr/include/c++/4.8.2/atomic: In instantiation of 'struct std::atomic<std::chrono::time_point<std::chrono::_V2::steady_clock, std::chrono::duration<long int, std::ratio<1l, 1000000000l> > > >':
> /project/open-vds/src/OpenVDS/VDS/VolumeDataPageAccessorImpl.h:50:67: required from here
> /usr/include/c++/4.8.2/atomic:167:7: error: function 'std::atomic<_Tp>::atomic() [with _Tp = std::chrono::time_point<std::chrono::_V2::steady_clock, std::chrono::duration<long int, std::ratio<1l, 1000000000l> > >]' defaulted on its first declaration with an exception-specification that differs from the implicit declaration 'constexpr std::atomic<std::chrono::time_point<std::chrono::_V2::steady_clock, std::chrono::duration<long int, std::ratio<1l, 1000000000l> > > >::atomic()'
> atomic() noexcept = default;
> ^
>
I don't think we need to use atomic for the m_lastUsed timestamp -- either that or we can just convert it to a plain int64_t which has milliseconds after startup (using the steady_clock).Version 1.0Jørgen Lindjorgen.lind@3lc.aiJørgen Lindjorgen.lind@3lc.aihttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/46Missing Java API for endpointOverride in AWSOpenOptions2020-05-25T12:49:39ZMorten OfstadMissing Java API for endpointOverride in AWSOpenOptionsEndpointOverride was added after Java bindings branched and is required to work with MinIO.EndpointOverride was added after Java bindings branched and is required to work with MinIO.Camille PerinCamille Perinhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/47Java VolumeDataAxisDescriptor and MetadataKey has public type/category/name m...2020-05-25T12:49:29ZMorten OfstadJava VolumeDataAxisDescriptor and MetadataKey has public type/category/name membersThese should be made private and only have .getXXX() methods so people don't think it is possible to set them and to be consistent with the other APIs. It is weird that VolumeDataAxisDescriptor is done differently than VolumeDataChannelD...These should be made private and only have .getXXX() methods so people don't think it is possible to set them and to be consistent with the other APIs. 
It is weird that VolumeDataAxisDescriptor is done differently than VolumeDataChannelDescriptor (which has getters) and that the VolumeDataChannelDescriptor uses 'm_' prefix for its members while the others do not ('m_' prefix feels a little weird in Java anyway).Camille PerinCamille Perinhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/48Implement resumable uploads2022-08-25T13:23:13ZMorten OfstadImplement resumable uploadsWe can check if the chunk-metadata on the metadata-page is in sync with the chunk-metadata header of the objects (with HEAD requests) to quickly find where to resume an upload that was stopped part-way through. This feature will be impor...We can check if the chunk-metadata on the metadata-page is in sync with the chunk-metadata header of the objects (with HEAD requests) to quickly find where to resume an upload that was stopped part-way through. This feature will be important as we move to larger datasets that are more likely to have a problem and are more costly to retry from the beginning.Version 2.1Morten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/49SEGYImport should not create datasets with ForceFullResolutionDimension true2020-05-25T12:37:42ZMorten OfstadSEGYImport should not create datasets with ForceFullResolutionDimension trueThis is probably a bug in the VolumeDataLayout code. We should also normalize this setting to false when there are no LOD levels since it makes no sense then.This is probably a bug in the VolumeDataLayout code. 
We should also normalize this setting to false when there are no LOD levels since it makes no sense then.Morten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/50Python bindings for the new IOManager2020-09-01T09:19:29ZSiarhei Khaletski (EPAM)Python bindings for the new IOManagerWhen I've added the new IOManager, should I do anything else that additional to the usual `python setup.py install`?
After the changes I tried to reinstall to use the new IOManager, but it did not appear in Python.
Do I need to run `python/tools/mkwrapper.py` against the `src` directory?
I did `python mkwrapper.py -I../../src /opt/open-vds/src/OpenVDS/*/*.h`, to no avail.
Was the `python/openvds` folder composed manually? Thanks.
OS: Ubuntu 18.04
Python: 3.7.0https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/51Fix buffer size calculations for multi-component data2020-06-18T08:17:39ZMorten OfstadFix buffer size calculations for multi-component dataThe buffer size calculation functions do not take the channel as an argument so they assume scalar data.The buffer size calculation functions do not take the channel as an argument so they assume scalar data.Version 1.0Morten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/52Cannot create VDSs with the Java API2021-08-31T13:57:59ZMorten OfstadCannot create VDSs with the Java APIJava API is missing create() calls and there might be other missing pieces needed to create a new VDS from scratch.Java API is missing create() calls and there might be other missing pieces needed to create a new VDS from scratch.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/53Python OpenOptions constructors should have default values for arguments2020-09-14T18:14:27ZMorten OfstadPython OpenOptions constructors should have default values for argumentsMost arguments (e.g. endpointOverride) should have default value so we can use the constructor with named arguments without specifying all of them.
This should be relatively easy to add, following the instructions from:
https://pybind11.readthedocs.io/en/stable/basics.html#default-argumentsMorten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/54Scaling of SEG-Y coordinates is negative2020-06-16T10:01:11ZMorten OfstadScaling of SEG-Y coordinates is negativeI have a SEG-Y with a coordinate scalar value of -100 in byte 72. However, when I convert to VDS using a header JSON with CoordinateScale defined, the resulting VDS scan file shows the XYs as negative numbers, as if multiplied by -100 ra...I have a SEG-Y with a coordinate scalar value of -100 in byte 72. However, when I convert to VDS using a header JSON with CoordinateScale defined, the resulting VDS scan file shows the XYs as negative numbers, as if multiplied by -100 rather than interpreted to multiply by 0.01. I can override the scalar in the command line with --scale 0.01 to get correct numbers.Morten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/55Publish GCP OpenVDS sources2020-06-23T05:37:24ZElizaveta Zeldina (EPAM)Publish GCP OpenVDS sourcesPublish GCP OpenVDS support sources to community repository.Publish GCP OpenVDS support sources to community repository.Dmitriy RudkoDmitriy Rudkohttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/56AWS Connection without sessionToken2020-08-20T11:35:07ZAaronAWS Connection without sessionTokenError received opening vds in python in the following manner.
There is no valid SessionToken because these are the bucket owner's credentials.
```python
connectionString = "AccessKeyId="+store['credentials']['key']
connectionString += ";SecretAccessKey=" + store['credentials']['secret']
connectionString += ";SessionToken=" + ""
connectionString += ";Region=" + "us-east-1"
uri = "s3://" + os.path.join(store['bucket'],key)
vds = openvds.open(uri,connectionString)
```
gives the following error.
```
vds = openvds.open(uri,connectionString)
RuntimeError: Invalid connection string. Name SessionToken has no value.
```Jørgen Lindjorgen.lind@3lc.aiJørgen Lindjorgen.lind@3lc.aihttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/57[VDSInfo] Query all metadata with single command2020-07-06T09:02:52ZJesse Hudgens[VDSInfo] Query all metadata with single commandThere is interest in querying all available metadata using a single command in VDSInfo rather than specifying a name or category predicate.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/58Refactor SEGY importer read strategy2020-09-01T09:10:43ZMorten OfstadRefactor SEGY importer read strategyThe SEGY importer currently reads entire inline-groups, split into multiple requests if they come from blob store. It is more efficient if we can start the import process before the entire inline-group is populated and instead read from the input SEGY in "pages" of a fixed number of traces that are needed to make the next column of cubes.Version 2.0https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/59Implement VDSFile support2020-08-07T12:32:50ZAnatoly YanchevskyImplement VDSFile supportEnum OpenVDS::OpenOptions::ConnectionType contains File.
Did you plan to create implementations for that?
Thanks.Version 1.1https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/60[SEGYImport] Accept XYs in IEEE or IBM floating point format2022-09-06T09:33:06ZJesse Hudgens[SEGYImport] Accept XYs in IEEE or IBM floating point formatOccasionally there are SEG-Y datasets with X and Y values stored as IEEE float or IBM float. This is very uncommon (we found about 2% of files stored this way). Currently SEGYImport accepts XYs in TwoByte or FourByte integer. It may be d...Occasionally there are SEG-Y datasets with X and Y values stored as IEEE float or IBM float. This is very uncommon (we found about 2% of files stored this way). Currently SEGYImport accepts XYs in TwoByte or FourByte integer. It may be desired to accept XYs in IEEE or IBM floating point format as well. ~suggestion ~"Priority:: Low"https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/61[Documentations] Correct tag in example of connection string to s32020-08-04T09:06:48ZAnatoly Yanchevsky[Documentations] Correct tag in example of connection string to s3Link: http://osdu.pages.community.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/getting-started.html#opening-a-vds
The connection string for s3 could look like this:
AccessKeyId=xxx;SecretAccessKey=xxx;SessionToken=xxx;Region=eu-north-1
In this string, 'SecretAccessKey' should be replaced with 'SecretKey', which is what the function 'createS3OpenOptions' expects.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/62Reading from s3 stuck2020-09-01T09:20:33ZAnatoly YanchevskyReading from s3 stuckI'm starting to test OpenVDS with AWS. After creating a container, I try to read it back.
On my first VM (vm_1) the running test takes 100% CPU while barely using the network; the process is stuck.
With vm_2 it works better, but not all the time.
vm_1: t3.large (2 CPU, 8 GB RAM)
vm_2: t3.x2large (8 CPU, 32 GB RAM)
I have some questions:
1. Do you have any recommendations for a minimal VM configuration?
2. Can I disable 'aws_sdk_' logs without changing the library?
3. Are there restrictions on the data size for OpenVDS::VolumeDataAccessManager::RequestVolumeSubset?
[read_OVDS.cc](/uploads/dcf91ad9acab4ef0e42988c411dcd16c/read_OVDS.cc)https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/63SEGYExport failing with "Inconsistent metadata" error2020-09-04T09:05:41ZsrinivasSEGYExport failing with "Inconsistent metadata" error1. SEGYImport, with headers option, created a VDS file
2. SEGYExport is failing to create a SEGY file using the VDS file created in step 1.
Screenshot of the commands and error attached:
![SegyExport](/uploads/854506a504936134a49736476cc18d7e/SegyExport.png)https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/64Add options for adaptive streaming2021-03-05T10:41:04ZMorten OfstadAdd options for adaptive streamingAll the code to do the adaptive requests and decompression is there, we need to add the options to actually use it...All the code to do the adaptive requests and decompression is there, we need to add the options to actually use it...Version 2.0Morten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/65setup.py need update to make it work with VS2019, plus some other suggestions2021-01-04T13:32:41ZThang Hasetup.py need update to make it work with VS2019, plus some other suggestions1. Windows build process require git to be installed. Please specify this in the build requirement.
2. setup.py does not detect Visual Studio 2019 and instead asks for Visual Studio 2017. I don't know how to fix this; it is most likely related to some kind of ninja build configuration.
3. The Desktop C++ component of Visual Studio 2019 is required (in the Visual Studio Installer), along with CMake integration (should be chosen by default in the installer). This seems obvious to developers, but for some newbies it might be important.
4. Would you ever consider building python wheel and bundle all the pre-built results with each release, instead of just uploading source code?https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/66Add start time/depth determination and override to SEGYImport2021-01-21T13:41:52ZMorten OfstadAdd start time/depth determination and override to SEGYImportWe need to read the start time trace header field to create the axis descriptor for the time/depth axis and we need to provide a command-line parameter with an override if the information in the trace header is wrong.We need to read the start time trace header field to create the axis descriptor for the time/depth axis and we need to provide a command-line parameter with an override if the information in the trace header is wrong.Version 2.0Morten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/67Support for prestack offset-volumes2022-01-12T13:30:03ZMorten OfstadSupport for prestack offset-volumesThe data from Volve in the ST0202vsST10010_4D/PreMig_data/Offset_gathers folder (ST0202ZDC12_PZ_PSDM_INPUT_OFFGRPS_T.ST0202-ST10010-PRESTACK-RECEIVER-SEGY.NORWAY.4230.23031.1613.1.sgy) cannot currently be imported properly since it shoul...The data from Volve in the ST0202vsST10010_4D/PreMig_data/Offset_gathers folder (ST0202ZDC12_PZ_PSDM_INPUT_OFFGRPS_T.ST0202-ST10010-PRESTACK-RECEIVER-SEGY.NORWAY.4230.23031.1613.1.sgy) cannot currently be imported properly since it should result in a 4D Sample/Trace(offset)/Crossline/Inline volume with dimension-group 0/2/3 bricks, similar to how crossline-sorted volumes have the same axis but dimension-group 0/1/3 bricks instead of the 'normal' dimension-group 0/1/2 bricks that are produced when the volume is inline-sorted.
It will require some modifications to how the file-info is created since the 'fold' is basically separate inlines for each offset, so we still want to pretend that inline is the primary key when creating the file-info.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/68Add exception translation to the Java API2022-07-21T13:12:49ZMorten OfstadAdd exception translation to the Java APIThe native VolumeDataAccessManager will throw exceptions for bad parameters etc., these should be translated to standard Java exceptions.The native VolumeDataAccessManager will throw exceptions for bad parameters etc., these should be translated to standard Java exceptions.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/69Can't build without Google GCS2021-03-08T15:07:57ZCamille PerinCan't build without Google GCSOpenVDS can't build without Google GCS since Dms introduction. This build procedure :
```
cmake -DDISABLE_GCP_IOMANAGER=ON ..
make
```
leads to this error :
```
Scanning dependencies of target sdapi_objects
[ 19%] Building CXX object 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/__/dms-c7ba5398/src/src/core/SDDataset.cc.o
[ 20%] Building CXX object 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/__/dms-c7ba5398/src/src/core/SDReadOnlyGenericDatasetAccessor.cc.o
[ 20%] Building CXX object 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/__/dms-c7ba5398/src/src/core/Constants.cc.o
[ 20%] Building CXX object 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/__/dms-c7ba5398/src/src/core/SDException.cc.o
[ 21%] Building CXX object 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/__/dms-c7ba5398/src/src/core/SDGenericDataset.cc.o
[ 21%] Building CXX object 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/__/dms-c7ba5398/src/src/core/SDHierarchicalDataset.cc.o
[ 21%] Building CXX object 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/__/dms-c7ba5398/src/src/core/SDManager.cc.o
[ 22%] Building CXX object 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/__/dms-c7ba5398/src/src/core/SDHierarchicalDatasetAccessor.cc.o
[ 22%] Building CXX object 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/__/dms-c7ba5398/src/src/core/SDUtils.cc.o
[ 23%] Building CXX object 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/__/dms-c7ba5398/src/src/lib/accessors/GcsAccessor.cc.o
/tmp/open-vds/3rdparty/dms-c7ba5398/src/src/lib/accessors/GcsAccessor.cc:29:10: fatal error: crc32c/crc32c.h: Aucun fichier ou dossier de ce type
#include "crc32c/crc32c.h"
^~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/build.make:180: 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/__/dms-c7ba5398/src/src/lib/accessors/GcsAccessor.cc.o] Error 1
make[2]: *** Attente des tâches non terminées....
make[1]: *** [CMakeFiles/Makefile2:782: 3rdparty/BuildDms/CMakeFiles/sdapi_objects.dir/all] Error 2
make: *** [Makefile:163: all] Error 2
```https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/70Can not create OpenVDS container in file.2021-03-11T13:23:52ZAnatoly YanchevskyCan not create OpenVDS container in file.In latest version: 1.2.0
When I try to work with OpenVDS as a file, I hit an assert in:
HueBulkDataStore::FileInterface *
HueBulkDataStoreImpl::AddFile(const char *fileName, int chunkCount, int indexPageEntryCount, int fileType, int chunkMetadataLength, int fileMetadataLength, bool overwriteExisting)
{
assert(!m_readOnly && m_extentAllocator);
this occur because in `urlToOpenOptions` missing `&removeProtocol` for file protocol.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/71Add command-line parameters to SEGYImport tool for overriding specific header...2021-03-03T15:41:36ZMorten OfstadAdd command-line parameters to SEGYImport tool for overriding specific header fieldsWhen SEGYImport is being called from a script/in a container it can be inconvenient to create a header format JSON file, so we should provide the option to override header fields directly on the command line.When SEGYImport is being called from a script/in a container it can be inconvenient to create a header format JSON file, so we should provide the option to override header fields directly on the command line.Version 2.0https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/72Change bricksize defaults when importing with compression2021-03-08T15:07:16ZMorten OfstadChange bricksize defaults when importing with compressionSEGYImport should default to 128 bricksize with 4 margin is using wavelet compression.SEGYImport should default to 128 bricksize with 4 margin is using wavelet compression.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/73Python VDS creation doesn't work2021-03-10T15:00:48ZMorten OfstadPython VDS creation doesn't workThere are two problems:
* The VolumeDataAxisDescriptors created by Python end up pointing their names and units to temporary copies of strings that are then destructed. This leads to invalid strings for names and units for the created VDS.
* The GetBuffer()/GetWritableBuffer() calls on VolumeDataPage don't create buffer views that actually work with Python and you just get a raw pointer that you can't use for anything.Version 2.0Morten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/75Fix Python wrappers for openWithAdaptiveCompressionTolerance/openWithAdaptive...2021-04-08T08:39:40ZMorten OfstadFix Python wrappers for openWithAdaptiveCompressionTolerance/openWithAdaptiveCompressionRatioThe wrapper generator doesn't automatically handle the StringWrapper type used in the native API so all calls to open/create require manually added pybind11 code. Unfortunately this was missed for the new openWithAdaptiveCompressionToler...The wrapper generator doesn't automatically handle the StringWrapper type used in the native API so all calls to open/create require manually added pybind11 code. Unfortunately this was missed for the new openWithAdaptiveCompressionTolerance/openWithAdaptiveCompressionRatio methods and the only way to open a file (or a dataset with one of the cloud providers is to use the roundabout way (can also use createOpenOptions()):
```
options = openvds.VDSFileOpenOptions("test.vds")
options.waveletAdaptiveMode = openvds.WaveletAdaptiveMode.Tolerance
options.waveletAdaptiveTolerance = 0.1
vds = openvds.open(options)
```
Ideally we should fix the wrapper generator so it translates StringWrapper/VectorWrapper to std::string/std::vector, or help pybind11 understand how to do the conversion; that way we won't have to keep adding hand-crafted pybind11 code in these cases.Version 2.1https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/76Implement automatic remapping from available dimension groups2022-02-16T09:53:06ZMorten OfstadImplement automatic remapping from available dimension groupsIf you request data from a dimension group that is not directly stored, but it has another available dimension group sharing at least two dimensions, it should remap the request to that dimension group.
A simple version of this will simp...If you request data from a dimension group that is not directly stored, but it has another available dimension group sharing at least two dimensions, it should remap the request to that dimension group.
A simple version of this would just change the dimensionsND of the request call automatically, but it should really also work for page accessors, in which case you need to create new pages by copying from the pages of the available dimension group.
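The remapping described can be sketched with plain NumPy (illustrative only; `remap_request` and the axis orderings are assumptions, not OpenVDS API): serving a request in one axis ordering from data stored in another ordering that shares the axes amounts to permuting the axes and copying.

```python
import numpy as np

# Illustrative only: `stored` plays the role of the available dimension
# group; `requested_axes` is the axis order of the requested group.
stored = np.arange(24).reshape(2, 3, 4)
requested_axes = (2, 0, 1)

def remap_request(data, axes):
    """Serve a request in another axis ordering by permuting the stored
    axes and copying, analogous to creating new pages from the pages of
    the available dimension group."""
    return np.transpose(data, axes).copy()

remapped = remap_request(stored, requested_axes)  # shape (4, 2, 3)
```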
This is needed to make things work like commercial VDS.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/77Issue with build gcc 9.12021-04-28T10:57:50ZAnatoly YanchevskyIssue with build gcc 9.1When we try to build the open-vds library with gcc 9.1, we get the following error:
../OpenVDS/Optional.h:45:12:
error: template placeholder type ‘const optional<...auto...>’ must be followed by a simple declarator-id
45 | optional(const std:...When we try to build the open-vds library with gcc 9.1, we get the following error:
../OpenVDS/Optional.h:45:12:
error: template placeholder type ‘const optional<...auto...>’ must be followed by a simple declarator-id
45 | optional(const std::optional& opt) : m_Value(opt.has_value() ? opt.value() : value_type()), m_HasValue(opt.has_value())
to resolve this, we changed 'optional(const std::optional& opt)' to 'optional(const auto& opt)'Morten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/78Fix SEGYImport for int8 and int16 formats2022-01-12T14:40:15ZMorten OfstadFix SEGYImport for int8 and int16 formatsCurrently it looks like this just hits an assert and in release mode it just crashes. That's not good. We need to add support for these formats.Currently it looks like this just hits an assert and in release mode it just crashes. That's not good. We need to add support for these formats.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/79Consider to rename tag in connections string for S32021-07-09T14:27:36ZAnatoly YanchevskyConsider to rename tag in connections string for S3Right now, for the S3 connection string, OpenVDS uses 'SecretKey'
http://osdu.pages.community.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/getting-started.html#opening-a-vds
But requests to the AWS API return 'secretAccessKey'.Right now, for the S3 connection string, OpenVDS uses 'SecretKey'
http://osdu.pages.community.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/getting-started.html#opening-a-vds
But requests to the AWS API return 'secretAccessKey'.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/80SEGYImport fails to produce SurveyCoordinateSystem metadata2021-04-30T15:33:55ZMorten OfstadSEGYImport fails to produce SurveyCoordinateSystem metadataThis is a regression introduced by 8822c88c.This is a regression introduced by 8822c88c.Morten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/81Support for multi-file SEG-Y import2022-02-16T09:51:37ZMorten OfstadSupport for multi-file SEG-Y importTrying to import multiple SEG-Y files will currently hit the "Only one input SEG-Y file may be specified" error; this needs to be fixed because it is common for prestack data to be split in multiple files to get around file size limitati...Trying to import multiple SEG-Y files will currently hit the "Only one input SEG-Y file may be specified" error; this needs to be fixed because it is common for prestack data to be split in multiple files to get around file size limitations in the filesystem etc.Version 2.1https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/82Build error for Ubuntu 20.04 Docker container2021-12-06T14:20:20ZPhilipp WitteBuild error for Ubuntu 20.04 Docker containerBuilding a Docker image for the latest OpenVDS version using the [Ubuntu Dockerfile](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/docker/ubuntu-20.04.Dockerfile) results in the fo...Building a Docker image for the latest OpenVDS version using the [Ubuntu Dockerfile](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/docker/ubuntu-20.04.Dockerfile) results in the following compilation errors:
/usr/bin/ld: tools/VDSCopy/CMakeFiles/VDSCopy.dir/VDSCopy.cpp.o: undefined reference to symbol 'pthread_create@@GLIBC_2.2.5'
/usr/bin/ld: /lib/x86_64-linux-gnu/libpthread.so.0: error adding symbols: DSO missing from command line
I also tried a manual build resulting in the same error (during make -j8).https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/83Issues when the input file is in SeismicDMS2021-06-21T11:04:18ZYan Sushchynski (EPAM)Issues when the input file is in SeismicDMSThere are 2 issues with the input file in the Seismic DMS:
1. If we specify a path like this `sd://<tenant>/<subproject>/<path>/<file_name>.segy` we get the error
```
HTTP 404, [seismic-store-service] The dataset sd://<tenant>/<subprojec...There are 2 issues with the input file in the Seismic DMS:
1. If we specify a path like this `sd://<tenant>/<subproject>/<path>/<file_name>.segy` we get the error
```
HTTP 404, [seismic-store-service] The dataset sd://<tenant>/<subproject>/<path> does not exist
```
However, if we duplicate the file name, it finds the file in SeismicDMS (`sd://<tenant>/<subproject>/<path>/<file_name>.segy/<file_name>.segy` is ok)
2. If we try to use the SeismicDMS path with duplicated file names we get this error
```
[500] Error executing an HTTP request [ HTTP 500, Cannot read property 'iss' of undefined ]
----------------------------------------
Retrying http_request for PUT request with
https://<seistore_host>/api/v3/dataset/tenant/<tenant>/subproject/<subproject>/dataset/<file_name>.segy/lock - retry number: 7
```
This error occurs when the IdToken is not passed to the request to SeismicDMS.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/84Add possibility to configure HTTP version using Env vars.2021-06-21T11:27:02ZYan Sushchynski (EPAM)Add possibility to configure HTTP version using Env vars.As the Google API doesn't work with HTTP/2, it would be useful to be able to configure the HTTP version of CURL requests using environment variables. Currently, when running the SEGYImport command with a destination in SeismicDMS, we get the following er...As the Google API doesn't work with HTTP/2, it would be useful to be able to configure the HTTP version of CURL requests using environment variables. Currently, when running the SEGYImport command with a destination in SeismicDMS, we get the following error:
```
[0] (ERROR) HTTPRequest::Send, performing request, detailed error: HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1) The error occurred while executing an HTTP request. error code: 92, error message: 'Stream error in the HTTP/2 framing layer']
Retrying http_request for POST request with https://storage.googleapis.com/upload/storage/v1/b/<bucket>/o?uploadType=multipart - retry number: 1
```
The same request with HTTP/1.1 works well.
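A minimal sketch of what such an environment-variable switch could look like. The variable name `OPENVDS_HTTP_VERSION` and the default are assumptions, not an existing OpenVDS setting; the numeric values are libcurl's `CURL_HTTP_VERSION_1_1` / `CURL_HTTP_VERSION_2_0` constants:

```python
import os

# Hypothetical sketch: OPENVDS_HTTP_VERSION is an assumed variable name.
# The numeric values are libcurl's CURL_HTTP_VERSION_* constants, which
# would be passed to CURLOPT_HTTP_VERSION on each request.
_CURL_HTTP_VERSIONS = {
    "1.1": 2,  # CURL_HTTP_VERSION_1_1
    "2": 3,    # CURL_HTTP_VERSION_2_0
}

def http_version_from_env(default="2"):
    """Pick the libcurl HTTP version code from the environment.

    Falls back to `default` when the variable is unset or has an
    unrecognized value, so a bad setting cannot break requests.
    """
    requested = os.environ.get("OPENVDS_HTTP_VERSION", default)
    return _CURL_HTTP_VERSIONS.get(requested, _CURL_HTTP_VERSIONS[default])
```

Setting `OPENVDS_HTTP_VERSION=1.1` would then force HTTP/1.1 for backends, such as Google's, that misbehave over HTTP/2.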
https://curl.se/libcurl/c/CURLOPT_HTTP_VERSION.htmlhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/85Crash with IOManagerAWS2021-06-28T08:09:31ZAnatoly YanchevskyCrash with IOManagerAWSTo work with the AWS SDK you need to call InitAPI/ShutdownAPI:
https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/basic-use.html
This is what you do in IOManagerAWS.
But this means that we cannot use the AWS SDK in the same process.
Similar probl...To work with the AWS SDK you need to call InitAPI/ShutdownAPI:
https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/basic-use.html
This is what you do in IOManagerAWS.
But this means that we cannot use the AWS SDK in the same process.
A similar problem is described in:
https://github.com/aws/aws-sdk-cpp/issues/456
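One conventional way to make a process-wide init/shutdown pair shareable is reference counting; the sketch below uses placeholder callables rather than the real AWS SDK:

```python
import threading

# Sketch only: init/shutdown stand in for Aws::InitAPI / Aws::ShutdownAPI.
# The first acquire runs init, the last release runs shutdown, so the SDK
# can be shared by OpenVDS and an embedding application in one process.
_lock = threading.Lock()
_ref_count = 0

def sdk_acquire(init):
    """Initialize the shared SDK on the first acquire."""
    global _ref_count
    with _lock:
        _ref_count += 1
        if _ref_count == 1:
            init()

def sdk_release(shutdown):
    """Shut the shared SDK down only when the last user releases it."""
    global _ref_count
    with _lock:
        _ref_count -= 1
        if _ref_count == 0:
            shutdown()
```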
Maybe you could provide a flag in AWSOpenOptions to skip your InitAPI/ShutdownAPI calls?https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/86API to get container size2023-08-14T15:54:50ZAnatoly YanchevskyAPI to get container sizeIt would be really useful to have an API for the size of an OpenVDS container.
Right now we can only store the size of the input file from which the container was created.It would be really useful to have an API for the size of an OpenVDS container.
Right now we can only store the size of the input file from which the container was created.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/87OpenVDS DAG to support metadata registration2023-04-03T08:36:35Zjingdong sunOpenVDS DAG to support metadata registrationOpenVDS DAG to support metadata registrationOpenVDS DAG to support metadata registrationhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/88[Bug]I have found a bug about minmax of channel,how can i contribute to commu...2021-07-26T07:30:41ZChenjian Qiu[Bug]I have found a bug about minmax of channel,how can i contribute to communityI have found a bug about the min/max of a channel; how can I contribute to the community?
I want to fix this bug myself.I have found a bug about the min/max of a channel; how can I contribute to the community?
I want to fix this bug myself.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/89Improve documentation of the value range for VolumeDataChannelDescriptor2022-10-21T07:29:44ZMorten OfstadImprove documentation of the value range for VolumeDataChannelDescriptorThere is currently no mention in the documentation that this value range is supposed to be used for display and outliers should be removed from it (like the SEGYImport command currently does). This is confusing for the users of the libra...There is currently no mention in the documentation that this value range is supposed to be used for display and outliers should be removed from it (like the SEGYImport command currently does). This is confusing for the users of the library who will most likely assume this value range is supposed to be the min/max of all the data.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/90Support datasets created by sdutil2021-08-09T11:13:23ZSiarhei Khaletski (EPAM)Support datasets created by sdutilThe OpenVds Import utility supports `sd://` dataset records as a SEGY file input.
The `sdutil` can be used to upload SEGY files to the seismic store to create dataset records.
## Issue
`sdutil` uploads files to the bucket with the name `0` (zero). R...The OpenVds Import utility supports `sd://` dataset records as a SEGY file input.
The `sdutil` can be used to upload SEGY files to the seismic store to create dataset records.
## Issue
`sdutil` uploads files to the bucket with the name `0` (zero). The reasoning and explanation are here https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/merge_requests/35#note_59888
SEGY file uploaded to bucket by `sdutil`:
![image](/uploads/69ae3466666885d9356c1c7083f81cb4/image.png)
OpenVDS, on the contrary, uses the `name` dataset property as the file name on the bucket. Of course, it fails with a file-not-found error.
![image](/uploads/91ef26dadb59beb3f6e9a5f54e7ac68a/image.png)
Arguments for OpenVds Import util:
```json
{
"segy_url": "sd://sk-tenant-seismic-data/sk-seismic-data/segy/ST0202R08_PS_PSDM_FULL_OFFSET_DEPTH.MIG_FIN.POST_STACK.3D.JS-017534",
"vds_url": "sd://sk-tenant-seismic-data/sk-seismic-data>/vds"
}
```
Is it possible to support both naming approaches?
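One possible shape of such dual-naming support, sketched with hypothetical helpers (`candidate_blob_names` and `resolve_blob` are not OpenVDS API):

```python
# Hypothetical helpers, not OpenVDS API: try the dataset's own name first
# (current OpenVDS behaviour), then the literal "0" that sdutil uses.
def candidate_blob_names(dataset_name):
    return [dataset_name, "0"]

def resolve_blob(blob_exists, dataset_name):
    """Return the first blob name that exists on the bucket, or None.

    `blob_exists` is a callable standing in for a storage-backend check.
    """
    for name in candidate_blob_names(dataset_name):
        if blob_exists(name):
            return name
    return None
```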
The `sdutil` team's proposal is here https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/merge_requests/35#note_59892https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/91getMetadataBLOB is missing from the Java API2021-10-12T10:31:10ZJørgen Lindjorgen.lind@3lc.aigetMetadataBLOB is missing from the Java APIhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/92Improve error handling with python's context manager on open()2022-01-22T15:38:13ZFilip BrzękImprove error handling with python's context manager on open()Hi,
when encountering an error during `vds.open()` in Python's SDK, those two snippets should behave equivalently, but they don't:
```python
vds_source = openvds.open(sd_path, sd_conn_str)
layout = openvds.getLayout(vds_source)
print("...Hi,
when encountering an error during `vds.open()` in Python's SDK, those two snippets should behave equivalently, but they don't:
```python
vds_source = openvds.open(sd_path, sd_conn_str)
layout = openvds.getLayout(vds_source)
print("ChannelCount: {}".format(layout.getChannelCount()))
openvds.close(vds_source)
```
this yields `RuntimeError: Open error: File::open`
```python
with openvds.open(sd_path, sd_conn_str) as vds_source:
layout = openvds.getLayout(vds_source)
print("ChannelCount: {}".format(layout.getChannelCount()))
```
this causes,
```
free(): invalid pointer
Fatal Python error: Aborted
Thread 0x00007f8d06c8f740 (most recent call first):
...
57032 abort (core dumped)
```
I suspect it might call `openvds.close()` on a VDS handle that failed to open.
slightly off-topic:
As we're trying to debug an issue with the connection to the seismic ddms, can you provide any tips on how to improve the verbosity of the open() error?
Thanks,
Filip
EDIT: after fixing sd_url to `sd_path`, the core dump happens in both cases.M10 - Release 0.13https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/93Impossible to install openvds module on Mac2022-07-21T13:09:53ZMateusz GeislerImpossible to install openvds module on MacOn a MacBook Pro (Intel processor) with Python 3.7 I'm not able to install the openvds module.
` pip install openvds
ERROR: Could not find a version that satisfies the requirement openvds (from versions: none)
ERROR: No matching distributi...On MacBook Pro (Intel processor) with python 3.7 I'm not able to install openvds module.
` pip install openvds
ERROR: Could not find a version that satisfies the requirement openvds (from versions: none)
ERROR: No matching distribution found for openvds
`https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/94[GCP] HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (1)2021-11-19T12:13:09ZYan Sushchynski (EPAM)[GCP] HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (1)Hi!
When we attempt to convert Segy to OpenVDS using the latest image `community.opengroup.org:5555/osdu/platform/domain-data-mgmt-services/seismic/open-vds/openvds-ingestion:2.1.8` in `SSDMS`, we get the following error:
```
(ERRO...Hi!
When we attempt to convert Segy to OpenVDS using the latest image `community.opengroup.org:5555/osdu/platform/domain-data-mgmt-services/seismic/open-vds/openvds-ingestion:2.1.8` in `SSDMS`, we get the following error:
```
(ERROR) HTTPRequest::Send, performing request, detailed error: HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1) The error occurred while executing an HTTP request. error code: 92, error message: 'Stream error in the HTTP/2 framing layer']
Retrying http_request for POST request with https://storage.googleapis.com/upload/storage/v1/b/<subproject-bucket>/o?uploadType=multipart - retry number: 1
```
This problem disappears after switching to the previous version of the image.
It seems like the `Google API` doesn't support HTTP/2 when we try to access this object in GCS.M10 - Release 0.13Jørgen Lindjorgen.lind@3lc.aiJørgen Lindjorgen.lind@3lc.aihttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/95patching metadata container on existing VDS2022-08-25T09:52:46ZFilip Brzękpatching metadata container on existing VDSDear developers,
is there a way of updating the metadata container on an already existing VDS instance, i.e. post `openvds.create`?
I've gone through the docs and the only place I've seen it is to provide it in the constructor.
use-cas...Dear developers,
is there a way of updating the metadata container on an already existing VDS instance, i.e. post `openvds.create`?
I've gone through the docs and the only place I've seen it is to provide it in the constructor.
use-case: we're trying to convert some other proprietary seismic format to open-VDS, and to preserve the idempotency of the conversion workflow we need to serialize some additional headers. It would be convenient to be able to patch those after filling the data pages. Is this possible as of now?
Thanks,
Filiphttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/96Building out of the box2021-12-09T02:37:33ZPaal KvammeBuilding out of the boxBuilding using docker/*.Dockerfile doesn't work out of the box. Not for me at least. Here is a list of some bugs and some annoyances. I am using the dockerfiles in the baseline but not the actual build scripts.
docker/centos8.Dockerfile...Building using docker/*.Dockerfile doesn't work out of the box. Not for me at least. Here is a list of some bugs and some annoyances. I am using the dockerfiles in the baseline but not the actual build scripts.
docker/centos8.Dockerfile:
- Need to rename PowerTools -> powertools (upstream change in CentOS 8)
- Building C++ "Release" causes several unit tests to crash. Need to use RelWithDebug, and copy the results to where a Release build would have put them.
- Note that boost 1.6.9 is installed in the docker image but not used, unless BOOST_INCLUDEDIR and BOOST_LIBRARYDIR is manually set.
- Note that Release libraries are installed to lib/ while Debug libraries are below lib64. Long live consistency.
- Need to edit the dockerfile to also install python3-devel to get the wheel built. I used "python3 setup.py --build-type RelWithDebug bdist_wheel" to do the actual build. Later attempts also disabled AWS, AZURE, etc.
- The Python wheel still didn't work for CentOS 8, even after fixing things so Python was found. Reason: Since I had to use "RelWithDebug" for the C++ API I assumed the same applied to Python. Not so. In fact, "RelWithDebug" silently fails in that case. The wheel, while created, is still incomplete.
docker/centos7.Dockerfile:
- The dockerfile installs devtoolset-8 while the build script tries to use devtoolset-7.
docker/ubuntu-20.04.Dockerfile:
- dlopen() and dlclose missing. Add the following to src/OpenVDS/CMakeLists.txt. Only needed for Ubuntu, but it appears to be harmless to do it for all. ```target_link_libraries(openvds_objects PRIVATE ${CMAKE_DL_LIBS})```
- Python is not found; need to edit CMakeLists.txt to look for Python3 and replace Development.Module with just Development. Note: I applied this to CentOS as well, but I have not verified that it was needed there.
All:
- When building sdapi (dms) on Linux, the library ends up as libsdapi.so.0.0 instead of the version (3.14 currently) found in src/src/core/SDVersion.h. Either the OpenVDS build is not using the devops/scripts/build-linux64.sh script (which does sound likely) or there is some problem with some of the other rules. The "0.0" version causes problems if an application uses both OpenVDS and some other module using sdapi.
- When something goes wrong building the Python wheel it is not obvious that there is a problem. I use "python3 setup.py bdist_wheel". Later attempts also disabled AWS, AZURE, etc. The wheel gets created but it is missing key files. Trying to install the wheel and import it gives an error about openvds core not available. Buried somewhere in the build logs I see a message about "Cannot find Python3, found version 3.6.8", which is itself pretty confusing. It sounds like, "I found it, but I didn't find it". What to do with such errors is not obvious.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/97Undefined behavior when disabling AWS2021-12-06T12:37:57ZPaal KvammeUndefined behavior when disabling AWSSetting DISABLE_AWS_IOMANAGER=TRUE triggers a bug in IOManager::CreateIOManager() in IOManager.cpp. The ifdef test for OPENVDS_NO_AWS_IOMANAGER should have been one line further up. The bug looks harmless but in fact triggers **C++ undef...Setting DISABLE_AWS_IOMANAGER=TRUE triggers a bug in IOManager::CreateIOManager() in IOManager.cpp. The ifdef test for OPENVDS_NO_AWS_IOMANAGER should have been one line further up. The bug looks harmless but in fact triggers **C++ undefined behavior**. It creates a path where this non-void function might fail to return a value. Even if the bad path is not executed, the compiler is allowed to completely trash your program. I have personally seen g++ do just that in a very similar case.
Some but not all compilers detect the problem and report a fatal error.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/98Please document the naming convention for branches and tags2021-12-03T15:37:44ZPaal KvammePlease document the naming convention for branches and tagsCould you consider adding a few lines in the README explaining how the main branches are named? For a long while I thought the latest "stable" version was missing a year of development. Here is what I eventually figured out. Can you confirm,...Could you consider adding a few lines in the README explaining how the main branches are named? For a long while I thought the latest "stable" version was missing a year of development. Here is what I eventually figured out. Can you confirm, and maybe summarize and update the readme? Or maybe it is already documented and I just didn't see it...
The current stable version as of this writing is v0.12.0.
Pulling the tip of the master branch will normally get a slightly newer version.
branch 2.1, tag 2.1.9 is the current development branch. Only relevant for those contributing to the code, and/or those that really like the bleeding edge. Looking at the git history it appears to have branched off from master a long time ago, but in reality almost all changes have been merged back to master. There are no obvious merge commits however. This was a major source of confusion as it looked like those consuming v0.12 were missing out on a lot of changes.
Tags do not change. As development continues, new 2.1.* tags are added and the 2.1 branch is fast forwarded to the newest tag.
Tag 1.2.5 is the tip of an older development branch and only has historical interest. Some 1.XX.YY labels are connected using fast forward, others are not and appear as dead ends.
Stable releases are named slightly differently. The code is fast forwarded, possibly all the way up to master, and a new label is created. The old release/0.XX is untagged and a new release/0.YY is created.
As of this writing the most recent stable is branch release/0.12 tag v0.12.0. Note the "v" in the tag that development branches are missing. As with the development branches, tags do not change. The next release will probably be branch release/0.13 tag v0.13.0. Branch release/0.12 gets deleted so if you need it you will have to select the tag or the git hash instead.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/99VDS roundtrip to dms fails.2021-12-20T09:32:27ZPaal KvammeVDS roundtrip to dms fails.OpenVDS #9d3e767e
I did the following (partly redacted)
- VDSCopy Volve_from_segy.vds sd://tenant/project/tmp.vds
- VDSCopy sd://tenant/project/tmp.vds roundtrip.vds
The second line resulted in an error about "TraceDimensions_012LOD0/C...OpenVDS #9d3e767e
I did the following (partly redacted)
- VDSCopy Volve_from_segy.vds sd://tenant/project/tmp.vds
- VDSCopy sd://tenant/project/tmp.vds roundtrip.vds
The second line resulted in an error about "TraceDimensions_012LOD0/ChunkMetadata/0" not being found.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/100mutability of VDS2021-12-23T12:59:57ZFilip Brzękmutability of VDSDear developers,
is there a way to update a data chunk of already existing VDS? Is this even possible when considering the low-lvl design of the open VDS format?
use-case: data pipeline workers process pre-defined subsets of the data, ...Dear developers,
is there a way to update a data chunk of already existing VDS? Is this even possible when considering the low-lvl design of the open VDS format?
use-case: data pipeline workers process pre-defined subsets of the data, and we want to re-upload the results to a single VDS entity. Preferably, we want to avoid, a single worker/step that collects all the chunks and then uploads it from within a single context manager, while creating output VDS on s3/DMS.
Thanks,
Filip
PS. I believe the snippet below should not result in a seg-fault, even if a VDS is immutable after creation. But it does; tested on the 2.1.9 Linux version:
```python
import openvds
import numpy as np
def create_vds(
    opts,
    shape=(100, 100, 100),
    format=openvds.VolumeDataChannelDescriptor.Format.Format_R32,
    brickSize=openvds.VolumeDataLayoutDescriptor.BrickSize.BrickSize_1024,
):
    layout_descriptor = openvds.VolumeDataLayoutDescriptor(
        brickSize,
        0,
        0,
        4,
        openvds.VolumeDataLayoutDescriptor.LODLevels.LODLevels_None,
        openvds.VolumeDataLayoutDescriptor.Options.Options_None,
    )
    axis_descriptors = [
        openvds.VolumeDataAxisDescriptor(shape[2], "Z", "m", 0.0, 2000.0),
        openvds.VolumeDataAxisDescriptor(shape[1], "Y", "m", 0.0, 2000.0),
        openvds.VolumeDataAxisDescriptor(shape[0], "X", "m", 0.0, 2000.0),
    ]
    channel_descriptors = [
        openvds.VolumeDataChannelDescriptor(
            format,
            openvds.VolumeDataChannelDescriptor.Components.Components_1,
            "Value",
            "",
            0.0,
            ((shape[2] * 3) * (shape[1] * 2) * shape[0]) - 1.0,
        )
    ]
    metadata_container = openvds.MetadataContainer()
    vds = openvds.create(
        opts[0],
        opts[1],
        layout_descriptor,
        axis_descriptors,
        channel_descriptors,
        metadata_container,
    )
    manager = openvds.getAccessManager(vds)
    accessor = manager.createVolumeDataPageAccessor(
        openvds.DimensionsND.Dimensions_012,
        0,
        0,
        8,
        openvds.IVolumeDataAccessManager.AccessMode.AccessMode_Create,
    )
    for c in range(accessor.getChunkCount()):
        page = accessor.createPage(c)
        buf = np.array(page.getWritableBuffer(), copy=False)
        (min, max) = page.getMinMax()
        buf[:, :, :] = np.array([1.0] * buf.size, dtype=float).reshape(buf.shape)
        page.release()
    accessor.commit()
    openvds.close(vds)

create_vds(("error.vds", ""))

with openvds.open("./error.vds", "") as vds:
    manager = openvds.getAccessManager(vds)
    accessor = manager.createVolumeDataPageAccessor(
        openvds.DimensionsND.Dimensions_012,
        0,
        0,
        8,
        openvds.IVolumeDataAccessManager.AccessMode.AccessMode_ReadWrite,
    )
    for c in range(accessor.getChunkCount()):
        page = accessor.createPage(c)
        buf = np.array(page.getWritableBuffer(), copy=False)
        (min, max) = page.getMinMax()
        buf[:, :, :] = np.array([1.0] * buf.size, dtype=float).reshape(buf.shape)
        page.release()
    accessor.commit()
```https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/101Can OpenVDS read old VDS files?2022-01-31T16:08:51ZPaal KvammeCan OpenVDS read old VDS files?From the README.md file:
The specification is based on, but not similar to, the existing Volume Data Store (VDS) file format
So the old VDS and the new OpenVDS are two different file formats, right? Does OpenVDS then support readin...From the README.md file:
The specification is based on, but not similar to, the existing Volume Data Store (VDS) file format
So the old VDS and the new OpenVDS are two different file formats, right? Does OpenVDS then support reading old VDS files?
Or have I misunderstood, and this is just about the "api specification" and that the underlying file format is the same?https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/102Looking for test data2022-01-12T14:40:52ZPaal KvammeLooking for test dataIs there anywhere I can find a vds file for testing that has u8 or u16 samples? I tried running SEGYImport on an 8-bit Seg-Y file but that just crashed.
I did write my own program to create such a file but it is not very useful for testing...Is there anywhere I can find a vds file for testing that has u8 or u16 samples? I tried running SEGYImport on an 8-bit Seg-Y file but that just crashed.
I did write my own program to create such a file, but it is not very useful for testing, because any mistake I made might well be in both the writer and the reader and thus cancel each other out.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/103Dimension and Produce2022-09-03T00:48:52ZPaal KvammeDimension and ProduceI have been using OpenVDS for a while to read 3d seismic data and I have accumulated several questions. My apologies if I did not read the documentation closely enough.
## What is DimensionsND and DimensionsGroup?
I suspect that Dimens...I have been using OpenVDS for a while to read 3d seismic data and I have accumulated several questions. My apologies if I did not read the documentation closely enough.
## What is DimensionsND and DimensionsGroup?
I suspect that DimensionsND is OpenVDS and DimensionsGroup is the corresponding VDS type. That doesn't help me much because I don't really understand why either is needed. It looks to me like these are enums used to extract e.g. 3d data from a 4d, 5d, or 6d cube by skipping some dimensions but not re-ordering or transposing them.
DimensionsND extracts two or three dimensions, e.g. Dimensions_01 or Dimensions_012. DimensionsGroup extracts 1 to all 6 dimensions.
But why are these needed at all when I read data? If skipping a dimension, why not set min==max instead for the "constant" indices and min=max=0 for the unused ones?
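The min==max suggestion above can be illustrated with plain NumPy indexing (purely illustrative, not OpenVDS API): a "skipped" dimension is just a degenerate range held constant.

```python
import numpy as np

# Purely illustrative: a 4D volume where the 4th... first axis is a "cube
# number" and the remaining three axes form a 3d cube.
volume = np.arange(2 * 4 * 4 * 4).reshape(2, 4, 4, 4)

def read_subcube(data, mins, maxs):
    """Read a sub-cube given inclusive min and exclusive max per dimension.

    A dimension with max == min + 1 is effectively "skipped" (held
    constant), which is the behaviour the question suggests instead of
    selecting a DimensionsND group.
    """
    slices = tuple(slice(lo, hi) for lo, hi in zip(mins, maxs))
    return data[slices]

# Extract the 3d cube at cube number 1 by degenerating the first dimension.
cube = read_subcube(volume, (1, 0, 0, 0), (2, 4, 4, 4))
```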
Can you confirm the following: if I want to read a 3d sub-cube from a 3d dataset, i.e. layout->GetDimensionality()==3, then DimensionsND can only be Dimensions_012.
How can I handle OpenVDS files having more than three dimensions that are really just multiple 3d cubes packed together, with the fourth dimension being the cube number? I suspect the answer to the previous question might shed some light on this.
## What is a multi-component file?
And how should it be handled? Is this the same as a 4d cube or is this a different feature?
## What is ProduceStatus?
Can you confirm that I can ignore this when reading full resolution data? I am guessing that "Unavailable" tells me I might get an error later, and the difference between "Normal" and "Remapped" is, I suspect, just a hint about the cost. I am also guessing that Remapped is only relevant for LOD > 0.