Open VDS issues
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/101
Can OpenVDS read old VDS files? (Paal Kvamme, 2022-01-31)

From the README.md file:

> The specification is based on, but not similar to, the existing Volume Data Store (VDS) file format

So the old VDS and the new OpenVDS are two different file formats, right? Does OpenVDS then support reading old VDS files?
Or have I misunderstood, and this is just about the "api specification" and that the underlying file format is the same?

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/102
Looking for test data (Paal Kvamme, 2022-01-12)

Is there anywhere I can find a vds file for testing that has u8 or u16 samples? I tried running SEGYImport on an 8-bit Seg-Y file but that just crashed.
I did write my own program to create such a file, but it is not very useful for testing, because any mistake I made might well be in both the writer and the reader and thus cancel out.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/103
Dimension and Produce (Paal Kvamme, 2022-09-03)

I have been using OpenVDS for a while to read 3d seismic data and I have accumulated several questions. My apologies if I did not read the documentation closely enough.
## What is DimensionsND and DimensionsGroup?
I suspect that DimensionsND is the OpenVDS name and DimensionsGroup is the corresponding VDS type. That doesn't help me much because I don't really understand why either is needed. It looks to me like these are enums used to extract e.g. 3d data from a 4d, 5d, or 6d cube by skipping some dimensions but not re-ordering or transposing them.
DimensionsND extracts two or three dimensions, e.g. Dimensions_01 or Dimensions_012. DimensionsGroup extracts 1 to all 6 dimensions.
But why are these needed at all when I read data? If skipping a dimension, why not set min==max instead for the "constant" indices and min=max=0 for the unused ones?
Can you confirm the following: if I want to read a 3d sub-cube from a 3d dataset, i.e. layout->GetDimensionality()==3, then DimensionsND can only be Dimensions_012.
How can I handle OpenVDS files having more than three dimensions that are really just multiple 3d cubes packed together, with the fourth dimension being the cube number? I suspect the answer to the previous question might shed some light on this.
## What is a multi-component file?
And how should it be handled? Is this the same as a 4d cube or is this a different feature?
## What is ProduceStatus?
Can you confirm that I can ignore this when reading full resolution data? I am guessing that "Unavailable" tells me I might get an error later, and the difference between "Normal" and "Remapped" is, I suspect, just a hint about the cost. I am also guessing that Remapped is only relevant for LOD > 0.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/104
Annotation to World coordinates (Paal Kvamme, 2021-12-15)

Class KnownMetadata allows for several ways of specifying the annotation to world coordinate conversion: simple Origin/ISpacing/XSpacing, or IJK with any 3d affine transform, or 4 control points, or XYZ unconverted.
Is there a single method in the API that an application can use to get this transform?
Failing that, are there usage rules when writing files that make it simpler to read them later? For example there might be a rule that if the 4-point style (GridPoint0 etc.) is used, the simple origin and spacing must also be provided.
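For the simple Origin/ISpacing/XSpacing style, the conversion an application ends up implementing is just a small 2d affine map. A minimal sketch under that assumption (the function name and tuple layout are hypothetical, not an OpenVDS API):

```python
def annotation_to_world(origin, ispacing, xspacing, i, j):
    """Map annotation indices (i, j) to world (x, y).

    origin, ispacing and xspacing are (x, y) pairs; the transform is the
    2d affine: world = origin + i * ispacing + j * xspacing.
    """
    return (origin[0] + i * ispacing[0] + j * xspacing[0],
            origin[1] + i * ispacing[1] + j * xspacing[1])

# Example: origin at (1000, 2000), inline axis along +x, crossline along +y.
print(annotation_to_world((1000.0, 2000.0), (25.0, 0.0), (0.0, 12.5), 2, 4))
# → (1050.0, 2050.0)
```

The 4-point (GridPoint0 etc.) and full IJK styles carry strictly more information; a reader that only understands this simple form would benefit from the usage rule suggested above.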
IJK origin and step is a superset of the simple 2d origin and step, so a seismic cube might be stored as IJK with K pointing straight down. But does that make sense? Or, put differently: should a program that only reads poststack 3d seismic need to be able to process a file with IJK coordinate information as long as K is vertical?

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/105
3D visualization demo using Python FastAPI, Three.js. Error when include lib in C++. (Trung Dang, 2022-01-05)

I've worked on a simple demo to retrieve slices data and plot 3 types of slices using Three.js, which is available [here](https://github.com/trungdang97/openvds-threejs).
![image](/uploads/2d890c3fe67ef220995bbadff3923cf8/image.png)
I don't have access to any cloud S3 (I've tried Minio Playground but it's terribly slow), so I used a VDS file instead for the demo, which is imported from the Kerry3D data. It works fine but the response time is slow (between 2-3 seconds for a slice), so the transition is not smooth.
I'm trying to create a REST service using C++ but I'm not familiar with the language. I copied these 2 folders (OpenVDS, SEGYUtils) from the OpenVDS+ 2.1.8 distribution into my include folder and proceeded to copy the C++ example. But there was an "undefined reference" issue with the variable "volumeDataAccessManagerInterface" at line 944 in OpenVDS.h.
```
g++ -std=c++11 -fdiagnostics-color=always -g W:\VDS\vds-3d-plot\cpp\api.cpp -o W:\VDS\vds-3d-plot\cpp\api.exe -I W:\VDS\vds-3d-plot\cpp\include
```

```
C:\Users\Trung\AppData\Local\Temp\ccGYBsSs.o: In function 'main':
W:/VDS/vds-3d-plot/cpp/api.cpp:22: undefined reference to '__imp__ZN7OpenVDS4OpenENS_13StringWrapperES0_RNS_5ErrorE'
C:\Users\Trung\AppData\Local\Temp\ccGYBsSs.o: In function 'GetAccessManager':
W:/VDS/vds-3d-plot/cpp/include/OpenVDS/OpenVDS.h:944: undefined reference to '__imp__ZN7OpenVDS25GetAccessManagerInterfaceEPNS_3VDSE'
collect2.exe: error: ld returned 1 exit status
```
Can someone help me with the error? I know this is not Stack Overflow, but if someone can show me a minimal working example using OpenVDS in C++, I can figure out how it works and continue doing experiments on decreasing the service response time. Thank you in advance.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/106
DMS token callback (Paal Kvamme, 2022-01-06)

I would very much like to have a token callback in DMSOpenOptions, to be used as an alternative to supplying fixed strings.
I only need this for C++, where I believe it is trivial to implement.
My application already uses the DMS for other purposes, and has proprietary code to manage tokens, including refreshing them every hour. There isn't really any way of extending that to OpenVDS without exposing SDManager::setAuthProviderCallback().
Suggested implementation in [tokencb.patch](/uploads/f6e86db89db54e8c0c6474d5980c8580/tokencb.patch)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/107
OpenVDS / OpenVDS+ compatibility : VolumeDataFormat (Michael Heck, 2022-03-18)

Some code that works with OpenVDS (2.1.11) does not compile with OpenVDS+ (2.1.9).
Platform: Windows 10 / VStudio 2017
```
error C2664: 'int64_t OpenVDS::VolumeDataAccessManager::GetVolumeSubsetBufferSize(const int (&)[6],const int (&)[6],OpenVDS::VolumeDataChannelDescriptor::Format,int,int)': cannot convert argument 3 from 'OpenVDS::VolumeDataFormat' to 'OpenVDS::VolumeDataChannelDescriptor::Format'
```
Low priority - can work around by replacing OpenVDS::VolumeDataFormat everywhere.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/108
Seismic DMS error on close aborts application (Paal Kvamme, 2022-02-16)

My application crashed after encountering an error in DMS, in spite of trying to catch exceptions.
Here is what I believe to be the sequence of events.
- OpenVDS::Close() is called by the application and does an explicit delete of this->vds.
- ~VDS destructs its unique_ptr VolumeDataStore (default deleter).
- ~VolumeDataStoreIOManager() destructs its unique_ptr m_ioManager (default deleter), concrete type is IOManagerDms
- ~IOManagerDms() calls m_dataset->close() without protecting against exceptions.
- seismicdrive::SDGenericDataset::close() finds some problem and throws.
- terminate() is called because destructors are not allowed to throw.
Wishful thinking: Is it valid to use a VolumeDataAccessManager after OpenVDS::Close() has been called? If so, would it be possible to use the AddUploadError() / GetCurrentUploadError() (and Download...) to propagate such errors?
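The sequence above is exactly why errors can only be reported to the caller from an explicit close, never from a destructor. A Python analogue of the pattern (the `Dataset` class is a made-up stand-in, not the DMS API):

```python
class Dataset:
    """Toy stand-in for a remote dataset whose close() may fail."""

    def __init__(self):
        self.closed = False

    def close(self):
        # An explicit close may raise, so the caller can catch and handle it.
        if not self.closed:
            self.closed = True
            raise RuntimeError("simulated upload error on close")

    def __del__(self):
        # A destructor must never let an exception escape; it has to swallow
        # the error, which is also why errors reported this late are lost.
        try:
            self.close()
        except Exception:
            pass

caught = False
ds = Dataset()
try:
    ds.close()  # error surfaces here, where it can still be caught
except RuntimeError:
    caught = True
print(caught)  # → True
```

In C++ the stakes are higher: a destructor that lets an exception propagate calls terminate(), which matches the crash described above.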
I was using OpenVDS from git hash 1199f2bc (probably).

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/109
Issue with two parallel processes writing to same VDS source. (Michał Murawski, 2022-02-07)
I was trying to implement a workflow of multiple parallel processes which write to separate chunks of the same VDS source. Unfortunately I encountered some issues when reading the modified VDS source.
Here is the code example that I am running:
```
from multiprocessing import Process

import openvds
import numpy as np


def unlock_dataset(sd_path):
    "Disclosed implementation"
    return


def write_zero_pages(accessor):
    chunks_count = accessor.getChunkCount()
    for c in range(chunks_count):
        page = accessor.createPage(c)
        buf = np.array(page.getWritableBuffer(), copy=False)
        buf[:, :, :] = np.zeros(buf.shape, dtype=float)
        page.release()
    accessor.commit()


def create_vds(
    path,
    connection_string,
    shape=None,
    databrick_size=openvds.VolumeDataLayoutDescriptor.BrickSize.BrickSize_128,
    access_mode=openvds.IVolumeDataAccessManager.AccessMode.AccessMode_Create,
    components=openvds.VolumeDataChannelDescriptor.Components.Components_1,
    format=openvds.VolumeDataChannelDescriptor.Format.Format_R32,
    create_and_write_pages=True,
):
    layout_descriptor = openvds.VolumeDataLayoutDescriptor(
        brickSize=databrick_size,
        lodLevels=openvds.VolumeDataLayoutDescriptor.LODLevels.LODLevels_1,
        brickSize2DMultiplier=4,
        options=openvds.VolumeDataLayoutDescriptor.Options.Options_None,
        negativeMargin=0,
        positiveMargin=0,
        fullResolutionDimension=0,
    )
    metadata_container = openvds.MetadataContainer()
    axis_descriptors = []
    for i, size in enumerate(shape):
        axis_descriptors.append(
            openvds.VolumeDataAxisDescriptor(
                size,
                f"X{i}",
                "unitless",
                -1000.0,
                1000.0,
            )
        )
    channel_descriptors = [
        openvds.VolumeDataChannelDescriptor(
            format=format,
            components=components,
            name="Channel0",
            unit="unitless",
            valueRangeMin=0.0,
            valueRangeMax=1000.0,
        )
    ]
    vds = openvds.create(
        path,
        connection_string,
        layout_descriptor,
        axis_descriptors,
        channel_descriptors,
        metadata_container,
    )
    access_manager = openvds.getAccessManager(vds)
    accessor = access_manager.createVolumeDataPageAccessor(
        dimensionsND=openvds.DimensionsND.Dimensions_012,
        accessMode=access_mode,
        lod=0,
        channel=0,
        maxPages=8,
        chunkMetadataPageSize=1024,
    )
    chunks_count = accessor.getChunkCount()
    if create_and_write_pages:
        write_zero_pages(accessor)
    openvds.close(vds)
    return chunks_count


def writing_process(path, connection_string, chunks_range, number):
    vds = openvds.open(path, connection_string)
    manager = openvds.getAccessManager(vds)
    accessor = manager.createVolumeDataPageAccessor(
        dimensionsND=openvds.DimensionsND.Dimensions_012,
        lod=0,
        channel=0,
        maxPages=8,
        accessMode=openvds.IVolumeDataAccessManager.AccessMode.AccessMode_ReadWrite,
        chunkMetadataPageSize=1024,
    )
    for c in range(chunks_range[0], chunks_range[1]):
        page = accessor.createPage(c)
        buf = np.array(page.getWritableBuffer(), copy=False)
        buf[:, :, :] = np.reshape(np.array([float(number)] * buf.size), buf.shape)
        page.release()
    accessor.commit()
    # openvds.close(vds)


def get_data(path, connection_string):
    with openvds.open(path, connection_string) as vds_source:
        layout = openvds.getLayout(vds_source)
        axis_descriptors = [
            layout.getAxisDescriptor(dim) for dim in range(layout.getDimensionality())
        ]
        begin_slice = [0, 0, 0, 0, 0, 0]
        end_slice = (
            int(axis_descriptors[0].numSamples),
            int(axis_descriptors[1].numSamples),
            int(axis_descriptors[2].numSamples),
            1,
            1,
            1,
        )
        accessManager = openvds.VolumeDataAccessManager(vds_source)
        req = accessManager.requestVolumeSubset(
            begin_slice,  # start slice
            end_slice,  # end slice
            format=openvds.VolumeDataChannelDescriptor.Format.Format_R32,
            lod=0,
            replacementNoValue=0.0,
            channel=0,
        )
        if req.data is None:
            err_code, err_msg = accessManager.getCurrentDownloadError()
            print(err_code)
            print(err_msg)
            raise RuntimeError("requestVolumeSubset failed!")
        dims = (
            end_slice[2] - begin_slice[2],
            end_slice[1] - begin_slice[1],
            end_slice[0] - begin_slice[0],
        )
        return req.data.reshape(*dims)


if __name__ == "__main__":
    path = "sd://osdu/example/dataset-4"
    connection_string = "sd_authority_url=https://example.com/api/seismic-store/v3;sd_api_key=xxx;auth_token_url=https://example.com/oauth2/token;sdtoken=SDTOKEN;client_id=CLIENTID;refresh_token=REFRESH_TOKEN;scopes=openid email;LogLevel=100"
    numer_of_processes = 2
    processes = []
    chunks_ranges = []
    chunks_count = create_vds(path, connection_string, shape=(512, 512, 512))
    a = chunks_count // numer_of_processes
    r = chunks_count % numer_of_processes
    for i in range(numer_of_processes):
        if i == numer_of_processes - 1:
            chunks_ranges.append((i * a, (i + 1) * a + r))
        else:
            chunks_ranges.append((i * a, (i + 1) * a))
    print(chunks_ranges)
    unlock_dataset(path)
    for i, chunks_range in enumerate(chunks_ranges):
        p = Process(
            target=writing_process,
            args=(
                path,
                connection_string,
                chunks_range,
                i,
            ),
        )
        processes.append(p)
        p.start()
    for p in processes:
        p.join()
    print("finished")
    vds = openvds.open(path, connection_string)
    openvds.close(vds)
    data = get_data(path, connection_string)
```
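The chunk-partitioning arithmetic in the script above (even split, remainder to the last process) can be checked in isolation; a small sketch of the same logic:

```python
def split_chunks(chunks_count, processes):
    """Partition [0, chunks_count) into per-process (begin, end) ranges,
    giving the remainder to the last process, as the script above does."""
    a = chunks_count // processes
    r = chunks_count % processes
    ranges = []
    for i in range(processes):
        end = (i + 1) * a + (r if i == processes - 1 else 0)
        ranges.append((i * a, end))
    return ranges

print(split_chunks(10, 3))  # → [(0, 3), (3, 6), (6, 10)]
```

The ranges tile the chunk index space without gaps or overlap, so the processes never write the same chunk; any corruption would therefore have to come from shared per-layer state (such as the chunk metadata pages) rather than the chunk payloads themselves.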
[code.py](/uploads/14209e46c690704b192278aecac37b1c/code.py)
Here is the output:
```
-- sdapi 3.14.0 - Fri Feb 4 13:40:40 2022 -- Write Block Dimensions_012LOD0/ChunkMetadata/0 --- 0.750 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:41 2022 -- Write Block Dimensions_012LOD0/ChunkMetadata/0 --- 0.716 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:41 2022 -- Write Block LayerStatus --- 0.730 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:41 2022 -- Write Block LayerStatus --- 0.737 s
finished
-- sdapi 3.14.0 - Fri Feb 4 13:40:43 2022 -- Open Dataset sd://osdu/mergingtests4/dataset-4 in ReadOnly mode --- 1.722 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:45 2022 -- Get Block Size --- 1.320 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:45 2022 -- Read Block VolumeDataLayout --- 0.599 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:46 2022 -- Get Block Size --- 0.590 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:46 2022 -- Read Block LayerStatus --- 0.596 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:47 2022 -- Close Dataset sd://osdu/mergingtests4/dataset-4 --- 0.412 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:48 2022 -- Open Dataset sd://osdu/mergingtests4/dataset-4 in ReadOnly mode --- 1.272 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:49 2022 -- Get Block Size --- 0.592 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:49 2022 -- Read Block VolumeDataLayout --- 0.575 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:50 2022 -- Get Block Size --- 0.643 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:50 2022 -- Read Block LayerStatus --- 0.579 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:51 2022 -- Get Block Size --- 0.582 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:52 2022 -- Read Block Dimensions_012LOD0/ChunkMetadata/0 --- 0.600 s
-1
Missing data for chunk: Dimensions_012LOD0/0
-- sdapi 3.14.0 - Fri Feb 4 13:40:52 2022 -- Close Dataset sd://osdu/mergingtests4/dataset-4 --- 0.355 s
Traceback (most recent call last):
  File "simple.py", line 208, in <module>
    data = get_data(path, connection_string)
  File "simple.py", line 159, in get_data
    raise RuntimeError("requestVolumeSubset failed!")
RuntimeError: requestVolumeSubset failed!
```
[logs_from_sd_path.log](/uploads/316c5d6bce889df37d29a44f45124f67/logs_from_sd_path.log)
I executed the code using seismic store and s3 with the same result. It looks like data are getting corrupted when multiple processes write to the same VDS source.
Can I receive some guidance on this problem?
Is my implementation bad, or is there something wrong inside OpenVDS?
I am using python lib openvds 2.2.0 and sd api 3.14.0.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/110
Typo in volumedataaccess.py (Filip Brzęk, 2022-05-04)

Python's bindings for `requestVolumeSamples` have a typo: [line 522](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/python/openvds/volumedataaccess.py#L522) should pass the `samplePositions` positional argument, not `arr`, which causes `NameError: name 'arr' is not defined`.
```Python
def requestVolumeSamples(self, samplePositions, data_out = None, dimensionsND = DimensionsND.Dimensions_012, lod = 0, channel = 0, interpolationMethod = InterpolationMethod.Cubic, replacementNoValue = None):
    """Request a set of samples from the volume. The samples are always in 32-bit floating point format.

    Parameters
    ----------
    samplePositions:
        A set of voxel coordinates to obtain sample values for.
    data_out : numpy.ndarray, optional
        If specified, the data requested is copied to this array. Otherwise, a suitable numpy array is allocated.
    dimensionsND : DimensionsND, optional
        If specified, determine the dimensiongroup requested. Defaults to Dimensions_012
    lod : int, optional
        Which LOD level to request. Defaults to 0
    channel : int, optional
        Channel index. Defaults to 0.
    interpolationMethod: InterpolationMethod, optional
        Defaults to InterpolationMethod.Cubic
    replacementNoValue: float, optional
        If specified, NoValue data in the dataset is replaced with this value.

    Returns
    -------
    request : VolumeDataRequest
        An object encapsulating the request, the request state, and the requested data.
    """
    if data_out is None:
        data_out = self.allocateVolumeSamplesBuffer(len(samplePositions), channel)
    else:
        if data_out.nbytes < self.getVolumeSamplesBufferSize(sampleCount, channel):
            raise ValueError("output array is too small to hold the requested data with format {}".format(str(VoxelFormat.Format_R32)))
    req = VolumeDataRequest(
        data_out = data_out,
        dimensionsND = dimensionsND,
        lod = lod,
        channel = channel,
        format = VoxelFormat.Format_R32,
        replacementNoValue = replacementNoValue,
        interpolationMethod = interpolationMethod)
    req.samplePositions = _ndarraypositions(arr)
    req._request = self._manager.requestVolumeSamples(
        req.data_out,
        req.dimensionsND,
        req.lod,
        req.channel,
        req.samplePositions,
        req.samplePositions.shape[0],
        req.interpolationMethod,
        req.replacementNoValue)
    return req
```
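Incidentally, this kind of typo survives import because Python only resolves names inside a function body when the function actually runs; a minimal illustration (the names here are made up):

```python
def request(samplePositions):
    # 'arr' is a typo for 'samplePositions'; defining the function still
    # succeeds, because the name is only looked up when this line executes.
    return list(arr)

failed = False
try:
    request([1, 2, 3])
except NameError:
    failed = True
print(failed)  # → True
```

This is why the bug only shows up for callers who reach that code path at runtime, and why a smoke test exercising `requestVolumeSamples` would catch it immediately.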
PS. If you want, I can create a PR for that, together with a smoke test doing some comparison against `.getValue()`.
PPS. `sampleCount` in the branch where `data_out` is passed is also undefined.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/111
Int32 not supported by SEGYExport (Gary Winfield, 2022-07-07)

I have managed to use SEGYImport to create a VDS from a SEGY in Int32 sample format, and it seems to work fine.
However, it appears that SEGYExport does not support the same format to go back to SEGY.
Is there a good reason for that? If not, how easy would it be to fix?

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/112
IBM env not working with latest ovds image (Shrikant Garg, 2022-03-11)

We tried the latest image for ovds, community.opengroup.org:5555/osdu/platform/domain-data-mgmt-services/seismic/open-vds/openvds-ingestion:2.1.12, and it is not working for IBM.
![image](/uploads/9126a91d03f71e4205e27b313c93a833/image.png)
@anujgupta FYI.

(Milestone: M10 - Release 0.13; assignees: Morten Ofstad, Jørgen Lind)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/113
SEGYImport "unexpected EOF" error when importing in minio (Pol Jeka, 2022-03-24)

`./SEGYImport --url s3://open-vds/ --url-connection "Region=local;AccessKeyId=key;SecretKey=secret;EndpointOverride=http://localhost:9000/;LogLevel=Error;" test.segy`
When importing a file into a local service [minio](https://min.io/) (or a Zenko s3 server), the following error occurs on the minio side:
```
Error: unexpected EOF (*errors.errorString)
5: cmd/fs-v1-helpers.go:322:cmd.fsCreateFile()
4: cmd/fs-v1.go:1196:cmd.(*FSObjects).putObject()
3: cmd/fs-v1.go:1112:cmd.(*FSObjects).PutObject()
2: cmd/object-handlers.go:1617:cmd.objectAPIHandlers.PutObjectHandler()
1: net/http/server.go:2042:http.HandlerFunc.ServeHTTP()
```
This error only happens if the client didn't send the entire object as specified in the content-length.
But if I use the same parameters for uploading to amazon s3, it works correctly.
Can you tell me what the problem might be?

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/114
Random crash in VDSCopy and other apps (Paal Kvamme, 2022-04-05)

I am seeing spurious hangs, segfaults, and OOM kills when decompressing a wavelet compressed VDS file. Seen in my own code, but I was able to reproduce it using VDSCopy:
```
VDSCopy Volve_Seismic_LL_Wavelet.vds /tmp/junk.vds --compression-method None
```
I am fairly sure I found at least part of the problem after some work. In VolumeDataStore.cpp, function ToDataBlock(), the field dataBlock.Components is left un-initialized and will contain a random value. Later this value contributes to the number of bytes to allocate to hold the block. Due to a couple of 32-bit overflows it can even end up smaller than actually needed - but usually it will be a lot more. The memory allocation is in VolumeDataStore::SerializeVolumeData() in the CompressionMethod::None branch.
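The 32-bit overflow mentioned above can make a size derived from a garbage component count wrap around to something tiny; a sketch of the arithmetic in Python, emulating C's signed 32-bit truncation (the garbage value is made up for illustration):

```python
def to_int32(x):
    """Truncate a Python int to a signed 32-bit value, as a C int would."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

components = 0x10000001   # uninitialized field read as garbage (made-up value)
bytes_per_voxel = 4
voxels = 4                # kept tiny so the wraparound is easy to see
alloc_size = to_int32(components * bytes_per_voxel * voxels)
print(alloc_size)         # → 16, far smaller than the true requirement
```

The true product is 0x100000010 bytes; after 32-bit truncation only the low 16 survive, matching the description of an allocation that is usually far too large but can occasionally be smaller than needed.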
I found a similar but unrelated problem in WaveletAdaptiveLLDecompress_CreateDecodeIterator(). If called with transformDataCount = zero or one the secondTransformMask will contain garbage data but will still be used later in the function. My test file has a ridiculous number of LODs (it has all 12) and I assume this is happening when the last few LODs are accessed. I have no idea how to fix it.
Tested on OpenVDS 2.3.0.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/115
Support SEGYImport from one sd-location to another with Seismic running with MinIO (Yan Sushchynski (EPAM), 2022-04-05)

Hello!
I'd like to run SEGYImport from one `sd`-location to another. Our Seismic DMS uses MinIO implementation that mostly follows AWS implementation.
As I understand, if Open VDS works with `sd` it uses `seismic-cpp-lib` to handle files, which works with GCS (`GcsAccessorStorage`) by default.
Perhaps you know which variable I need to set to start working with AwsStorage?
Image:
`community.opengroup.org:5555/osdu/platform/domain-data-mgmt-services/seismic/open-vds/openvds-ingestion:2.3.2`
Error:
```
https://storage.googleapis.com/storage/v1/b/test-seismic-store$$39p8c84ic5hvyx5u/o/
```
Thanks!

(Assignees: Morten Ofstad, Jørgen Lind)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/116
ZLib version change (Paal Kvamme, 2022-04-04)

OpenZGY Version 2.3.2 no longer builds due to a 404 on http://zlib.net/zlib-1.2.11.tar.gz
The problem is that Zlib is hard coded at version 1.2.11 and downloaded as a source tarball. Which makes a lot of sense. The ZLib maintainers fixed a few bugs and bumped the version to 1.2.12. And then promptly removed 1.2.11 from the servers to make sure everybody uses the latest version. Thanks, guys.
From my point of view the issue is not critical since I can just patch the source before building.
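A checksum-pinned download makes this failure mode explicit: if the upstream tarball changes or disappears, the build fails with a clear message instead of silently compiling different code. A sketch of the verification step (the digest here is computed over dummy bytes, not the real zlib tarball):

```python
import hashlib

def verify_tarball(data, expected_sha256):
    """Raise if the downloaded bytes do not match the pinned digest."""
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(f"checksum mismatch: got {actual}")
    return True

# Stand-in for downloaded tarball bytes; a real build would pin the
# published digest of the exact zlib release it was tested against.
data = b"dummy tarball contents"
pinned = hashlib.sha256(data).hexdigest()
print(verify_tarball(data, pinned))  # → True
```

With a pinned digest, a silent upstream re-release is caught at download time; the 404 case is then just another explicit, diagnosable build failure rather than a surprise.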
FYI, I don't normally need to rebuild OpenVDS unless the version changes, but I was working on my build scripts.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/117
Could you please tell me how to convert SEG-Y files to vds using SEGYImport in openVDS? (Eddie Tu, 2022-05-03)

This is my first time with VDS, can you please give me an example of how to convert SEG-Y files to vds using SEGYImport? I would be very grateful!

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/118
Is there a way to use the default AWS credentials in Python API? (Alex Gass, 2022-05-03)

Hello,
Could you please clarify if there is an ability to use the default AWS credentials to access S3 resources in OpenVDS Python API?
At the moment, if I'm not specifying the connection string with parameters (region, access, and secret keys), I'm getting `Access Denied` or `error while sending http request` errors (currently using Windows).
So I wonder if it's possible to grab credentials/config from the default local creds stored in `$USERHOME/.aws`?
Thanks!

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/119
SEGYImport make option to remove the default location of headerfields (Jørgen Lind, 2022-08-26)

The user has to specify their own headerfield file defining all headerfield offsets, giving an error on undefined locations.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/120
Can't create VDS handle to read from S3 if remote VDS compressed with WaveletNormalizeBlock method (Alex T, 2022-06-03)

Hello!
I ran into a problem trying to open remote VDS on S3 compressed using `WaveletNormalizeBlock` method.
**The problem** description: I receive an "illegal compression method" error trying to read from that VDS. It seems impossible to read from a `WaveletNormalizeBlock`-compressed VDS using VDSInfo or any other OpenVDS tool (e.g. the slicedump sample), but I thought it was OK to use a wavelet-compressed VDS just for reading.
Moreover, such a VDS can't be read even using the OpenVDS+ library tools!
Surprisingly, if I use local file-based VDS with `WaveletNormalizeBlock` compression, all works perfectly.
If it's important, I have uploaded such a VDS to S3 using OpenVDS+ utils (SEGYImport/VDSCopy).
As I can see in OpenVDS sources, the reason is that ParseVDSJson.cpp module serializes `CompressionMethod::WaveletNormalizeBlock` enum to string as `"WaveletNormalizeBlock"` inside `std::string ToString(CompressionMethod compressionMethod)` function, but deserialize function `CompressionMethod CompressionMethodFromJson(Json::Value const &jsonCompressionMethod)` has the following code:
```
static CompressionMethod CompressionMethodFromJson(Json::Value const &jsonCompressionMethod)
{
  std::string compressionMethodString = jsonCompressionMethod.asString();
  ...
  else if(compressionMethodString == "WaveletNormalizeBlockExperimental") // NOTE THIS
  {
    return CompressionMethod::WaveletNormalizeBlock;
  }
  ...
  else
  {
    throw Json::Exception("Illegal compression method");
  }
}
```
So the strings don't match (`"WaveletNormalizeBlock"` != `"WaveletNormalizeBlockExperimental"`) and we run into the exception.
**My question** is: isn't this a bug/typo in the source code, and can it be fixed? (for the OpenVDS+ release too, if I may ask)
Or if this is intended behavior, can you please reveal the reason behind this logic?https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/121For GCP OSDU environments the name of the DAG that performs the conversion fr...2022-06-13T20:13:48ZKamlesh TodaiFor GCP OSDU environments the name of the DAG that performs the conversion from SEGY to OpenVDS are not consistent.In the Platform Validation (GCP Dev2) environment the name of the DAG is "Segy_to_vds_conversion_sdms" and in the pre-ship environment name of DAG is "openvds_import"
Also, the document that describes how to trigger the workflow needs updating.
At present it says to trigger
```
curl --location --request POST 'https://<base_url>/api/workflow/v1/workflow/openvds_import/workflowRun' \
--header 'Content-Type: application/json' \
--header 'data-partition-id: opendes' \
--header 'Authorization: <Bearer Token>' \
--data-raw '{
"executionContext": {
"url_connection":"Region=us-east-1;AccessKeyId=XXX;SecretKey=XXX;SessionToken=XXX",
"input_connection":"Region=us-east-1;AccessKeyId=XXX;SecretKey=XXX;SessionToken=XXX",
"segy_file":"s3://aws-osdu-sample-data/sample-data/seismic/st0202/stacks/ST0202R08_PS_PSDM_RAW_PP_TIME.MIG_RAW.POST_STACK.3D.JS-017534.segy",
"url":"s3://aws-osdu-sample-data/"
}
}
```
But now, if one is going to create a File record as well as bingrid, seismictrace, and work-product-component records, then
do url_connection and input_connection still make sense?M12 - Release 0.15Yan Sushchynski (EPAM)Dzmitry Malkevich (EPAM)Yan Sushchynski (EPAM)https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/122Unhandled exception from VolumeDataStoreIOManager::ReadChunkImpl()2022-08-09T09:57:58ZAlex TUnhandled exception from VolumeDataStoreIOManager::ReadChunkImpl()Hello!
I'm running a GUI QT application and use openvds+.
When I try to request some data from VDS, it throws an unhandled exception.
I was able to capture its stack trace and investigate the cause by linking against openvds built from sources.
The stack is as follows:
```
[External Code]
openvdsd.dll!fmt::v7::detail::do_throw(const fmt::v7::system_error & x) Line 108 C++
openvdsd.dll!fmt::v7::detail::fwrite_fully(const void * ptr, unsigned __int64 size, unsigned __int64 count, _iobuf * stream) Line 166 C++
openvdsd.dll!fmt::v7::detail::vprint_mojibake(_iobuf * f, fmt::v7::basic_string_view format_str, fmt::v7::format_args args) Line 2791 C++
openvdsd.dll!fmt::v7::print(_iobuf * f, const char[92] & format_str) Line 2096 C++
openvdsd.dll!OpenVDS::VolumeDataStoreIOManager::ReadChunkImpl(const OpenVDS::VolumeDataChunk & chunk, int adaptiveLevel, std::vector> & serializedData, std::vector> & metadata, OpenVDS::CompressionInfo & compressionInfo, OpenVDS::Error & error) Line 503 C++
openvdsd.dll!OpenVDS::VolumeDataStore::ReadChunk(const OpenVDS::VolumeDataChunk & chunk, int adaptiveLevel, std::vector> & serializedData, std::vector> & metadata, OpenVDS::CompressionInfo & compressionInfo, OpenVDS::Error & error) Line 306 C++
openvdsd.dll!OpenVDS::VolumeDataPageAccessorImpl::ReadPreparedPaged(OpenVDS::VolumeDataPage * page) Line 362 C++
openvdsd.dll!OpenVDS::ProcessPageInJob(OpenVDS::Job * job, int pageIndex, OpenVDS::VolumeDataPageAccessorImpl * pageAccessor, std::function processor) Line 2425 C++
openvdsd.dll!OpenVDS::VolumeDataRequestProcessor::AddJob::__l32::() Line 2562 C++
[External Code]
openvdsd.dll!ThreadPool::Enqueue::__l3::() Line 98 C++
[External Code]
openvdsd.dll!ThreadPool::{ctor}::__l3::() Line 72 C++
[External Code]
```
The reason is in the following code:
```
bool VolumeDataStoreIOManager::ReadChunkImpl(const VolumeDataChunk &chunk, int adaptiveLevel, std::vector &serializedData, std::vector &metadata, CompressionInfo &compressionInfo, Error &error)
{
  ...
  fmt::print(stderr, "Dataset has missing metadata tags, degraded data verification, reverting to metadata pages\n");
```
Execution of `fmt::print` ends up in the method `fmt::v7::detail::fwrite_fully`:
```
// A wrapper around fwrite that throws on error.
inline void fwrite_fully(const void* ptr, size_t size, size_t count,
                         FILE* stream) {
  size_t written = std::fwrite(ptr, size, count, stream);
  if (written < count) FMT_THROW(system_error(errno, "cannot write to file"));
}
```
and here we can't write to the stream (`stderr`) from a GUI app, so we get `written==0` and this code throws a `system_error` exception that no one handles.
That is surely an important warning, but I suppose it shouldn't lead to unhandled exceptions.
I have seen the possibility to disable warnings with environment variables:
```
m_warnedAboutMissingMetadataTag(getBooleanEnvironmentVariable("OPENVDS_DISABLE_WARNINGS"))
```
but this can't be done programmatically on Windows, since the environment is fixed by the OS at process start and the application cannot change its own environment.
I would suggest not using `fmt::print` to print to `stderr`, and using `fmt::format` with `fprintf(stderr, ...)` instead.
PS: On the same VDS dataset, console utils like slicedump work OK (as expected, though printing warning message).https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/123Segmentation fault on VDSCopy2023-01-25T12:13:20ZMichał MurawskiSegmentation fault on VDSCopyI was trying to copy data from the local directory to the seismic store using the SD protocol. I was using OpenVDS in version 2.3.7
I executed the following command:
```
VDSCopy ./data.vds sd://osdu/testproject1/test-27 -d "SdAuthorityUrl=https://****/seismic-store/v3;SdApiKey=xxx;AuthTokenUrl=https://*****/token.oauth2;SdToken=***;ClientId=***;ClientSecret=***;RefreshToken=***;Scopes=offline_access;LogLevel=Trace" --tolerance 1 --compression-method None
```
In the end, when the counter reached 100%, it resulted in the following error:
```
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
Segmentation fault
```
The same command works fine for 2.2.0https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/124Error building the package as a static library2022-09-03T00:51:23ZZhao ZhangError building the package as a static libraryCan the package be built as a static library?
I tried by changing the option BUILD_SHARED_LIBS to OFF in the main CMakeList.txt, then I encountered the following error:
```
CMake Error: install(EXPORT "openvds-export" ...) includes target "openvds" which requires target "openvds_objects" that is not in any export set.
CMake Error in src/OpenVDS/CMakeLists.txt:
export called with target "openvds" which requires target "openvds_objects"
that is not in any export set.
-- Generating done
CMake Generate step failed. Build files cannot be regenerated correctly.
```
Any insights are appreciated!https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/125"OpenVDS Works IBM Platform Validation SSDMS_to_SSDMS conversion CI/CD" is fa...2024-02-26T20:14:34ZAnkit Goyal"OpenVDS Works IBM Platform Validation SSDMS_to_SSDMS conversion CI/CD" is failing at Segy to VDS conversion "Check the triggered OpenVDS workflow status"AIRFLOW_CTX_DAG_RUN_ID=2d859565-e230-40f1-824e-84294e90ef94
```
[2022-06-15 13:17:49,173] {kubernetes_pod.py:365} INFO - creating pod with labels {'dag_id': 'openvds_import', 'task_id': 'segy_to_vds_ssdms_conversion', 'execution_date': '2022-06-15T131744.4632090000-bb7a04682', 'try_number': '1'} and launcher <airflow.providers.cncf.kubernetes.utils.pod_launcher.PodLauncher object at 0x7f0ed4a2ad60>
[2022-06-15 13:17:49,300] {pod_launcher.py:198} INFO - Event: segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b had an event of type Pending
[2022-06-15 13:17:49,300] {pod_launcher.py:128} WARNING - Pod not yet started: segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b
[2022-06-15 13:17:50,322] {pod_launcher.py:198} INFO - Event: segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b had an event of type Pending
[2022-06-15 13:17:50,322] {pod_launcher.py:128} WARNING - Pod not yet started: segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b
[2022-06-15 13:17:51,341] {pod_launcher.py:198} INFO - Event: segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b had an event of type Pending
[2022-06-15 13:17:51,341] {pod_launcher.py:128} WARNING - Pod not yet started: segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b
[2022-06-15 13:17:52,358] {pod_launcher.py:198} INFO - Event: segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b had an event of type Pending
[2022-06-15 13:17:52,358] {pod_launcher.py:128} WARNING - Pod not yet started: segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b
[2022-06-15 13:17:53,372] {pod_launcher.py:198} INFO - Event: segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b had an event of type Failed
[2022-06-15 13:17:53,373] {pod_launcher.py:308} ERROR - Event with job id segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b Failed
[2022-06-15 13:17:53,386] {pod_launcher.py:198} INFO - Event: segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b had an event of type Failed
[2022-06-15 13:17:53,386] {pod_launcher.py:308} ERROR - Event with job id segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b Failed
[2022-06-15 13:17:53,402] {pod_launcher.py:198} INFO - Event: segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b had an event of type Failed
[2022-06-15 13:17:53,402] {pod_launcher.py:308} ERROR - Event with job id segy-vds-conversion.4b797bbebaa84dd8b718869a9505e28b Failed
[2022-06-15 13:17:53,468] {taskinstance.py:1501} ERROR - Task failed with exception
```
Following that logic, I changed the following lines in SEGYImport:
```
OpenVDS::VolumeDataLayoutDescriptor::LODLevels lodLevels = OpenVDS::VolumeDataLayoutDescriptor::LODLevels_1; // 1-2-3-12 it doesn't matter
// Iterate over new possible lod
for (int lod{0}; lod <= layoutDescriptor.GetLODLevels(); ++lod)
{
auto amplitudeAccessor = accessManager.CreateVolumeDataPageAccessor(writeDimensionGroup, lod, 0, 8, OpenVDS::VolumeDataAccessManager::AccessMode_Create);
auto traceFlagAccessor = accessManager.CreateVolumeDataPageAccessor(writeDimensionGroup, lod, 1, 8, OpenVDS::VolumeDataAccessManager::AccessMode_Create);
auto segyTraceHeaderAccessor = accessManager.CreateVolumeDataPageAccessor(writeDimensionGroup, lod, 2, 8, OpenVDS::VolumeDataAccessManager::AccessMode_Create);
auto offsetAccessor = fileInfo.HasGatherOffset() ? accessManager.CreateVolumeDataPageAccessor(writeDimensionGroup, lod, offsetChannelIndex, 8, OpenVDS::VolumeDataAccessManager::AccessMode_Create) : nullptr;
auto azimuthAccessor = isAzimuth ? accessManager.CreateVolumeDataPageAccessor(writeDimensionGroup, lod, azimuthChannelIndex, 8, OpenVDS::VolumeDataAccessManager::AccessMode_Create) : nullptr;
auto muteAccessor = isMutes ? accessManager.CreateVolumeDataPageAccessor(writeDimensionGroup, lod, muteChannelIndex, 8, OpenVDS::VolumeDataAccessManager::AccessMode_Create) : nullptr;
...
}
```
For `LOD = 0`:
```
- segyTraceHeaderPitch 0x0000006e8751c178 {1, 240, 15360, 0, 0, 0} int[6]
[0] 1 int
[1] 240 int
[2] 15360 int
[3] 0 int
[4] 0 int
[5] 0 int
```
```
assert(!segyTraceHeaderBuffer || segyTraceHeaderPitch[fileInfo.IsOffsetSorted() ? 2 : 1] == SEGY::TraceHeaderSize);
```
`TraceHeaderSize == segyTraceHeaderPitch[1] == 240 - OK`
Next, use `LOD = 1`:
```
- segyTraceHeaderPitch 0x0000006e8751c178 {1, 120, 7680, 0, 0, 0} int[6]
[0] 1 int
[1] 120 int
[2] 7680 int
[3] 0 int
[4] 0 int
[5] 0 int
```
The assert fires on that line because `segyTraceHeaderPitch` is divided by a factor of 2?
Can you add a small code snippet showing how to use LOD with VDS? Is it really a working feature?https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/127The call to VolumeDataPageAccessor::CopyPage never returns if internet connec...2022-08-08T09:23:50ZAlex TThe call to VolumeDataPageAccessor::CopyPage never returns if internet connection lostHello!
I'm testing the following scenario:
- (0) make sure Internet connection is available
- (1) start vdscopy-based code which copies vds from local source to online destination
- (2) turn off the Internet for a few minutes
- (3) (optional) turn on the Internet
Online destination is S3 cloud with default timeouts.
The result I get is that the non-blocking (according to the documentation) call `VolumeDataAccessManager::CopyPage()` blocks forever; internally it waits on `std::future::get` in the following lines of `VolumeDataAccessManagerImpl::AddCopyPageJob`:
```
if (m_copyJobs[m_copyJobIndex].size() == m_requestProcessor->GetThreadPool().ThreadCount())
{
  auto& otherjobs = m_copyJobs[!m_copyJobIndex];
  for (auto& job : otherjobs)
  {
    auto error = job.second.get(); // <<<< Hangs here
```
My call stack looks like this:
```
[External Code]
openvdsd.dll!OpenVDS::VolumeDataAccessManagerImpl::AddCopyPageJob(OpenVDS::VolumeDataChunk & chunk, OpenVDS::VolumeDataPageAccessorImpl & destination, OpenVDS::VolumeDataPageAccessorImpl & source) Line 710 C++
openvdsd.dll!OpenVDS::VolumeDataPageAccessorImpl::CopyPage(__int64 chunkIndex, OpenVDS::VolumeDataPageAccessor & source) Line 492 C++
MyApp.exe!`anonymous namespace'::process(seismic::IVDSCopyCallback & callback, const `anonymous-namespace'::VDSCopyInternalParams & params) Line 296 C++
MyApp.exe!seismic::VDSCopy::upload(seismic::IVDSCopyCallback & callback, const seismic::VDSCopyParams & importParams) Line 342 C++
[External Code]
```
No matter whether step 3 is done or not, it hangs there and execution is not restored, even if the internet becomes available after a few minutes of being offline.
But even if the internet never comes back, I suppose the code should just return after some reasonable timeout, storing errors in `VolumeDataAccessManagerImpl::m_uploadErrors` rather than blocking forever.
This behavior creates serious problems with exiting the application after the connection is lost, for example.
PS. in case of very short periods of being offline (e.g. few seconds, in my tests) call to `CopyPage()` is able to recover and continue execution.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/128Any plans for a .NET API?2022-07-21T09:12:27ZRobert SchmidtAny plans for a .NET API?I've seen mention of .NET support for Bluware VDS, which I assume means the commercial offering.
Is there any .NET (specifically .NET 6) support available for OpenVDS? We're happy to test out early versions.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/129Improve performance for sequential read from big volume2022-09-08T07:36:35ZAnatoly YanchevskyImprove performance for sequential read from big volumeDuring tests OpenVDS with big volume (like 56Gb) we found serious performance degradations (up to 3x).
Profiling in Intel VTune shows the problem is in VolumeDataPageAccessorImpl::m_pages.
Changing the container type from std::list to std::unordered_map helped us resolve this:
```
std::unordered_map<int64_t, VolumeDataPageImpl*> m_pages;
```
Where key is chunk index.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/130AWS: segy to oVDS Job is not executing in Platform Validation, showing only S...2023-04-03T08:37:17ZAnkit GoyalAWS: segy to oVDS Job is not executing in Platform Validation, showing only Submitted status, not updating in Airflow DAG.@fhoueto.amz , please find the job details below.
```
{
  "workflowId": "osdu:Segy_vds_conversion_test_999515179012",
  "runId": "59c4d924-8f1d-427f-9108-0bdb7aa63c36",
  "startTimeStamp": 1657302545031,
  "status": "submitted",
  "submittedBy": "admin@testing.com"
}
```
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/131Can't remove file segy file after vds conversion2022-08-09T09:56:47ZArtem ВCan't remove file segy file after vds conversionCase:
1. Use original code from openvds SEGYImport
2. Convert any segy to vds - it OK, vds file successfully created.
3. Do not close original app - and try to remove input segy file.
4. Fail - user can't remove input segy while app is open.
For example:
![image](/uploads/25b2e7105fdfca86afbb59d0776c2a75/image.png)
```
traceDataManagers[fileIndex].addDataRequests(chunkInfo.secondaryKeyStart, chunkInfo.secondaryKeyStop, lower, upper);
->
m_dataViewManager->addDataRequests(requests);
->
prefetchUntilMemoryLimit();
->
auto dataView = std::make_shared<DataView>(m_dataProvider, req.offset, req.size, true, m_error);
```
SEGYImport uses `win32api` file mapping, and it seems the mapped pointer is not released.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/132openvds+ linked sdapi version vs release tag2023-01-11T21:09:07ZFilip Brzękopenvds+ linked sdapi version vs release tagI would like to understand, what versions of seismic-store/sdapi are used to build openvds+ binaries, and if possible the reasoning behind it.
From quick glance at `CMake/Fetch3rdPartyInBuild.cmake` across 2.X openvds versions/tags:
- 2.0.X - dms - 1e933303
- 2.1.X - dms - 98d59b27b5
- 2.2.X - dms - 98d59b27b5
- 2.3.X - dms - 3633f2030
- 2.4.X - dms - 3633f2030
- `release/0.14` - dms - 98d59b27b5
- `release/0.15` - dms - 3633f2030
whereas sdapi release tags, have the following sha commits:
- `release/0.14` - d96f1e9b9806486e523ac4d9ea74a124af7ee68d
- `release/0.15` - 04d68a061c3311c041d0ace4c222880032172065
it seems openVDS version tagged as `release/0.14` is missing 35 commits (`git rev-list d96f1e9b9 ^98d59b27b5 --pretty=oneline | wc -l`), from the same release tag `release/0.14` on sdapi; and openVDS version tagged as `release/0.15` is missing 5 commits on the corresponding `release/0.15` (`git rev-list 04d68a061c3311c041d0ace4c222880032172065 ^3633f2030 --pretty=oneline`).
I understand that those might not be "feature commits" irrelevant for the functionalities, but can we have some clarity on which seismic-store/sdapi version is supported in a given openVDS release?
Lastly, what is the desired flow of reporting issues emerging at sdapi level, when using openVDS SDK with a given OSDU DP release? Should the tickets be created for openVDS, or directly in [sd-api repository](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-cpp-lib) with reference to openVDS version used?https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/133When I am doing Vdscopy from local windows machine using python script using ...2022-08-25T04:10:53Zsangamesh hooliWhen I am doing Vdscopy from local windows machine using python script using sub process getting'[Dimensions_012LOD0/50: sdapi 3.14.0 - : Encountered network error when sending http request]\nVolumeDataAccessManager destructor: there where upload errors\n''[Dimensions_012LOD0/50: sdapi 3.14.0 - : Encountered network error when sending http request]\nVolumeDataAccessManager destructor: there where upload errors\n'https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/134Read/write speed in 'tests/python/basic_tests/createtest.py'2022-08-08T16:44:06ZAli VaziriRead/write speed in 'tests/python/basic_tests/createtest.py'Hi,
I used the test in the script 'tests/python/basic_tests/createtest.py' to measure the read/write speed (lines 44 and 47). I compared the results (in seconds) with [OpenZGY](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-zgy) for three cubes with 1024, 2048, 3072 points/direction and float32 data:
| time (s) | 1024 | 2048 | 3072 |
|---|---|---|---|
| ZGY (write) | 10 | 81 | 344 |
| VDS (write) | 19 | 105 | 686 |
| ZGY (read) | 4 | 32 | 151 |
| VDS (read) | 1 | 11 | 331 |
I see that writing is slower than OpenZGY (which is sequential; if there's any interest, I can provide the small script I used for timing OpenZGY).
Is the 'createtest.py' the most efficient way for reading/writing without compression? Thank you in advance!
Sincerely,\
Ali
P.S. I used 'BrickSize_512' as greater sizes are not allowed due to 0x7FFFFFFF number hardcoded in 'DataBlock.cpp'.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/135Refresh Token Flow failing with SDapi 3.162023-01-24T13:01:46ZFilip BrzękRefresh Token Flow failing with SDapi 3.16Hi,
during our testing, we've observed that the same connection string to the SD store works with versions of openvds+ linked to sdapi 3.14, but does not work with sdapi 3.16 (the default for ~2.3.0 and ~2.4.0 openvds+ versions). The traceback is somewhat cryptic but related to the response type.
```
'sd_authority_url=<<REDACTED>>/api/seismic-store/v3;sd_api_key=xxx;auth_token_url=<<REDACTED>>/token.oauth2;sdtoken=<<REDACTED>>;client_id=<<REDACTED>>;client_secret=<<REDACTED>>;refresh_token=<<REDACTED>>;scopes=offline_access'
ERROR:<<REDACTED>>:sdapi 3.16.0 - CallbackAuthProvider::getServiceAuthTokenImpl: Failed converting text to json format
text:
null
Traceback (most recent call last):
File "<<REDACTED>>", line 211, in _grab_vds_base_volume
return openvds.open(input, options)
RuntimeError: sdapi 3.16.0 - CallbackAuthProvider::getServiceAuthTokenImpl: Failed converting text to json format
text:
null
```
As said, the exact same connection string works with sdapi 3.14, so something must have broken between 3.14 and 3.16, but I'm not skilled enough to dissect it further.
Seismic store details:
* OSDU M11 release,
* AWS flavor,
* Custom Identity Provider Oauth2.0 compatible (not the default cognito-idp)
Let me know what the desired next steps are; both platform access and the accessed data are restricted, so I don't think I can provide more details in public.
Best,
Filiphttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/136When I execute VdsInfo on sd://<>//<>/<> using python sub process, getting t...2022-09-05T14:48:30Zsangamesh hooliWhen I execute VdsInfo on sd://<>//<>/<> using python sub process, getting truncated response backhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/137SEGYImport does not populate WKT metadata when importing 2D SEGY2022-08-25T09:51:55ZMarius Storm-Olsenmarius@bluware.comSEGYImport does not populate WKT metadata when importing 2D SEGYSEGYImport allows the user to specify a CRS WKT, which is then set in the VDS metadata. When importing 2D SEGY this metadata is not set.
**DATASET**:
Any 2D SEGY.
**REPRO STEPS**:
Import the data while specifying a CRS WKT.
**WORKAROUND**:
noneJørgen Lindjorgen.lind@3lc.aiJørgen Lindjorgen.lind@3lc.aihttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/138VDSError: [Chunk: 0, Channel: 1 LOD: 0: Trying to read from a layer that has ...2022-08-25T04:10:22Zsangamesh hooliVDSError: [Chunk: 0, Channel: 1 LOD: 0: Trying to read from a layer that has not been added] VolumeDataAccessManager destructor: there where upload errorsWhen I do VDS copy from local machine to sd://<>/<>/<> getting below error
VDSError: [Chunk: 0, Channel: 1 LOD: 0: Trying to read from a layer that has not been added] VolumeDataAccessManager destructor: there where upload errorshttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/139Build Open VDS with the new version of seismic-cpp-lib2022-09-08T12:14:19ZYan Sushchynski (EPAM)Build Open VDS with the new version of seismic-cpp-libCould I ask you to build Open VDS convertor with using the latest sdapi version with Anthos support?
SDAPI Anthos MR:
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-cpp-lib/-/merge_requests/147M14 - Release 0.17Morten OfstadJørgen Lindjorgen.lind@3lc.aiMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/140I have tried to do VDS copy of file around 100GB got below error2022-08-25T09:50:50Zsangamesh hooliI have tried to do VDS copy of file around 100GB got below errorvds.VDSError: [Chunk: 187, Channel: 0 LOD: 0: Read error: ReadSync::GetOverlappedResult: An unexpected network error occurred.
]
VolumeDataAccessManager destructor: there where upload errors
As per my understanding, a network error should not stop a VDS copy.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/141SEGYImport ignores scaling factor for integral samples2024-02-26T20:14:01ZPaal KvammeSEGYImport ignores scaling factor for integral samplesSEGYImport doesn't seem to handle offset/scale when reading Seg-Y files with integral samples. Actually, Seg-Y doesn't allow specifying offset. But it can specify scale. The scale is found in the TRWF field, bytes 169-170, in the trace header. Assuming I understand the spec correctly.
If the TRWF is the same for all traces then the scale factor (2^-TRWF) could easily be stored in the VDS metadata. So, this is arguably a bug. See createChannelDescriptors() in SEGYImport.cpp.
Varying TRWF is trickier and would probably require the samples to be converted to float with each trace being scaled individually. Note that I have not seen such files in the wild. So, this second issue might be of academic interest only.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/142Assert when trying to make slice on connection lost2022-08-30T08:30:45ZArtem ВAssert when trying to make slice on connection lostWe hit an assertion in OpenVDS in a basic scenario with remote (S3) requests plus a lost internet connection:
P1. Open handle to remote S3 and make N requests:
```
std::vector<std::array<float, OpenVDS::Dimensionality_Max>> samplesPositions;
// Generate required points for small area
// And fetch via RequestVolumeSamples
auto vdsRequest(accessManager.RequestVolumeSamples(
OpenVDS::Dimensions_012,
0,
channelIndex,
reinterpret_cast<const float(*)[OpenVDS::Dimensionality_Max]>(samplesPositions.data()),
static_cast<int>(samplesPositions.size()),
OpenVDS::InterpolationMethod::Linear));
```
P2. Wrap P1 as std::future list:
```
std::vector<std::future<std::shared_ptr<VolumeDataRequestFloat>>> requests;
for (std::size_t t(0); t < 100; t++)
{
requests.emplace_back(std::async(makeRequest, t));
}
```
P3. Disconnect ethernet cable or turn off wifi connection
```
Expression: m_pendingDownloadRequests.find(chunk) == m_pendingDownloadRequests.end()
For information on how your program can cause an assertion
failure, see the Visual C++ documentation on asserts
(Press Retry to debug the application - JIT must be enabled)
---------------------------
Abort Retry Ignore
---------------------------
```
![image](/uploads/2b8169f2b0246eeadba6daaa02b6443d/image.png)
It looks like the library should check for the chunk's existence before access:
```
bool VolumeDataStoreIOManager::PrepareReadChunkImpl(const VolumeDataChunk &chunk, int adaptiveLevel, Error &error)
{
...
// HERE
if (m_pendingDownloadRequests.find(chunk) == m_pendingDownloadRequests.end())
  return false;
// Continue
std::string url = CreateUrlForChunk(layerName, chunk.index);
auto transferHandler = std::make_shared<ReadChunkTransfer>(compressionInfo, (metadataManager != nullptr) ? parsedMetadata.CreateChunkMetadata() : std::vector<uint8_t>());
if (isConstantValue)
{
m_pendingDownloadRequests[chunk] = PendingDownloadRequest(transferHandler);
}
else
{
m_pendingDownloadRequests[chunk] = PendingDownloadRequest(m_ioManager->ReadObject(url, transferHandler, ioRange), transferHandler);
}
return true;
}
```
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/143 No README.md for HueBDS (2022-09-13, Morten Ofstad)

There is no README.md for the HueBDS tool and it doesn't get an entry in the documentation.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/144 Build OpenVDS+ with the latest msvc 2022 (2022-09-26, Artem В)

Can you provide information about the MSVC compiler version used for the OpenVDS+ (2.4.3) build?
`msvc_140` - is that the actual MSVC toolset?
Do you have any plans to migrate to the newer MSVC (2022) and update the DLLs?

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/145 Wrong error message in OpenVDSInterfaceImpl::Close (2022-09-26, Anatoly Yanchevsky)

In the function `OpenVDSInterfaceImpl::Close`:
`throw InvalidOperation(fmt::format("Flush in Close failed: {}", error.string).c_str());`
needs to be changed to:
`throw InvalidOperation(fmt::format("Flush in Close failed: {}", flashError.string).c_str());`

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/146 SEGYImport includes sas-token in VolumeDataLayout (2022-10-17, Erlend Hårstad)

Hi,
If invoking SEGYImport (v2.3.3) with a blob SAS URL, the SAS token is included as part of the metadata in the VolumeDataLayout. E.g.:
```
SEGYImport --url "azureSAS://myaccount.blob.core.windows.net/container" --url-connection "Suffix=?$SAS" --compression-method=None "https://myaccount.blob.core.windows.net/container/path/mysegy.segy?sp=r&st=2022-09-30T09%3A32%3A03Z&se=2022-09-30T17%3A32%3A03Z&spr=https&sv=2021-06-08&sr=b&sig=dy%2FVXSzS15yjFgMuk0rSYzh2eIXRdjHf0MyaxxLRgI%3D"
```
produces the following entries in the VolumeDataLayout
```
{
"category" : "ImportInformation",
"name" : "DisplayName",
"type" : "String",
"value" : "mysegy.segy?sp=r&st=2022-09-30T09%3A32%3A03Z&se=2022-09-30T17%3A32%3A03Z&spr=https&sv=2021-06-08&sr=b&sig=dy%2FVXSzS15yjFgMuk0rSYzh2eIXRdjHf0MyaxxLRgI%3D"
},
{
"category" : "ImportInformation",
"name" : "InputFileName",
"type" : "String",
"value" : "mysegy.segy?sp=r&st=2022-09-30T09%3A32%3A03Z&se=2022-09-30T17%3A32%3A03Z&spr=https&sv=2021-06-08&sr=b&sig=dy%2FVXSzS15yjFgMuk0rSYzh2eIXRdjHf0MyaxxLRgI%3D"
},
```
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/147 Nearest interpolation does not choose the numerically nearest voxel (2023-10-19, Erlend Hårstad)
Hi,
Why does VDS' nearest interpolation interpolate in the range `[0, 1)` around voxels, and not `[-0.5, 0.5)`? I find this a bit surprising. If I ask for voxel 1.99 in some dimension, I would expect to get voxel 2, not 1, as 2 is numerically "nearer" to 1.99 than 1 is.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/148 Open local vds file with 260+ symbols in path (2022-10-17, Artem В)

There is a problem opening a VDS file whose path is longer than 260 characters.
For example:
`D:\Documents\Documents\1234567890 168e79156-f70c-41a9-9b84-89f5fb9402e9 8e79156-f70c-41a9-9b84-89f5fb9402e9 8e79156-f70c-41a9-9b84-89f5fb9402e9 8e79156-f70c-41a9-9b84-89f5fb9402e9 8e79156-\Test_Lateral.StarSteer.data\sc\f1c720af-8e57-4137-874b-11f96e47bda7.vds`
can't be opened. Error:
```
{code=3 string="File::Open: The system cannot find the path specified.\r\n" }
```
Yes, the 260-character path limit has already been lifted on my PC :smile:
```
bool File::Open(const std::string& filename, bool isCreate, bool isDestroyExisting, bool isWriteAccess, Error& error)
{
  ...
  std::wstring native_name;
  s2ws(_cFileName, native_name);
_pxPlatformHandleRead = CreateFileW(
native_name.c_str(),
GENERIC_READ | (isWriteAccess ? GENERIC_WRITE : 0),
FILE_SHARE_READ | FILE_SHARE_WRITE,
NULL,
dwCreationDisposition,
FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS | FILE_FLAG_OVERLAPPED,
NULL);
}
```
generates `INVALID_HANDLE_VALUE`.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/149 unable to install openvds on amazon linux (2022-10-21, sangamesh hooli)

ERROR: Could not find a version that satisfies the requirement openvds (from versions: none)
ERROR: No matching distribution found for openvds
I have tried all the options mentioned in https://packaging.python.org/en/latest/tutorials/installing-packages/#installing-from-pypi

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/150 Create pkg-config configuration (.pc file) for consumption by other projects (2022-10-28, Alexander Jaust)

It would be nice if OpenVDS could generate a [pkg-config](https://www.freedesktop.org/wiki/Software/pkg-config/) configuration file (`.pc` file) for easy consumption of the OpenVDS project in other projects. This would especially help with projects that are not using CMake as their build system. One example would be projects using Go and its build system. Go has explicit support for pkg-config in the [CGO package](https://pkg.go.dev/cmd/cgo#hdr-Using_cgo_with_the_go_command) to obtain the flags needed for compiling and linking applications against other packages. Would this be something one could add?
I made a quick test with a minimal template for the pkg-config configuration file, which worked in my setup. I have attached the patch. It creates an `openvds.pc` file from the `openvds.pc.in` template file stored in `CMake` via a `configure_file` step in the CMake build process. The configuration file is installed into `TARGET_DIR/lib/pkgconfig/`. If the patch is acceptable as-is or needs only a small amount of extra work, I would submit it as a merge request.
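For reference, a minimal `openvds.pc.in` template might look like the sketch below (the variable names and description text are assumptions for illustration; the attached patch is authoritative):

```
prefix=@CMAKE_INSTALL_PREFIX@
libdir=${prefix}/lib
includedir=${prefix}/include

Name: OpenVDS
Description: Open-source implementation of the Volume Data Store (VDS) format
Version: @PROJECT_VERSION@
Libs: -L${libdir} -lopenvds
Cflags: -I${includedir}
```

With such a file installed, a consumer can obtain flags via `pkg-config --cflags --libs openvds`.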
[0001-Add-pkg-config-configuration-file-generation.patch](/uploads/612c2ede69944f24f2e8cb4ec83d8927/0001-Add-pkg-config-configuration-file-generation.patch)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/151 3.0.4 fmt related build failure in IOManagerAzureSdkForCpp (2022-11-01, Alena Chaikouskaya)

Hi,
We have a problem building 3.0.4 with the new Azure SDK.
We are running
```
RUN cmake -S . \
-B build \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_JAVA=OFF \
-DBUILD_PYTHON=OFF \
-DBUILD_EXAMPLES=OFF \
-DBUILD_TESTS=OFF \
-DBUILD_DOCS=OFF \
-DDISABLE_AWS_IOMANAGER=ON \
-DDISABLE_AZURESDKFORCPP_IOMANAGER=OFF \
-DDISABLE_GCP_IOMANAGER=ON \
-DDISABLE_DMS_IOMANAGER=OFF \
-DDISABLE_STRICT_WARNINGS=OFF
```
Unfortunately, on every OS we checked (Alpine, etc.) this fails with
```
In file included from /open-vds/3rdparty/fmt-9.1.0/include/fmt/format.h:48,
from /open-vds/src/OpenVDS/IO/IOManagerAzureSdkForCpp.h:33,
from /open-vds/src/OpenVDS/IO/IOManager.cpp:32:
/open-vds/3rdparty/fmt-9.1.0/include/fmt/core.h:1334:47: error: there are no arguments to 'format_as' that depend on a template parameter, so a declaration of 'format_as' must be available [-fpermissive]
```
If we exchange fmt 9.1.0 for 7.1.3 in the build script, the build runs fine.
But then new errors appear when the library is used:
```
libazure-core.so, needed by /open-vds/Dist/OpenVDS/lib/libopenvds.so, not found (try using -rpath or -rpath-link)
libazure-storage-blobs.so, needed by /open-vds/Dist/OpenVDS/lib/libopenvds.so, not found (try using -rpath or -rpath-link)
libazure-storage-common.so, needed by /open-vds/Dist/OpenVDS/lib/libopenvds.so, not found (try using -rpath or -rpath-link)
```
That makes sense as only some azure libraries are copied:
```
Installing: /open-vds/Dist/OpenVDS/lib/libazurestorage.so.7.5
Installing: /open-vds/Dist/OpenVDS/lib/libazurestorage.so.7
```
when before (code based on version 2.3.3) `libazure-core.so`, `libazure-storage-blobs.so` and `libazure-storage-common.so` were installed as well.
```
Installing: /open-vds/Dist/OpenVDS/lib/libazurestorage.so.7.5
Installing: /open-vds/Dist/OpenVDS/lib/libazurestorage.so.7
Installing: /open-vds/Dist/OpenVDS/lib/libazurestorage.so
Installing: /open-vds/Dist/OpenVDS/lib/libazure-core.so
Installing: /open-vds/Dist/OpenVDS/lib/libazure-storage-common.so
Installing: /open-vds/Dist/OpenVDS/lib/libazure-template.so
Installing: /open-vds/Dist/OpenVDS/lib/libazure-identity.so
Installing: /open-vds/Dist/OpenVDS/lib/libazure-security-keyvault-common.so
Installing: /open-vds/Dist/OpenVDS/lib/libazure-security-keyvault-keys.so
Installing: /open-vds/Dist/OpenVDS/lib/libazure-storage-blobs.so
Installing: /open-vds/Dist/OpenVDS/lib/libazure-storage-files-datalake.so
Installing: /open-vds/Dist/OpenVDS/lib/libazure-storage-files-shares.so
```
Now, to work around this, we have to manually copy the `azure-sdk-for-cpp_12.3.0_install` artifacts to the installation directory.
Can this be fixed?

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/152 Broken Win ErrorToString implementation for non-english system lang (2023-04-04, Artem В)

Let's look at a minimal code sample:
```
OpenVDS::Error error;
OpenVDS::SetIoError(8,error );
```
We want to get the error description in the system language on Windows.
```
error {code=8 string="???????????? ???????? ?????? ??? ????????? ???? ???????.\r\n" } OpenVDS::VDSError
```
A *non-English system language* returns an unreadable description. For example, I use the Kazakh language.
Solution:
```
std::string ws2s(const std::wstring& s)
{
int len;
int slength = (int)s.length() + 1;
len = WideCharToMultiByte(CP_UTF8, 0, s.c_str(), slength, 0, 0, 0, 0);
std::string r(len, '\0');
WideCharToMultiByte(CP_UTF8, 0, s.c_str(), slength, &r[0], len, 0, 0);
return r;
}
std::string ErrorToString(DWORD error)
{
wchar_t buf[256];
FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
NULL, error, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
buf, (sizeof(buf) / sizeof(wchar_t)), NULL);
std::wstring ws(&buf[0]);
return ws2s(ws);
}
```
Or any equivalent use of wide characters in the FormatMessage WinAPI call.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/153 Cannot create IJKCoordinateTransformer for 2D dataset (Python) (2022-10-31, Alexander Jaust)

## Description
I am currently playing around with the creation of VDS files. I am especially interested in working via the Python interface and with the different coordinate systems. I set up a small [Python script](/uploads/8d79d553af1f908973720a1e03b21e06/write_2d_vds_data_testing.py) that creates a simple 2D dataset from a NumPy array with random content. Parts of the script are based on the `npz_to_vds.py` script from the examples. I would like to convert between inline/crossline coordinates, voxel coordinates and world coordinates.
In my script, the creation of the VDS file is successful. I also see that the file is recognized as a 2D file by OpenVDS during writing, since the chunks written to the page buffer are 4*brick_size. However, when I want to obtain the `IJKCoordinateTransformer` for this file, I run into the following exception
```text
Exception:
Dimension -1 is not a valid dimension. Dimensionality_Max is 6.
```
When I create a 3D file with only one coordinate in z direction, obtaining the transformer seems to be successful.
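For context, the conversion I am after is essentially an affine mapping. A plain-NumPy sketch of an IJ-to-world transform with made-up 2D geometry (this is only an illustration, not the OpenVDS implementation):

```python
import numpy as np

# Hypothetical 2D survey geometry: a world origin plus a world-offset vector per index axis.
origin = np.array([1000.0, 2000.0])
i_step = np.array([25.0, 0.0])   # world offset per I index
j_step = np.array([0.0, 12.5])   # world offset per J index

def ij_to_world(i, j):
    """Map (i, j) voxel indices to world coordinates."""
    return origin + i * i_step + j * j_step

print(ij_to_world(2, 4))  # [1050. 2050.]
```

This is the kind of mapping (plus its inverse and the inline/crossline annotation axes) that the transformer provides for 3D datasets.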
## Expectation
I obtain the coordinate transformer which allows me to transform between [different coordinate](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/cppdoc/struct/structOpenVDS_1_1IJKCoordinateTransformer.html) systems (ijk, inline/crossline etc.).
## Questions
- Is this behavior expected? I assumed that I could still work with the IJK transformer.
- Do I create the file as a 2D file in a wrong way?
- If the behavior is expected, would it be possible to extend the transformer to work with 2D data and/or as a quick fix to make the error message more expressive?
## System
- Arm64 MacOS 12.6
- VDS 3.0.3 with Python interface

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/154 Crash in Segy import while eject flash drive with segy file (or disconnect lan/wi-fi with source segy file) (2023-03-28, Artem В)

It is extremely hard to reproduce, but the bug exists in all releases of open-vds.
Let's discuss two different cases:
1. Move source segy file to flash drive
2. Import segy file (via SEGImport utils)
3. Eject flash drive while processing
4. Crash without any dump
Another case:
1. Move source segy file to external network drive (google drive, smb or any other)
2. Import segy file (via SEGImport utils)
3. Unplug ethernet cable (disconnect from wifi/lan, close your corporate vpn connection or any other) while processing
4. Crash without any dump
I found that the main cause of the crash is inside `TraceDataManager`.
The crash happens here:
```
const char* header = traceDataManager.getTraceData(trace, error);
if (error.code)
{
outputPrinter.printWarning("IO", "Failed when reading data", fmt::format("{}", error.code), error.string);
break;
}
const void* data = header + SEGY::TraceHeaderSize;
int primaryTest = fileInfo.Is2D() ? 0 : SEGY::ReadFieldFromHeader(header, fileInfo.m_primaryKey, fileInfo.m_headerEndianness),
secondaryTest;
```
![image](/uploads/6b6a1d4b0140865730491d87a35c71da/image.png)
As you can see, the `TraceDataManager::getTraceData` implementation looks safe:
```
const char
* basePtr = static_cast<const char *>(pageView->Pointer(error));
```
but... you lose the nullptr check when the address is returned.
I added the required check for an MVP:
![image](/uploads/a31740e7589fad71c1f61cea788424e4/image.png)
And yes - `basePtr` is nullptr.
Returning this pointer with any offset added will crash the application.
---
Solution: add this code snippet:
```
const char *
getTraceData(int64_t traceNumber, OpenVDS::Error & error) const
{
...
// Additional check
if (basePtr == nullptr)
{
error.code = 1;
error.string = "Failed to acquire pageView pointer";
return nullptr;
}
return basePtr + (traceNumber - pageTrace) * m_traceByteSize;
}
```
---
How to 100% reproduce:
0. Start SEGYImport in a debug session
1. Copy the SEG-Y file to a flash drive, start importing, and wait until 10-20-N% is done.
2. Set breakpoint to line: ``` for (int64_t trace(firstTrace); trace <= segment->m_traceStop && error.code == 0; ++trace, ++tertiaryIndex)```
3. Eject flash drive
4. Iterate over the loop until the crash :)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/155 Allocation error when using openvds.whl (2023-03-28, Subha Ram)

Currently trying to read a .vds from SDMS using `vds = openvds.open(URL, connection_string)`.
Get the following error:
RuntimeError: Error on downloading VolumeDataLayout object: bad allocation.
Any suggestions?

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/156 SEGYImport run on segy with format codes 3 and 8 creates wrong data (2022-11-23, Alena Chaikouskaya)

SEGYImport states that it supports the following data sample format codes:
```
3 = 2-byte, two's complement integer
8 = 1-byte, two's complement integer
```
Yet the conversion results seem wrong.
Attached are the synthetic files with 3 ilines and xlines [100_as_format_3.segy](/uploads/a03cb15e40a745ce15f23797d4765815/100_as_format_3.segy)[100_as_format_8.segy](/uploads/14721b426d8805a3140f5c877a15ab1f/100_as_format_8.segy).
Formats are 3 and 8.
All the values in the files are equal to 100. [One can see that the data is stored as `00 64` (format 3) and `64` (format 8), which corresponds to two's complement integer notation. Processing tools also read the data as `100`.]
Running
```
SEGYImport --vdsfile 100_as_format_8.vds 100_as_format_8.segy
SEGYImport --vdsfile 100_as_format_3.vds 100_as_format_3.segy
```
creates vds files with corresponding formats:
```
"format" : "Format_U16"
"format" : "Format_U8"
```
Running modified code from one of the examples
```
sliceDimension=2
sliceIndex=0
accessManager = openvds.getAccessManager(vds)
layout = openvds.getLayout(vds)
axisDescriptors = [layout.getAxisDescriptor(dim) for dim in range(layout.getDimensionality())]
min = tuple(sliceIndex + 0 if dim == sliceDimension else 0 for dim in range(6))
max = tuple(sliceIndex + 1 if dim == sliceDimension else layout.getDimensionNumSamples(dim) for dim in range(6))
req = accessManager.requestVolumeSubset(min, max, format = format)
height = max[0] if sliceDimension != 0 else max[1]
width = max[2] if sliceDimension != 2 else max[1]
data = req.data.reshape(width, height).transpose()
print(data)
```
with
```
vds=openvds.open("100_as_format_3.vds")
format = openvds.VolumeDataChannelDescriptor.Format.Format_U16
```
gives a slice of `32868`
and with
```
vds=openvds.open("100_as_format_8.vds")
format = openvds.VolumeDataChannelDescriptor.Format.Format_U8
```
gives a slice of `228`
which are not `100`.
This seems to be caused by code [adding 32768 and 128](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/src/SEGYUtils/VDSSEGYInfo.cpp#L7) to the values respectively, but I fail to see why it was there in the first place.
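As a sanity check, that bias exactly reproduces the slice values above: adding half the type's range (as one would when mapping a signed two's complement sample into unsigned storage) maps 100 to the observed unsigned values. A quick sketch:

```python
def bias_to_unsigned(value, bits):
    # Map a signed two's complement sample to unsigned storage by adding half the range.
    return value + (1 << (bits - 1))

print(bias_to_unsigned(100, 16))  # 32868, the Format_U16 slice value above
print(bias_to_unsigned(100, 8))   # 228, the Format_U8 slice value above
```

So the numbers are consistent with a signed-to-unsigned bias being applied on import without a matching un-bias on read.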
All other supported format codes in spot-tests returned the expected values (check was run on little-endian only system/files).
Is there something I misunderstand about these two codes?https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/157Behavior for files with irregular inlines/crosslines2022-11-15T17:29:04ZAlena ChaikouskayaBehavior for files with irregular inlines/crosslines_(sorry, I will try to divide issue into parts, as for some reason it gets constantly marked as spam)_
While playing with openvds we accidentally created synthetic segy files which are imported into vds incorrectly (roundtrip breaks).
W..._(sorry, I will try to divide issue into parts, as for some reason it gets constantly marked as spam)_
While playing with openvds we accidentally created synthetic segy files which are imported into vds incorrectly (roundtrip breaks).
We are not sure how likely it is that files like that appear in reality, but our domain knowledge source tells us that it is possible in theory.
1. File [broken1.segy](/uploads/1e35d0885342952d3f3c5ec69273e02c/broken1.segy) with ilines `[1, 6, 11, 15]`
(Stride is 5, last element is at distance 4)
2. File [broken2.segy](/uploads/f2e69b64faf656cdb771ac2215264221/broken2.segy) with ilines `[1, 6, 11, 16, 21, 26, 27]`
(Stride is 5, last element is at distance 1)
Some rows just get lost, never to be found in vds.
The main observation is that, since openvds apparently ignores the distance between the last two values on purpose, we need to supply a different distance to the last element to reproduce this behavior.
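For what it's worth, the claim that the SEGYImport.cpp expression `a + (a-b)%c - b` need not be divisible by `c` can be checked in two lines, using the `broken1.segy` values above (first inline 1, last inline 15, stride 5):

```python
a, b, c = 15, 1, 5  # last inline, first inline, stride (from broken1.segy)
value = a + (a - b) % c - b
print(value, value % c)  # 18 3 -> not divisible by the stride
```

(Exactly which variables map to a, b, and c in the actual code is my reading of it, so treat the substitution as an assumption.)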
How short or long the distance to the last element is might also matter, as changing [this](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/tools/SEGYImport/SEGYImport.cpp#L2185) suspicious piece of code (it seems that `a + (a-b)%c - b` is not divisible by `c`) fixed only one of those cases for me.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/158 Installation of master fails on MacOS 13.0 with Arm64 (2024-02-01, Alexander Jaust)

## Description
I am trying to build OpenVDS `master` on an Arm64 Mac with the current MacOS release (Ventura), but it fails. Any input would be appreciated. I would also try to supply patches/merge requests where it makes sense.
I am using the following command to configure
```text
cmake -S . \
-B build \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_SHARED_LIBS=ON \
-DBUILD_JAVA=OFF \
-DBUILD_PYTHON=ON \
-DBUILD_EXAMPLES=ON \
-DBUILD_TESTS=OFF \
-DBUILD_DOCS=OFF \
-DDISABLE_AWS_IOMANAGER=ON \
-DDISABLE_AZURESDKFORCPP_IOMANAGER=ON \
-DDISABLE_GCP_IOMANAGER=ON \
-DDISABLE_DMS_IOMANAGER=ON \
-DDISABLE_STRICT_WARNINGS=ON \
-DCMAKE_INSTALL_PREFIX="${INSTALLATION_DIR}" \
```
where `INSTALLATION_DIR` points to `/Users/aej/software/openvds-master-install-python`.
and the following command line for building OpenVDS
```text
cmake --build "build" \
--config Release \
--target install \
-j 1 \
--verbose \
```
## Expectation
OpenVDS is built and installed in the specified directory.
## Actual behavior
The build fails. I found the following problems
1. If I delete a downloaded third-party dependency from the `3rdParty` directory, delete my build directory and then rerun the CMake configuration step the automatic fetching of the library fails. After a second deletion of the build directory and rerunning the CMake configuration step the third-party library seems to be fetched correctly.
2. I get a problem due to the inclusion of `curl.h` by `cpprestsdk`
```text
In file included from /Users/aej/software/compilescripts/openvds/openvds-3.1.0-src/src/OpenVDS/IO/IOManagerCurl.h:41:
/Library/Developer/CommandLineTools/SDKs/MacOSX13.0.sdk/usr/include/curl/curl.h:115:41: error: too few arguments provided to function-like macro invocation
__has_declspec_attribute(dllimport))
```
This seems to be related to changed behavior of LLVM/clang and it appears [with other projects](https://github.com/llvm/llvm-project/issues/53269) and has been reported to [cURL as well](https://github.com/curl/curl/issues/8293). It seems to be some interaction of cURL and casablanca. There are an [issue](https://github.com/microsoft/cpprestsdk/issues/1710) and a [pull request](https://github.com/microsoft/cpprestsdk/pull/1723) in the `cpprestsdk` repository for this, but they say that this will not be fixed since `cpprestsdk` is in maintenance mode.
I can fix it by commenting out the `#define dllimport`, but I am not sure if that is the best thing to do.
```text
cpprestapi_file="${SOURCE_DIR}/3rdparty/cpprestapi-2.10.16/Release/include/cpprest/details/cpprest_compat.h"
sed -i '' 's/\#define dllimport/\/\/\#define dllimport/' "${cpprestapi_file}"
sed -i '' 's/\/\/\/\/\#define dllimport/\/\/\#define dllimport/' "${cpprestapi_file}"
```
As the [`cpprestsdk`](https://github.com/microsoft/cpprestsdk) project is marked as being in maintenance mode, it may be necessary to move to another project in the (near?) future.
Side question: Why is the package called `cpprestapi` within the OpenVDS project? It makes debugging a bit confusing since the actual package/repository is called `cpprestsdk`.
3. Building the AWS IOManager fails
```text
/Users/aej/software/compilescripts/openvds/openvds-master-src/src/OpenVDS/IO/IOManagerAWSCurl.h:10:10: fatal error: 'aws/crt/auth/Credentials.h' file not found
#include <aws/crt/auth/Credentials.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```
The file exists though if I search for in from the OpenVDS repository root
```text
$ find . -iname "Credentials.h" -type f
./3rdparty/google-cloud-cpp-1.14.0/google/cloud/storage/oauth2/credentials.h
./3rdparty/aws-cpp-sdk-1.9.336_/aws-cpp-sdk-cognito-identity/include/aws/cognito-identity/model/Credentials.h
./3rdparty/aws-cpp-sdk-1.9.336_/aws-cpp-sdk-finspace-data/include/aws/finspace-data/model/Credentials.h
./3rdparty/aws-cpp-sdk-1.9.336_/aws-cpp-sdk-connect/include/aws/connect/model/Credentials.h
./3rdparty/aws-cpp-sdk-1.9.336_/aws-cpp-sdk-sts/include/aws/sts/model/Credentials.h
./3rdparty/aws-cpp-sdk-1.9.336_/crt/aws-crt-cpp/include/aws/crt/auth/Credentials.h
./3rdparty/aws-cpp-sdk-1.9.336_/crt/aws-crt-cpp/crt/aws-c-auth/include/aws/auth/credentials.h
```
I am currently a bit stuck at this step since I have not yet found any straightforward way to avoid this problem. I assume that the include paths are not populated properly.
## System
- Arm64 MacOS 13.0.1
- OpenVDS `master` branch
- clang 14.0.0
```text
$ clang --version
Apple clang version 14.0.0 (clang-1400.0.29.202)
Target: arm64-apple-darwin22.1.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
```
- cmake 3.24.3 (via Homebrew)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/159 Online Python documentation incomplete (2022-11-21, Alexander Jaust)

It seems that the online documentation of the Python interface is incomplete. I did not go through everything, but I found the following inconsistencies and problems.
Would it be possible to expose more documentation on the homepage? I am not sure if some documentation is simply missing or not generated on purpose. In all cases that I have checked, the classes and methods have documentation accessible via Python's `help(...)` function.
## Observations
### VolumeDataLayoutDescriptor
I do not find anything about the constructors of `class VolumeDataLayoutDescriptor` in the [online documentation](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/python-api.html#openvds.VolumeDataLayoutDescriptor). When I query the documentation via Python I get much more information including the constructor.
That means if I do the following in a Python session
```text
import openvds
help(openvds.VolumeDataLayoutDescriptor)
```
gives the following (shortened) output
```text
Help on class VolumeDataLayoutDescriptor in module openvds.core:
class VolumeDataLayoutDescriptor(pybind11_builtins.pybind11_object)
| Method resolution order:
| VolumeDataLayoutDescriptor
| pybind11_builtins.pybind11_object
| builtins.object
|
| Methods defined here:
|
| __init__(...)
| __init__(*args, **kwargs)
| Overloaded function.
|
| 1. __init__(self: openvds.core.VolumeDataLayoutDescriptor) -> None
|
| 2. __init__(self: openvds.core.VolumeDataLayoutDescriptor, brickSize: OpenVDS::VolumeDataLayoutDescriptor::BrickSize, negativeMargin: int, positiveMargin: int, brickSize2DMultiplier: int, lodLevels: OpenVDS::VolumeDataLayoutDescriptor::LODLevels, options: OpenVDS::VolumeDataLayoutDescriptor::Options, fullResolutionDimension: int = 0) -> None
|
...
```
As you can see there is some information on the constructor.
### VolumeDataRequest
There are (at least?) two types of `VolumeDataRequest`, but only one is documented in the [online documentation](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/python-api.html#openvds.VolumeDataRequest).
The documentation explains the usage of an object of type `openvds.core.VolumeDataRequest`, see
```text
>>> import openvds
>>> data_request = openvds.VolumeDataRequest
>>> print(data_request)
<class 'openvds.core.VolumeDataRequest'>
>>> buffer = data_request.buffer
>>> data = data_request.data
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'openvds.core.VolumeDataRequest' has no attribute 'data'
```
This class does not have any attribute called `data` so the error is correct.
However, a call to the `requestVolumeSubset` function of a `VolumeDataAccessManager` will return an object of type `openvds.volumedataaccess.VolumeDataRequest` with a different interface, which has a property called `data` instead of `buffer`.
```text
>>> import openvds
>>> data_request = openvds.volumedataaccess.VolumeDataRequest
>>> print(data_request)
<class 'openvds.volumedataaccess.VolumeDataRequest'>
>>> buffer = data_request.buffer
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'VolumeDataRequest' has no attribute 'buffer'
>>> data = data_request.data
```
This class does not have a `buffer` attribute, so the error message is fine. It is not clear from the documentation of [`VolumeDataAccessManager`](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/python-api.html#id59) that the return type is not an `openvds.core.VolumeDataRequest`.
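For readers trying to reconcile the two types, the relationship can be illustrated with a small stand-in sketch. The class names mirror the real ones, but the implementations below are toy assumptions for illustration, not OpenVDS code:

```python
import array

class CoreVolumeDataRequest:
    """Toy stand-in for openvds.core.VolumeDataRequest: exposes a raw buffer."""
    def __init__(self, values):
        self.buffer = array.array("f", values)

class WrappedVolumeDataRequest:
    """Toy stand-in for openvds.volumedataaccess.VolumeDataRequest: wraps the
    core request and exposes the completed result as `data`, not `buffer`."""
    def __init__(self, core_request):
        self._core = core_request

    @property
    def data(self):
        # The real wrapper exposes a numpy array; a plain list keeps this runnable.
        return list(self._core.buffer)

request = WrappedVolumeDataRequest(CoreVolumeDataRequest([1.0, 2.0, 3.0]))
print(request.data)  # the wrapper has `data`; the core object only has `buffer`
```

This two-layer structure would explain why the documented attribute set depends on which of the two `VolumeDataRequest` types you are holding.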
### openvds.IJKCoordinateTransformer
There is no documentation on the `IJKCoordinateTransformer` for Python, even though it is accessible from Python:
```text
>>> transformer = openvds.IJKCoordinateTransformer
>>> print(transformer)
<class 'openvds.core.IJKCoordinateTransformer'>
```
### VolumeDataAccessManager
The type annotation of the `handle` parameter in the constructor seems to be off: it is `int`, but it should be `openvds.core.VDS`.
```text
>>> vdam = openvds.VolumeDataAccessManager
>>> print(vdam)
<class 'openvds.volumedataaccess.VolumeDataAccessManager'>
>>> vdam = openvds.VolumeDataAccessManager(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/aej/software/openvds-3.0.3-install-python/python/openvds/volumedataaccess.py", line 184, in __init__
self._manager = openvds.core.getAccessManager(handle)
TypeError: getAccessManager(): incompatible function arguments. The following argument types are supported:
1. (handle: openvds.core.VDS) -> OpenVDS::VolumeDataAccessManager
Invoked with: 1
```
Update: In the last case, supplying an object of type `openvds.core.VDS` also fails, so I assume the documentation might be off here or I misunderstand something.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/160
Extend documentation of examples and supply sample files (2022-11-18, Alexander Jaust)

It is great that there are some examples on how to use OpenVDS. It would be even more helpful if there were a short explanation of what each example does and what assumptions are made, and, if necessary, if input files were included. I think that would make life much easier for beginners.
For example, the [npz_to_vds.py](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/examples/NpzToVds/npz_to_vds.py) script is really helpful, but it would be nice if the input file were provided with the script. It is not immediately clear what assumptions are made about the file. When I first looked into the file, I had the following questions:
- Why is there a `--npy` command line parameter that is never used?
- Why do the axis descriptors seem to expect x, y and z to be in a certain range [0,2000]?
- Why is the value range computed in the way it is computed? What is the "correct" way to give a value range? Should it be certain percentiles?
- Where is the input file or how can I create a valid input file myself?
- How does writing data via the page accessor actually work and where do I find more information about that?
- Do I have to use `open` and `close` for interaction with the VDS file or could I also use
```python
...
with openvds.create(
args.url,
args.connection,
layoutDescriptor,
axisDescriptors,
channelDescriptors,
metaData,
compressionMethod,
compressionTolerance
) as vds:
layout = openvds.getLayout(vds)
...
    accessor.commit()
```

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/161
World coordinates when cdp is not defined (2022-11-30, Alena Chaikouskaya)

We believe that this case is unlikely to happen in real files, but wanted to point that out anyway.
When a file has no cdp information, asking for data in World coordinates returns data in the Annotation coordinate system. If the transformation is impossible because World coordinates are missing, I would have expected some error message along the way.
SEGY [without_cdp.segy](/uploads/2af802cd954648a22c6a8b03c6241f54/without_cdp.segy):
```
spec.samples = [4, 8]
spec.ilines = [3, 4, 5]
spec.xlines = [10, 11]
DelayRecordingTime: 5,
```
openvds:
```
auto transformer = OpenVDS::IJKCoordinateTransformer(layout);
auto annotation = transformer.IJKIndexToAnnotation({0, 0, 0});
auto world = transformer.IJKIndexToWorld({ 0, 0, 0 });
```
Result:
```
Annotation: 3 10 5
World: 3 10 -5
```
World coordinates created from transformed I/J are same as annotation, which is misleading.
World coordinate created from transformed K is `-Time` (also in files with cdp), which seems strange, though somewhat understandable.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/162
Byte swap bug in Ibm2ieee (2023-06-20, Artem В)

We found that the `Ibm2ieee` implementation does not take into account file and host endianness.
For example:
```
template<SEGY::Endianness ENDIANNESS, SEGY::BinaryHeader::DataSampleFormatCode FORMAT>
void copySamples(float * prTarget, const unsigned char * puSource, int iSampleMin, int iSampleMax)
{
  // Sketch of the proposed fix: assemble each 32-bit word according to the
  // *file* endianness before converting
  for (int iSample = iSampleMin; iSample < iSampleMax; iSample++)
  {
    int nValue;
    if (ENDIANNESS == SEGY::Endianness::BigEndian)
    {
      nValue = (int)(puSource[iSample * 4 + 0] << 24 | puSource[iSample * 4 + 1] << 16 | puSource[iSample * 4 + 2] << 8 | puSource[iSample * 4 + 3]);
    }
    else
    {
      nValue = (int)(puSource[iSample * 4 + 3] << 24 | puSource[iSample * 4 + 2] << 16 | puSource[iSample * 4 + 1] << 8 | puSource[iSample * 4 + 0]);
    }
    // nValue then feeds the IBM-to-IEEE conversion into prTarget
  }
}
```
This uses a native conversion that respects the real file endianness.
The original `Ibm2ieee` from SEGY.cpp always swaps bytes:
```
template<SEGY::Endianness ENDIANNESS, SEGY::BinaryHeader::DataSampleFormatCode FORMAT>
void copySamples(float * prTarget, const unsigned char * puSource, int iSampleMin, int iSampleMax)
{
if (FORMAT == SEGY::BinaryHeader::DataSampleFormatCode::IBMFloat)
{
SEGY::Ibm2ieee(prTarget, puSource + iSampleMin * 4, iSampleMax - iSampleMin);
return;
}
}
void ibm2ieee(void * to, const void * from, size_t len)
{
...
#ifdef WIN32
fr = _byteswap_ulong(fr);
#else
fr = __builtin_bswap32(fr);
#endif // WIN32
...
}
```
The real file and host endianness need to be checked before swapping bytes.
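To make the intended fix concrete, here is a hedged Python sketch (not the OpenVDS implementation) of an IBM-to-IEEE conversion that normalizes each 4-byte sample to big-endian order based on the *file's* endianness before decoding, instead of swapping unconditionally:

```python
import struct

def ibm32_to_float(big_endian_bytes: bytes) -> float:
    """Decode a 4-byte big-endian IBM single-precision float."""
    (u,) = struct.unpack(">I", big_endian_bytes)
    sign = -1.0 if u >> 31 else 1.0
    exponent = (u >> 24) & 0x7F        # base-16 exponent, biased by 64
    fraction = u & 0x00FFFFFF          # 24-bit fraction, 0 <= f < 1
    return sign * (fraction / float(1 << 24)) * 16.0 ** (exponent - 64)

def read_ibm_samples(raw: bytes, file_is_big_endian: bool) -> list:
    """Swap each sample only when the file is little-endian -- the swap
    decision depends on the file endianness, never on the host's."""
    out = []
    for i in range(0, len(raw), 4):
        word = raw[i:i + 4]
        if not file_is_big_endian:
            word = word[::-1]          # little-endian file: reverse bytes first
        out.append(ibm32_to_float(word))
    return out
```

For example, the big-endian IBM word `0x41100000` decodes to `1.0`, and the same sample stored in a little-endian file (`00 00 10 41`) decodes identically once the byte order is normalized first.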
---
This produces wrong trace data for little-endian SEG-Y files with the IBMFloat data format.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/163
Default behaviour of SEGYImport --sample-unit (2022-12-15, Erlend Hårstad)

Hi!
I recently noticed that the default value of SEGYImport --sample-unit is "ms". This seems like a dangerous default. I think it's reasonable to assume that a lot of people (including myself :smile:) will mess up by trusting the default here. Would a better default be the value of the Trace Value Measurement Unit (bytes 203-204) in the SEGY? Or, if that might not be trusted and/or is rarely present, then another good default could be unitless?
I would also _love_ a strict option that fails the creation if something is even slightly off.
Related to axis units, I also have a question about the relationship between axis units and axis names. I would expect the name and unit to correspond, i.e., if unit = m then name (annotation) = Depth, and so forth. However, SEGYImport seems to always write "Sample" as the axis name.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/164
Build fails for OpenSSL v3 due to strict warnings in Azure dependency (2022-12-22, Alexander Jaust)

When I want to install OpenVDS in a Docker container using Alpine 3.17, the build fails. This seems to be due to the fact that Alpine has updated its default OpenSSL installation to OpenSSL v3. This triggers warnings in the Azure dependencies. The dependencies are, by default, compiled with "warnings as errors".
I am not sure what the best way to fix the problem would be. In general, it would be nice to be able to preinstall the dependencies myself and avoid building them during the OpenVDS build process (this applies to all 3rd-party dependencies). In that case one could install dependencies beforehand with suitable options and/or from the system's package manager. Moreover, one could also reuse the dependencies when upgrading to new OpenVDS versions, making the upgrade process faster and much lighter.
## Further Observations
- Alpine 3.16 installed OpenSSL v1, so this was not an issue back then.
- Setting `DISABLE_STRICT_WARNINGS=OFF` in OpenVDS is not forwarded to the dependency.
## Steps to reproduce
Create a Docker container based on alpine 3.17 and install OpenVDS dependencies
```bash
apk --no-cache add \
curl \
git \
g++ \
gcc \
make \
cmake \
curl-dev \
boost-dev \
libxml2-dev \
libuv-dev \
util-linux-dev
```
Afterwards, download, configure, and build OpenVDS with the following options
```bash
cmake -S . \
-B build \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_JAVA=OFF \
-DBUILD_PYTHON=OFF \
-DBUILD_EXAMPLES=OFF \
-DBUILD_TESTS=OFF \
-DBUILD_DOCS=OFF \
-DDISABLE_AWS_IOMANAGER=ON \
-DDISABLE_AZURESDKFORCPP_IOMANAGER=OFF \
-DDISABLE_GCP_IOMANAGER=ON \
-DDISABLE_DMS_IOMANAGER=OFF \
-DDISABLE_STRICT_WARNINGS=OFF
cmake --build build --config Release --target install --verbose
```
## Error
```text
#11 21.29 -- stderr output is:
#11 21.29 /open-vds/3rdparty/azure-sdk-for-cpp-12.3.0/sdk/core/azure-core/src/cryptography/md5.cpp: In member function 'virtual void {anonymous}::Md5OpenSSL::OnAppend(const uint8_t*, size_t)':
#11 21.29 /open-vds/3rdparty/azure-sdk-for-cpp-12.3.0/sdk/core/azure-core/src/cryptography/md5.cpp:166:65: error: 'int MD5_Update(MD5_CTX*, const void*, size_t)' is deprecated: Since OpenSSL 3.0 [-Werror=deprecated-declarations]
#11 21.29 166 | void OnAppend(const uint8_t* data, size_t length) { MD5_Update(m_context.get(), data, length); }
#11 21.29 | ~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#11 21.29 In file included from /open-vds/3rdparty/azure-sdk-for-cpp-12.3.0/sdk/core/azure-core/src/cryptography/md5.cpp:13:
#11 21.29 /usr/include/openssl/md5.h:50:27: note: declared here
#11 21.29 50 | OSSL_DEPRECATEDIN_3_0 int MD5_Update(MD5_CTX *c, const void *data, size_t len);
#11 21.29 | ^~~~~~~~~~
#11 21.29 /open-vds/3rdparty/azure-sdk-for-cpp-12.3.0/sdk/core/azure-core/src/cryptography/md5.cpp: In member function 'virtual std::vector<unsigned char> {anonymous}::Md5OpenSSL::OnFinal(const uint8_t*, size_t)':
#11 21.29 /open-vds/3rdparty/azure-sdk-for-cpp-12.3.0/sdk/core/azure-core/src/cryptography/md5.cpp:172:14: error: 'int MD5_Final(unsigned char*, MD5_CTX*)' is deprecated: Since OpenSSL 3.0 [-Werror=deprecated-declarations]
#11 21.29 172 | MD5_Final(hash, m_context.get());
#11 21.29 | ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~
#11 21.29 /usr/include/openssl/md5.h:51:27: note: declared here
#11 21.29 51 | OSSL_DEPRECATEDIN_3_0 int MD5_Final(unsigned char *md, MD5_CTX *c);
#11 21.29 | ^~~~~~~~~
#11 21.29 /open-vds/3rdparty/azure-sdk-for-cpp-12.3.0/sdk/core/azure-core/src/cryptography/md5.cpp: In constructor '{anonymous}::Md5OpenSSL::Md5OpenSSL()':
#11 21.29 /open-vds/3rdparty/azure-sdk-for-cpp-12.3.0/sdk/core/azure-core/src/cryptography/md5.cpp:180:13: error: 'int MD5_Init(MD5_CTX*)' is deprecated: Since OpenSSL 3.0 [-Werror=deprecated-declarations]
#11 21.29 180 | MD5_Init(m_context.get());
#11 21.29 | ~~~~~~~~^~~~~~~~~~~~~~~~~
#11 21.29 /usr/include/openssl/md5.h:49:27: note: declared here
#11 21.29 49 | OSSL_DEPRECATEDIN_3_0 int MD5_Init(MD5_CTX *c);
#11 21.29 |
```

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/165
Cpython3.11 wheels for linux (2022-12-22, Filip Brzęk)

Currently, the PyPI index has 3.1.2 wheels built for cpython3.11 only for Windows; AFAIK the latest `manylinux_2014` does have cpython3.11.
```log
docker run -it quay.io/pypa/manylinux2014_x86_64 ls /opt/python
cp310-cp310 cp36-cp36m cp38-cp38 pp37-pypy37_pp73 pp39-pypy39_pp73
cp311-cp311 cp37-cp37m cp39-cp39 pp38-pypy38_pp73
```
Thanks!

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/166
Cannot import openvds with Python 3.11 (2023-01-03, David Wade)

There is definitely something very wrong when attempting to import openvds on Python 3.11.
Please fix?
```
Collecting openvds
Downloading openvds-3.1.3-cp311-cp311-win_amd64.whl (19.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 19.1/19.1 MB 2.3 MB/s eta 0:00:00
Installing collected packages: numpy, openvds
Successfully installed numpy-1.24.1 openvds-3.1.3
(vds-311) PS C:\xxxx\xxxxx\xxxxx> python
Python 3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import openvds
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\xxxx\xxxx\vds-311\Lib\site-packages\openvds\__init__.py", line 1, in <module>
from .api import *
File "C:\xxxx\xxxx\vds-311\Lib\site-packages\openvds\api.py", line 18, in <module>
import openvds.core
ModuleNotFoundError: No module named 'openvds.core'
```

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/167
Abort trap 6 on OpenVDS::Open (2023-08-29, Erlend Hårstad)

Passing `OpenVDS::Open` a sas token where the Signed Resource Type (`srt`) parameter contains 'Container' (`c`) and _not_ object (`o`) causes an Abort trap 6. As far as I can tell, this happens for any combination of `srt` options as long as it contains `c` and not `o`. Although this particular combination doesn't make a whole lot of sense when working with VDS, it's still a completely valid sas.
I've only tested this on 3.0.3.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/168
SEGYImport allows nonsensical values for margin (2023-05-05, Alexander Jaust)

I have been playing around a bit with `SEGYImport` and observed the following behavior, which does not really make sense to me. I wonder if one could check for such nonsensical values when using `SEGYImport`.
## Observations
- I can specify negative margin sizes
```
SEGYImport --lod-levels=0 --margin -1 volve.sgy --vdsfile=volve.vds
```
The conversion will go up to `99.X %` and then hang. I did not wait too long and aborted the conversion. The resulting file still looks somewhat reasonable and the header is intact, i.e., I can check the header with `VDSInfo`. The resulting file also looks reasonably large, but a bit smaller than for `--margin 0`. I did not try to run any actual operations (requesting data or similar) from the VDS dataset.
- I can specify margins greater than or equal to the brick size.
```
SEGYImport --lod-levels=0 --margin 64 volve.sgy --vdsfile=volve.vds
```
This returns really quickly **without any error message**. However, the resulting VDS dataset is only 2MB large. It seems that (only) the header is written (correctly), but no other content. I have not checked the generated VDS any further, though.
For margin sizes greater than the brick size, I wonder what the meaning would be. Should the margin contain information from the neighboring bricks and their neighbors?
- I observe the behavior of generating 2MB VDS datasets (point above) as soon as my margin is at least half the brick size, in this case 32.
It could be that some of the behavior is linked to the amount of memory (32GB) or storage (about 640 Gi available).
## Platform
- Apple Arm M1 Max
- MacOS 13.1
- OpenVDS 3.0.3 (compiled from source)

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/169
SEGYImport documentation incomplete/inconsistent (legal values, default values) (2023-01-23, Alexander Jaust)

The comments here refer to the documentation of `SEGYImport` and the deep dive. I found some things that are inconsistent or easy to overlook, depending on how one works with the tools provided by OpenVDS.
## CLI help
I find that the documentation of `SEGYImport` in the terminal lacks important information. I tend to use the terminal a lot and thus also use `SEGYImport --help` frequently.
The documentation of valid input values is a bit inconsistent. For some options (attribute name, attribute unit...) the output shows the accepted options.
```
...
--attribute-name <string>
The name of the primary VDS channel. The
name may be Amplitude (default), Attribute,
Depth, Probability, Time, Vavg, Vint, or
Vrms (default: Amplitude)
--attribute-unit <string>
The units of the primary VDS channel. The
unit name may be blank (default), ft, ft/s,
Hz, m, m/s, ms, or s
...
```
However, for "brick size" and "level of detail" levels the limits are not mentioned:
```
...
-b, --brick-size <value> The brick size for the volume data store.
--lod-levels <value> The number of LODs to generate.
...
```
When digging a bit deeper into the [developer documentation](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/cppdoc/namespace/namespaceOpenVDS.html#_CPPv4N7OpenVDS26VolumeDataLayoutDescriptor9BrickSizeE), one might even get the expectation that one should be allowed to use larger brick sizes than currently allowed. I assume that brick sizes >256 are only useful for 2D datasets. For level-of-detail levels, the deep dive explains that the value can be at least 0 and at most 12.
In addition, the [documentation](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/vds/deepdive/deepdive.html#brick-size) does not directly mention that the brick size has to be a (certain) power of two.
The output also does not state the default value for the number of LODs. I expected it to be zero, as there is no general recommendation for the number of levels in the deep dive (see also the comment below).
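A small pre-flight validator along these lines could make the constraints explicit. The ranges below are assumptions pieced together from the deep dive and this discussion (brick size a power of two in 32-256, LOD levels 0-12, margin below half the brick size), not an official specification:

```python
def validate_import_options(brick_size: int, lod_levels: int, margin: int) -> list:
    """Hypothetical pre-flight check for SEGYImport-style options.
    Returns a list of human-readable problems; empty means the values
    pass these (assumed) constraints."""
    errors = []
    # power-of-two check: a power of two has no bits in common with (n - 1)
    if brick_size < 32 or brick_size > 256 or brick_size & (brick_size - 1):
        errors.append(f"brick size {brick_size} must be a power of two in [32, 256]")
    if not 0 <= lod_levels <= 12:
        errors.append(f"lod levels {lod_levels} must be in [0, 12]")
    if not 0 <= margin < brick_size // 2:
        errors.append(f"margin {margin} must be in [0, {brick_size // 2})")
    return errors
```

With such a check, `--margin -1` or a margin of half the brick size would be rejected with a message instead of hanging or silently producing a header-only dataset (the behavior reported in the previous issue).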
The output of `SEGYImport --help` also deviates slightly from the [documentation on the homepage](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/SEGYImport/README.html). I am not sure if it would be possible to synchronize the content of the README with the actual `SEGYImport` output.
There might be further options with incomplete documentation.
## Deep dive
### Margin
It is unclear to me what the correct margin size to choose is, and what the actual default is, without looking into the source code of `SEGYImport`. In the [deep dive](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/vds/deepdive/deepdive.html#margin-size) it is mentioned that a margin of 4 should be used as the default and that the value is important when working with levels of detail. No connection to wavelets is mentioned.
The [README](https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/tools/SEGYImport/README.html) mentions that the default value for the margin is 0, and 4 if one uses wavelet compression. Checking the source code confirms the statement in the README.
### Level of detail
It is explained when one should use levels of detail, but no general recommendation is given for how many level-of-detail levels one should generate (except if one wants to use FAST). However, `SEGYImport` chooses 2 as the general default and 4 for poststack data.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/170
Allow VDSCopy to overwrite existing SDMS dataset: `--allow-overwrite` (2023-03-15, Filip Brzęk)

Hello,
first of all, thank you very much for the great `3.1` release and the re-implementation of the `IOManagers` for SDMS; it's been very helpful, as `sdapi` was troublesome.
While examining the new IOManagers with `OPENVDS_DMS_CURL=1`, I noticed a change in behavior that prevents us from writing bulk trace data (VDS content) into a previously created dataset (with the metadata we require) that doesn't yet have any data loaded.
```shell
OPENVDS_DMS_CURL=1 AWS_REGION=us-east-1 VDSCopy ./data/syntethic_data.wavelet.vds sd://osdu/<test-project>/demo-1
[CURL http respons error 409. Automatic rety https://<REDACTED>/api/seismic-store/v3/dataset/tenant/osdu/subproject/<REDACTED>/dataset/demo-1?path=%2F]
...
[Could not create VDS sd://osdu/<REDACTED>/gsi-demo-1] Seismic dms lock failed: Http error respons: 409 -> https://<REDACTED>/api/seismic-store/v3/dataset/tenant/osdu/subproject/<REDACTED>/dataset/demo-1?path=%2F
- [seismic-store-service] The dataset sd://osdu/<REDACTED>/demo-1 already exists[seismic-store-service]
```
When it's run through the old DMS flow (using `sdapi`), it happily ignores the 409 and proceeds to write the data (command below for reference), but it results in a seg-fault at the end (this was reported in #123):
```
AWS_REGION=us-east-1 VDSCopy ./data/syntethic_data.wavelet.vds sd://osdu/<test-project>/demo-1
```
Is it possible to add `--allow-overwrite`, similar to the flag available in `VDSUploader.sh` in the HueSpace SDK, which would ignore the 409 when the dataset was created previously?
Rationale: we want to have control over how the dataset is created in SDMS, for data-lineage reasons, for which we need to create it ourselves and then populate the sd location with VDS content.
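The requested semantics could look roughly like this sketch; the function name, the flag, and the status handling below describe the proposal, not existing VDSCopy behavior:

```python
def create_or_reuse_dataset(status_code: int, allow_overwrite: bool) -> str:
    """Sketch of the requested `--allow-overwrite` handling: a 409 Conflict
    from seismic-store means the dataset already exists; with the flag set,
    the copy would proceed to populate it instead of aborting."""
    if status_code in (200, 201):
        return "created"
    if status_code == 409 and allow_overwrite:
        return "reusing existing dataset"
    raise RuntimeError(f"dataset creation failed with HTTP {status_code}")
```

Without the flag, a 409 would still be a hard error, preserving the current safe default.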
Regards,
Filip
PS. Is there a way to submit a feature request for `VDSUploader.sh` as well? If so, what's the best channel? We've been exploring both tools for loading bulk data to OSDU, and we're seeing some gaps, e.g. in S3 auth (only profile/dedicated-role auth flow available), and no control over the dataset name when loading to SDMS: it generates a random name, which prevents targeting a previously created dataset.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/171
Angular interpolation does not return original values (2023-06-15, Alena Chaikouskaya)

Hi,
I've been trying to understand openvds interpolation behavior and noticed the following.
Interpolation documentation says that
> The sampled value will be exactly equal to the original at the voxel center, regardless of the interpolation method used.
But I observe on my synthetic file [test.vds](/uploads/629b6b4d6052de8a6d2562395cff3721/test.vds) that this doesn't hold. For the voxel middle points, four methods return the same data (as I would expect), but angular interpolation returns something different.
```
#include <OpenVDS/OpenVDS.h>
#include <OpenVDS/VolumeDataLayout.h>
#include <OpenVDS/VolumeDataAccess.h>
#include <iostream>
#include <map>
int main(int argc, char *argv[]) {
/*
| xlines-ilines | 1 | 3 | 5 |
|---------------|--------------------|--------------------|--------------------|
| 10 | 100, 101, 102, 103 | 108, 109, 110, 111 | 116, 117, 118, 119 |
| 11 | 104, 105, 106, 107 | 112, 113, 114, 115 | 120, 121, 122, 123 |
*/
std::string url = "file://test.vds";
std::string connectionString = "";
OpenVDS::Error error;
OpenVDS::VDSHandle handle = OpenVDS::Open(url, connectionString, error);
OpenVDS::VolumeDataAccessManager accessManager = OpenVDS::GetAccessManager(handle);
OpenVDS::VolumeDataLayout const *layout = accessManager.GetVolumeDataLayout();
int sampleCount0 = layout->GetDimensionNumSamples(0);
int traceCount = 1;
// Angular is correct only for (1.5, 0.5)
float inlineValue = 2.5; // choose from 0.5, 1.5 and 2.5
float xlineValue = 1.5; // choose from 0.5 and 1.5
std::vector<float> buffer(traceCount * sampleCount0);
float tracePos[traceCount][6];
for (int trace = 0; trace < traceCount; trace++)
{
tracePos[trace][0] = 0;
tracePos[trace][1] = xlineValue;
tracePos[trace][2] = inlineValue;
tracePos[trace][3] = 0;
tracePos[trace][4] = 0;
tracePos[trace][5] = 0;
}
std::map<std::string, OpenVDS::InterpolationMethod> interpolations{
{"Nearest", OpenVDS::InterpolationMethod::Nearest},
{"Linear", OpenVDS::InterpolationMethod::Linear},
{"Cubic", OpenVDS::InterpolationMethod::Cubic},
{"Triangular", OpenVDS::InterpolationMethod::Triangular},
{"Angular", OpenVDS::InterpolationMethod::Angular}
};
for (const auto& interpolation : interpolations) {
auto request = accessManager.RequestVolumeTraces(
buffer.data(),
buffer.size() * sizeof(float),
OpenVDS::Dimensions_12, // I understand it doesn't matter if it is Dimensions_12 or Dimensions_012
0,
0,
tracePos,
traceCount,
interpolation.second,
0
);
request.get()->WaitForCompletion();
std::cout << "Interpolation " << interpolation.first <<": " ;
for (float f: buffer) {
std::cout << f << ' ';
}
std::cout << "\n" << std::flush;
}
}
```
(my openvds version is some 3.1-ish build, but I do not see anything about changes to angular interpolation in release notes, so I assume it's the same)
Am I doing something wrong?

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/172
KnownChannelNames class (2023-02-23, Morten Ofstad)

There should be a KnownChannelNames class with the names from the documentation (https://osdu.pages.opengroup.org/platform/domain-data-mgmt-services/seismic/open-vds/vds/specification/Metadata.html#named-channels) -- for C++ this is found in GlobalMetadataCommon.h, but it is not available in Python/Java.

https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/173
Updating information stored in a VDS (Are VDS datasets mutable?) (2023-03-27, Alexander Jaust)

I was playing around with OpenVDS to figure out whether and to what extent VDS datasets are mutable. My big question is: what parts of a VDS dataset are mutable? If there are any parts that are mutable, what is the correct way to change these parts?
Either way it has different implications for certain workflows (to me). This concerns metadata as well as channel data stored within the VDS.
1. If a VDS dataset is always immutable, I can be sure that nobody will accidentally change/break a VDS dataset.
2. If a VDS dataset is mutable, I could update some fields if, e.g., `SEGYImport` does not accept certain names/units during ingestion, and/or update data within the VDS, e.g. add a fast slice or an additional channel, or update data within a channel without recreating the VDS dataset from scratch.
I am potentially doing some "stupid" things here, but I am also trying to model a worst-case scenario, like "what is the worst thing somebody can do wrong?".
I found an [older issue](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/28) which mentions the addition of LOD levels to a VDS, but it does not specify whether this would happen in-place or would create a new VDS dataset.
## Observations / Experiments
1. I am able to add additional channel data to an existing channel of a VDS dataset. The additional data seems to hide the initially available channel data.
For testing I created a small Python script: [create_and_change_vds_inplace_small.py](/uploads/72d056826ce7bf455f1b69ddf023fec2/create_and_change_vds_inplace_small.py). It creates a small artificial VDS dataset with one channel and carries out the following steps.
- Create VDS dataset with all values in the channel are set to `1`.
- Close the file handle such that it can be written to disk.
- Open the file and extract a slice, copy the slice data and close the file.
- Open the file and get an AccessManager, write the value `2*old_value` (`2` in this case) to the VDS dataset and close the file.
- Open the file and extract a slice, copy the slice data and close the file.
- Plot the slice data.
When I run the script, the data extracted from the VDS indeed changes. For the first slice I get constant `1` values, and constant `2` values the second time I extract a slice. I am not sure if one can still access the "old" data. The file size increases, so it appears to me as if the old data is still stored in the VDS dataset.
Is this behavior intended? If so, how can I access the "old" data? Would it be possible to actually update the data without increasing the file size?
2. I tried to update the metadata. For that I wrote a [small C++ code](/uploads/013acdfdfe460bef0d5dbec4b517ee98/update_metadata.cpp) as C++ seemed to have (more?) direct access to the `MetadataWriteAccess` object than Python. The code basically opens a specified VDS dataset and replaces the `ImportTimeStamp` values with some unrealistic time stamp.
The code executes, but gives a segmentation fault (`Segmentation fault: 11`). From debugging I concluded that the segmentation fault arises when the VDS dataset is being closed. Calling the `SetMetadataString` function seems to be fine.
Is this behavior intended? I guess the segmentation fault should never happen, but it is not clear to me whether this is an error in updating the VDS dataset or a side effect of illegally writing the metadata.
## Platform
* Apple Arm M1 Max
* MacOS 13.1
* OpenVDS 3.0.3 (compiled from source)https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/174Addition of valid input for SEGYImport2023-02-23T13:23:58ZAlexander JaustAddition of valid input for SEGYImportI wonder what would be the correct conversion of SEG-Y files to VDS using `SEGYImport` when the provided options by `SEGYImport` are not rich enough.
Example: I have an attribute map in a SEG-Y file. However, none of the allowed na...I wonder what would be the correct conversion of SEG-Y files to VDS using `SEGYImport` when the provided options by `SEGYImport` are not rich enough.
Example: I have an attribute map in a SEG-Y file. However, none of the allowed names for the `Attribute` property (`--attribute-name`: Amplitude (default), Attribute, Depth, Probability, Time, Vavg, Vint, or Vrms) fits my needs. The attribute name `Attribute` would be too vague for my use case, and the other allowed names do not fit either.
- Should I import the SEG-Y file to VDS and afterwards change the name? I am not sure if that is possible, cf. #173.
- Could I provide a patch that extends SEGYImport with the units, names, etc. that I need?
- Should I fork and create my own SEGYImport? I would really like to avoid this.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/175Downsampling for LOD levels wrong for first point2023-03-07T10:19:02ZAlexander JaustDownsampling for LOD levels wrong for first pointI have been playing around with LOD level generation on a VDS with synthetic content. My goal is to understand the LOD levels and the downsampling better.
## Observation/Experiment
I wrote a Python script ([test_lod_levels_sine_functio...I have been playing around with LOD level generation on a VDS with synthetic content. My goal is to understand the LOD levels and the downsampling better.
## Observation/Experiment
I wrote a Python script ([test_lod_levels_sine_function.py](/uploads/f7a3eb217cdba359ea77b0b7c96ec0a4/test_lod_levels_sine_function.py)) which generates a sine function (with specified frequency and amplitude etc) that is written to a 3D VDS. I let OpenVDS add 4 LOD levels such that I have 5 levels in total. In the current setup in the script, I have a single brick with 32 samples in each direction on level 0.
In the next step I load the VDS file and extract the data along a line. The data is plotted against the analytical function and I compute some error norms (only printed on screen). See the following plot:
## Observations and questions
From what I understand, the following seems to happen:
- When the data is downsampled, only every second, fourth, etc. sample is kept on each level. This looks somewhat like this:
```text
level Samples
0 0 1 2 3 4 5 6 7 8 ...
1 0 2 4 6 8 ...
2 0 4 8 ...
```
Missing values indicate samples that are not available on the LOD level.
Is this understanding correct?
- At the moment, it looks to me as if downsampling simply removes/ignores the values in between the samples that are kept. At least that is how it appears to me from the plot.
Is this understanding correct or do you do any kind of elaborate downsampling that includes some kind of anti-aliasing?
- From my experiments I expect that for my sine wave I simply get bigger gaps between discrete points the higher I go up in level. I plotted this for all levels in my VDS:
![lod_level_sine_function](/uploads/7e94e7385e12e5b044569d64b1ecc2c0/lod_level_sine_function.png){width=60%}
My assumptions about how the downsampling is done seem to mostly hold. However, for some reason the **first** sample on levels > 0 seems to be shifted (the point at the top left in the plot). The y-value here is close to 1 instead of 0.
If my interpretation of how the x-coordinate is determined is wrong, I would expect that all points show the same offset. However, all other samples on a level of detail seem to match with a sample of level 0.
Am I doing something wrong here when determining the x-coordinate or does the downsampling have a bug here?
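To make my assumption explicit, here is a minimal sketch of the decimation pattern I described above (this encodes my reading of the plot, i.e. that level L keeps every 2**L-th sample of level 0; it is not code taken from the OpenVDS sources):

```python
# Sketch of the assumed decimation: LOD level L keeps every 2**L-th sample
# of level 0. This mirrors my reading of the plot, not OpenVDS' actual filter.
def lod_sample_indices(n_samples, level):
    step = 2 ** level
    return list(range(0, n_samples, step))

print(lod_sample_indices(9, 0))  # [0, 1, 2, 3, 4, 5, 6, 7, 8]
print(lod_sample_indices(9, 1))  # [0, 2, 4, 6, 8]
print(lod_sample_indices(9, 2))  # [0, 4, 8]
```

Under this assumption every surviving sample on a higher level coincides exactly with a level-0 sample, which is why the shifted first point surprises me.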
## Platform
* Apple Arm M1 Max
* MacOS 13.2.1
* OpenVDS 3.0.3 (compiled from source)https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/176VDS with LOD levels unexpectedly large2023-03-22T14:43:59ZAlexander JaustVDS with LOD levels unexpectedly largeI find VDS data sets to increase much more in size when LOD levels are added than expected.
## Observation/Experiment
I created a VDS data sets from a volve data set (`ST0202R08_PS_PSDM_FULL_OFFSET_PP_TIME.MIG_FIN.POST_STACK.3D.JS-01753...I find VDS data sets to increase much more in size when LOD levels are added than expected.
## Observation/Experiment
I created VDS datasets from a Volve dataset (`ST0202R08_PS_PSDM_FULL_OFFSET_PP_TIME.MIG_FIN.POST_STACK.3D.JS-017534.segy`, about 1 GiB) for different LOD levels. This is a 3D post-stack dataset, such that I would expect an increase in file size of about 15% if all LOD levels are created. The major increase should appear for the first few LOD levels.
I observe the following file sizes:
| LOD level | Size in MiB | Relative size |
|-----------|-------------|---------------|
| 0 | 864.04 | 100% |
| 1 | 1248.06 | 144% |
| 2 | 1456.09 | 169% |
| 3 | 1456.09 | 169% |
| 4 | 1454.31 | 168% |
| 5 | 1456.09 | 169% |
| 6 | 1456.09 | 169% |
| 7 | 1456.09 | 169% |
| 8 | 1456.09 | 169% |
| 9 | 1454.88 | 168% |
| 10 | 1456.10 | 169% |
| 11 | 1456.10 | 169% |
| 12 | 1456.11 | 169% |
Now I see that the file size increases by more than 50%. If I scale my estimate, I see that the increase at least flattens out quickly, which agrees with the geometric series. However, I also see a small dip for LOD levels 10 and 11 which I don't exactly understand, but maybe that is due to some collapsed blocks.
I used the default settings, but also tested with LOD level and compression set explicitly with the same results for the file size. The file is stored on a local hard drive and file size is checked with `du`.
I also tested with a larger SEG-Y where the VDS with no LOD levels is 11 GiB. When adding 4 LOD levels the file size goes up to 20 GiB. This is a file size increase of more than **80%**.
## Questions
Is this increase in file size expected? I assumed that the file size increase should follow the [geometric series](https://en.wikipedia.org/wiki/Geometric_series) with `a=1` and `r=(1/2)^d`, which would be `r=1/8` for 3D datasets.
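My ~15% estimate comes from summing that series; a quick sketch of the arithmetic (pure Python, just to make the expectation explicit):

```python
# Expected LOD storage overhead if each level holds (1/2)**dims of the
# previous level's samples: the geometric series r + r**2 + ... + r**levels.
def lod_overhead(dims, levels):
    r = (1 / 2) ** dims
    return sum(r ** k for k in range(1, levels + 1))

# 3D data (r = 1/8): the overhead converges to 1/7, i.e. ~14.3% extra storage.
print(round(lod_overhead(3, 12) * 100, 1))  # 14.3
# 2D data (r = 1/4) would converge to 1/3, i.e. ~33.3%.
print(round(lod_overhead(2, 12) * 100, 1))  # 33.3
```

So under that model a 3D dataset should grow by roughly a seventh with a full LOD pyramid, far from the 50-80% I measured.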
## Platform
* Apple Arm M1 Max
* MacOS 13.2.1
* OpenVDS 3.0.3 (compiled from source)https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/177Memory leak when using openvds-python to access wavelet-compressed vds in linux2023-03-27T06:41:31ZXudong DuanMemory leak when using openvds-python to access wavelet-compressed vds in linuxwhen I use the openvds-python lib to open a vds, access data, and close it, there is a problem of memory leak.
In linux, I open and close a wavelet-compressed vds severial times in one python script, the memory usage will increase rapid...when I use the openvds-python lib to open a vds, access data, and close it, there is a problem of memory leak.
On Linux, if I open and close a wavelet-compressed VDS several times in one Python script, the memory usage increases rapidly until the Python process ends.
On Windows, there is no problem.
On Linux, there is no problem when opening an uncompressed VDS.
## Description
I install OpenVDS in a non-standard installation prefix (I will call it `INSTALLATION_DI...Thank you so much for updating the installation process! The actual compile process worked for me without any problems. :thumbsup:
## Description
I install OpenVDS in a non-standard installation prefix (I will call it `INSTALLATION_DIR`). After the compilation and installation step I get the following error message when using tools like `VDSInfo` or `SEGYImport`:
```
$ VDSInfo --help
dyld[19054]: Library not loaded: @rpath/libuv.1.dylib
Referenced from: <B3C82796-5CB5-3C06-97AE-C111D8D19A9C> /Users/AEJ/software/openvds-3.2.1-install-python/lib/libopenvds.3.2.1.dylib
Reason: tried: '/Users/AEJ/software/openvds-3.2.1-install-python/lib/libuv.1.dylib' (no such file), '$ORIGIN/libuv.1.dylib' (no such file), '$ORIGIN/libuv.
1.dylib' (no such file), '$ORIGIN/../lib64/libuv.1.dylib' (no such file), '$ORIGIN/../lib64/libuv.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS@rpath/libuv.1.dylib' (no such file), '$ORIGIN/libuv.1.dylib' (no such file), '$ORIGIN/libuv.1.dylib' (no such file), '$ORIGIN/../lib64/libuv.1.dylib' (no
such file), '$ORIGIN/../lib64/libuv.1.dylib' (no such file), '/usr/local/lib/libuv.1.dylib' (no such file), '/usr/lib/libuv.1.dylib' (no such file, not in dyld cache)
Abort trap: 6
```
## Manual fix
I could fix the problem by:
1. Copy `libuv.1.0.0.dylib` from `${BUILD_DIR}/libuv_1.44.2_install/Release/lib/libuv.1.0.0.dylib` to `${INSTALLATION_DIR}/lib`.
2. Create a symlink within the `${INSTALLATION_DIR}/lib/` directory such that `libuv.1.dylib -> libuv.1.0.0.dylib`.
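The two steps can also be scripted; here is a sketch that demonstrates only the copy + symlink layout, with throwaway temp directories standing in for `${BUILD_DIR}` and `${INSTALLATION_DIR}` and an empty file standing in for the real library:

```python
# Sketch of the manual fix: copy libuv.1.0.0.dylib into the install prefix
# and create the libuv.1.dylib symlink that dyld is trying to resolve.
# Temp directories stand in for ${BUILD_DIR} and ${INSTALLATION_DIR}.
import os
import shutil
import tempfile

build_dir = tempfile.mkdtemp()
install_dir = tempfile.mkdtemp()
src_lib_dir = os.path.join(build_dir, "libuv_1.44.2_install", "Release", "lib")
dst_lib_dir = os.path.join(install_dir, "lib")
os.makedirs(src_lib_dir)
os.makedirs(dst_lib_dir, exist_ok=True)
# Dummy file standing in for the library produced by the superbuild.
open(os.path.join(src_lib_dir, "libuv.1.0.0.dylib"), "wb").close()

# Step 1: copy the versioned library into ${INSTALLATION_DIR}/lib.
shutil.copy2(os.path.join(src_lib_dir, "libuv.1.0.0.dylib"), dst_lib_dir)
# Step 2: create the soname symlink libuv.1.dylib -> libuv.1.0.0.dylib.
os.symlink("libuv.1.0.0.dylib", os.path.join(dst_lib_dir, "libuv.1.dylib"))

print(sorted(os.listdir(dst_lib_dir)))  # ['libuv.1.0.0.dylib', 'libuv.1.dylib']
```

A proper fix would of course be for the install target to place (or relink) libuv itself.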
Now the error is gone:
```
$ VDSInfo --version
VDSInfo - OpenVDS 3.2.1 - Revision: 6922d9151901e3d727b31c776559dd033add30fd
```
## Compilation command
```
cmake -S . \
-B build \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_SHARED_LIBS=ON \
-DBUILD_JAVA=OFF \
-DBUILD_PYTHON=ON \
-DBUILD_EXAMPLES=ON \
-DBUILD_TESTS=OFF \
-DBUILD_DOCS=OFF \
-DDISABLE_AWS_IOMANAGER=ON \
-DDISABLE_AZURESDKFORCPP_IOMANAGER=ON \
-DDISABLE_GCP_IOMANAGER=ON \
-DDISABLE_DMS_IOMANAGER=ON \
-DDISABLE_STRICT_WARNINGS=ON \
-DCMAKE_FIND_FRAMEWORK=LAST \
-DAUTO_ADJUST_UUID=OFF \
-DBUILD_CURL=OFF \
-DCMAKE_MACOSX_RPATH=ON \
-DCMAKE_INSTALL_PREFIX="${INSTALLATION_DIR}"
```
## System
* Arm64 MacOS 13.2.1
* OpenVDS 3.2.1
* clang 14.0.0
```
$ clang --version
Apple clang version 14.0.0 (clang-1400.0.29.202)
Target: arm64-apple-darwin22.3.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
```
* cmake 3.26.0 (via Homebrew)https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/180openvds fails to read from s3 bucket2023-03-27T06:24:30ZGeorge Zavitsanosopenvds fails to read from s3 bucketI am using openvds-3.2.1 python package from PyPI. I am trying to connect and read a .vds format file located in a AWS S3 bucket, in which I have full access.
But the code below fails:
uri = "s3://bucket-name/path-to-vds-file/filename...I am using openvds-3.2.1 python package from PyPI. I am trying to connect and read a .vds format file located in a AWS S3 bucket, in which I have full access.
But the code below fails:
uri = "s3://bucket-name/path-to-vds-file/filename.vds"
openvds.open(uri)
with the error:
RuntimeError: Error on downloading VolumeDataLayout object: Http error respons: 404 -> https://bucket-name.s3.eu-central-1.amazonaws.com/path-to-vds-file/filename.vds/VolumeDataLayout
Reading vds files is working smoothly in my local environment.
Any help would be really appreciated. Thanks.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/181Create 2d-lod for 3D post-stack data2023-03-31T12:48:06ZXudong DuanCreate 2d-lod for 3D post-stack dataAccording to the OpenVDS documentation, there is a channel named "Fast Slices". I use the SEGYImport tool with the "--create-2d-lods" option to translate a 3D post-stack SEGY. When I use HueList to show the information of the generated VDS file, th...According to the OpenVDS documentation, there is a channel named "Fast Slices". I use the SEGYImport tool with the "--create-2d-lods" option to translate a 3D post-stack SEGY. When I use HueList to show the information of the generated VDS file, there is no data related to 2d-lod.
I want to ask how to generate 2d lods for 3d post-stack data.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/182The sdapi proxy is neither forward nor backward compatible.2024-01-30T12:14:42ZPaal KvammeThe sdapi proxy is neither forward nor backward compatible.I have run tests on version 3.2.1 using both the old "sdapi" I/O manager and the new "proxy" I/O manager, using VDSCopy. I have built OpenVDS from sources. Observations:
- A file on the cloud written using the "sdapi" manager cannot be ...I have run tests on version 3.2.1 using both the old "sdapi" I/O manager and the new "proxy" I/O manager, using VDSCopy. I have built OpenVDS from sources. Observations:
- A file on the cloud written using the "sdapi" manager cannot be read with the "proxy" manager.
- A file on the cloud written using the "proxy" manager cannot be read with the "sdapi" manager.
- Additionally, the file written using the "proxy" manager is seen as corrupt by other DMS applications. The list of block names is empty; somehow the metadata isn't stored correctly.
https://sphinx-design.readthedocs.io/en/rtd-theme/tabs.html
Replace the markdown module with MyST:
https://myst-parser.readthedocs.io/en/v0.15.1/sphinx/...Include the sphinx-design module so we can have multi-language example code in tabs:
https://sphinx-design.readthedocs.io/en/rtd-theme/tabs.html
Replace the markdown module with MyST:
https://myst-parser.readthedocs.io/en/v0.15.1/sphinx/intro.htmlMorten OfstadMorten Ofstadhttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/184Question: How to create VDS with an OpenVDS::VDSError in constructor? (Python...2023-05-16T14:07:50ZFilip BrzękQuestion: How to create VDS with an OpenVDS::VDSError in constructor? (Python API)Dear Devs,
I was trying to use `openvds.create` with an error output parameter, however, seems Pybind11 exports are not happy with accepting `<class 'openvds.core.VDSError'>` for argument typed as `error: OpenVDS::VDSError`.
The snippe...Dear Devs,
I was trying to use `openvds.create` with an error output parameter; however, it seems the Pybind11 exports are not happy accepting `<class 'openvds.core.VDSError'>` for an argument typed as `error: OpenVDS::VDSError`.
The snippet I'm playing with is something like this:
```Python
import openvds
self.forward_error = openvds.VDSError
self.vds_handle: openvds.core.VDS = openvds.create(
vds_url,
connection_string,
layout_descriptor,
axis_descriptors,
channel_descriptors,
metadata_container,
self.forward_error,
)
```
The code works correctly without passing `self.forward_error`. If I try to fill the error output parameter, I get:
```log
TypeError: create(): incompatible function arguments. The following argument types are supported:
E 1. (url: str, connectionString: str, layoutDescriptor: OpenVDS::VolumeDataLayoutDescriptor, axisDescriptors: List[OpenVDS::VolumeDataAxisDescriptor], channelDescriptors: List[OpenVDS::VolumeDataChannelDescriptor], metadata: OpenVDS::MetadataReadAccess, compressionMethod: OpenVDS::CompressionMethod, compressionTolerance: float, error: OpenVDS::VDSError) -> openvds.core.VDS
E 2. (url: str, connectionString: str, layoutDescriptor: OpenVDS::VolumeDataLayoutDescriptor, axisDescriptors: List[OpenVDS::VolumeDataAxisDescriptor], channelDescriptors: List[OpenVDS::VolumeDataChannelDescriptor], metadata: OpenVDS::MetadataReadAccess, compressionMethod: OpenVDS::CompressionMethod, compressionTolerance: float) -> openvds.core.VDS
E 3. (url: str, connectionString: str, layoutDescriptor: OpenVDS::VolumeDataLayoutDescriptor, axisDescriptors: List[OpenVDS::VolumeDataAxisDescriptor], channelDescriptors: List[OpenVDS::VolumeDataChannelDescriptor], metadata: OpenVDS::MetadataReadAccess, error: OpenVDS::VDSError) -> openvds.core.VDS
E 4. (url: str, connectionString: str, layoutDescriptor: OpenVDS::VolumeDataLayoutDescriptor, axisDescriptors: List[OpenVDS::VolumeDataAxisDescriptor], channelDescriptors: List[OpenVDS::VolumeDataChannelDescriptor], metadata: OpenVDS::MetadataReadAccess) -> openvds.core.VDS
...
E Invoked with: '/tmp/pytest-of-filip/pytest-32/<REDACTED>/subset_0.vds', '', <openvds.core.VolumeDataLayoutDescriptor object at 0x7f80f269c0b0>, [<openvds.core.VolumeDataAxisDescriptor object at 0x7f80f269c830>, <openvds.core.VolumeDataAxisDescriptor object at 0x7f80f269c970>, <openvds.core.VolumeDataAxisDescriptor object at 0x7f80f269cc70>], [<openvds.core.VolumeDataChannelDescriptor object at 0x7f80f269c870>], <openvds.core.MetadataContainer object at 0x7f80f2683fb0>, <class 'openvds.core.VDSError'>
```
From the log message, the constructor that works is number 4, and the one I'm trying to use is number 3. Am I doing something dumb and not seeing it?
Context: We're trying to debug a rare issue where writing data pages fails without an apparent exception in Python, causing a zombie Python process in a container that just hangs. My thought was that we might be able to use the error-parameter constructor to get some logs, as the issue has occurred a handful of times so far, and we have neither a root cause nor any logs. If you have any tips on how to grab any log that might have been created (we're writing to the S3 bucket behind the OSDU SDMS API), I would appreciate it.
Thanks,
Filiphttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/185SEGYImport stucks when importing seg-y with "Wavelet" compression method.2023-07-17T11:53:08ZTimofey AbramovSEGYImport stucks when importing seg-y with "Wavelet" compression method.We have a segy that is big-endian on headers data, but little endian in traces data. So SEGYImport cannot process traces data correctly. But it is not the issue we are reporting (and we [know about it](https://community.opengroup.org/osd...We have a segy that is big-endian on headers data, but little endian in traces data. So SEGYImport cannot process traces data correctly. But it is not the issue we are reporting (and we [know about it](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/162#note_172762)). The main issue is the SEGYImport utility getting stuck with the "Wavelet" compression method on this file:
```
C:\Users\t.abramov_rogii\Downloads\openvds+-3.2.5\bin\msvc_141>SEGYImport.exe --input "D:/Format8_fixed_cropped.sgy" --vdsfile="D:/out" --compression-method=Wavelet
SEGYImport - OpenVDS+ 3.2.5 - Revision: c2525fbab73bd4977c1b6092fe9785bd612a479f
Importing into: D:/out
87.50 % Done.The value range in the wavelet compression was NaN or close to Inf. Clamping value range to +-10^30.
```
And it gets stuck at this point in the progress.
SEGY: [Format8_fixed_cropped.sgy](/uploads/655e487aaef0133d347f51d5e61ba6ce/Format8_fixed_cropped.sgy)https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/186Error copying VDS (VDSCopy) generated by Headwave to S32023-05-24T07:27:22ZRodrigo EirasError copying VDS (VDSCopy) generated by Headwave to S3I am trying to upload a VDS to a bucket into S3 and the error popped out into the console.
Screenshot attached.
I converted the SEGY to VDS using OpenVDS+ and I could upload it to the S3 bucket.
I converted the same SEGY to VDS using hea...I am trying to upload a VDS to a bucket into S3 and the error popped out into the console.
Screenshot attached.
I converted the SEGY to VDS using OpenVDS+ and I could upload it to the S3 bucket.
I converted the same SEGY to VDS using Headwave and it throws the error.
![error](/uploads/c9f8ddb5bf08994304e0733a1a4c0726/error.jpg)
This is the command that I used:
.\VDSCopy.exe "C:\Users\rodri\IesBrazil\IesBrazil - PROJETOS\BLUWARE\VDS EXAMPLE\Sismica3D\Exmouth_tol1_w.vds" -d "Region=sa-east-1; AccessKeyId=xxxx; SecretKey=xxxx" s3://demo-vds-paleoscan/1/Exmouth_tol1_whttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/187[question] SDMS v4 support2023-05-24T16:10:07ZFilip Brzęk[question] SDMS v4 supportHi,
might I ask, what's the support posture for SDMS v4 endpoints, that AFAIK are available for openvds from M17 [link to the swagger](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/sei...Hi,
may I ask what the support status is for the SDMS v4 endpoints that, AFAIK, are available for OpenVDS from M17 ([link to the swagger](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/v0.17.2/app/sdms-v4/docs/openapi.yaml?ref_type=tags))?
Regards,
Filiphttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/188Compressed and Uncompressed size of a VDS2024-02-26T20:09:18ZJørgen Lindjorgen.lind@3lc.aiCompressed and Uncompressed size of a VDSIt would be nice to be able to get the compressed and uncompressed size of a VDS. It would also be very handy if this was exposed in VDSInfoIt would be nice to be able to get the compressed and uncompressed size of a VDS. It would also be very handy if this was exposed in VDSInfohttps://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/189Error uploading VDS (VDSCopy) into a OSDU subproject2023-08-29T14:41:36ZJuliana Fernandesjuliana.fernandes@iesbrazil.com.brError uploading VDS (VDSCopy) into a OSDU subprojectHello,
I'm trying to use OpenVDS+ 3.2.6 to upload a VDS file into an OSDU subproject on AWS.
The upload goes fine until 84.21% and them stop the progress. I left the process run more than 12 hours without any updates. The VDS file h...Hello,
I'm trying to use OpenVDS+ 3.2.6 to upload a VDS file into an OSDU subproject on AWS.
The upload goes fine until 84.21% and then the progress stops. I left the process running for more than 12 hours without any updates. The VDS file is around 5.7 GB. I can get the dataset info in Postman, but the path Postman indicates does not exist in the console.
Regards,
Juliana Fernandes
![VDSCopy](/uploads/4b8652d47846bc60428d4d13500de65a/VDSCopy.png)
![dataset_info_postman](/uploads/875e9ae95b1839bc6e5639054e6f5cdb/dataset_info_postman.png)
![aws_console_subproject](/uploads/69272e0e2863e94b1e765d2d130c36c4/aws_console_subproject.png)https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/190Add OSDU manifest generation capability2023-06-02T14:03:18ZMorten OfstadAdd OSDU manifest generation capabilityVDSInfo (or maybe a new vds utility) should be able to generate OSDU compliant JSON manifest for ingesting VDS. This was suggested by Juliana Fernandes (@fernandes_jfa) and would make life a lot easier when interacting with the OSDU inge...VDSInfo (or maybe a new vds utility) should be able to generate OSDU compliant JSON manifest for ingesting VDS. This was suggested by Juliana Fernandes (@fernandes_jfa) and would make life a lot easier when interacting with the OSDU ingestion service.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/191Getting histogram/statistics from OpenVDS data2023-06-22T14:11:50ZQiang FuGetting histogram/statistics from OpenVDS dataDoes OpenVDS data have embedded metadata for histogram/Statistics?
Going through all bricks to calculate the histogram or statistics could be quite expensive.Does OpenVDS data have embedded metadata for histogram/Statistics?
Going through all bricks to calculate the histogram or statistics could be quite expensive.https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/192Samples order in the SliceRequestTraces does not depend on axis direction.2023-06-22T12:15:33ZTimofey AbramovSamples order in the SliceRequestTraces does not depend on axis direction.We had used the "VolumeDataAccessManager::SliceRequestSamples" API for making slices this way:
1) Obtaining number of samples for each axis via `axisDescriptors[i].GetNumSamples()`;
2) Obtaining min-max world coordinates of boundary poin...We had used the "VolumeDataAccessManager::SliceRequestSamples" api for making slices this way:
1) Obtaining number of samples for each axis via `axisDescriptors[i].GetNumSamples()`;
2) Obtaining min-max world coordinates of boundary points, i.e. at `0` and `GetNumSamples() - 1`;
3) Creating grid of 3D slice world coordinates within the min and max;
4) Translating them to Voxel indexes via `WorldCoordinatesToVoxelIndexFloat`;
5) Applying `RequestVolumeSamples` function.
Now we have migrated to `SliceRequestTraces` along the `traceDimension = 0`, i.e. the vertical axis; let us name it the "Z" axis. And we noticed in the `RequestVolumeSamples` output that a point with the min world "Z" coordinate (for example, `-1000`) corresponds to the bottom (deepest) point in the source SEG-Y, and the max (for example, `0`) corresponds to the top, i.e. the "Z" axis is directed from depth to surface. But in the `SliceRequestTraces` output the first point in the trace corresponds to the top (surface) sample and the last one to the bottom (deepest) one, as in the source SEG-Y. So we have questions:
1) Is the sample order the same as in the source SEG-Y? If yes, are there SEG-Ys that store traces in inverse order (deepest point first)?
2) Is the number of samples the same as in the source SEG-Y, i.e. in all traces obtained via `SliceRequestTraces`?
3) Is the number of samples along Z in the `RequestVolumeTraces` result always equal to `axisDescriptors[0].GetNumSamples()`?
Some of our tests failed after upgrade to openvds master.
Seems to be caused by commit 560aa68029de532bc2a2b1d1c3d28229fcb9ba7a
Tests which started to fail request data from vds created from SEGY in formats [format3.segy](/uploads/...Hi!
Some of our tests failed after upgrade to openvds master.
Seems to be caused by commit 560aa68029de532bc2a2b1d1c3d28229fcb9ba7a
Tests which started to fail request data from vds created from SEGY in formats [format3.segy](/uploads/9e27a6b840c695d308517d1490bb2574/format3.segy), [format8.segy](/uploads/a8a04f7d69d85464db78c4c0f6fb9f4a/format8.segy), [format11.segy](/uploads/71dc95ca89c5c0e3d7869979d12c3f19/format11.segy), [format16.segy](/uploads/d0673ee9e49f08ed0de338a8bc3e46eb/format16.segy) (signed short 2 byte, signed char 1 byte, unsigned short 2 byte, unsigned char 1 byte) by using RequestVolumeTraces.
In our previous experience, this method returned data in f4 for all supported formats. The requested buffer is in floats, after all.
Now the following code, [format.cpp](/uploads/38bf2d7b5adecf75b46bcb5f5e1f881b/format.cpp), produces results like this:
| | old openvds | new openvds |
|--------------|-----------------------------------------|-----------------------------------------------------------------------|
| format3.vds | 100 101 102 103 104 105 106 107 108 109 | -32768 -32768 -32768 -32768 -32768 -32768 -32768 -32768 -32768 -32768 |
| format8.vds | 100 101 102 103 104 105 106 107 108 109 | -128 -128 -128 -128 -128 -128 -128 -128 -128 -128 |
| format11.vds | 100 101 102 103 104 105 106 107 108 109 | 0 0 0 0 0 0 0 0 0 0 |
| format16.vds | 100 101 102 103 104 105 106 107 108 109 | 0 0 0 0 0 0 0 0 0 0 |
Did we, as usual, make some incorrect assumption, or is this a breaking change?
[format16.vds](/uploads/c88014b946188f95c4511e7c517d48dd/format16.vds)
[format11.vds](/uploads/5f012a4590218ba68e27648ed06e30b6/format11.vds)
[format8.vds](/uploads/dc1df019ae30dd4863c67bc06f964b5f/format8.vds)
[format3.vds](/uploads/2a23f1b8f8a0155e9f8e1d69f72c7ab0/format3.vds)https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/194allow limiting OpenMP/ThreadPool num of threads at runtime (via omp_num_threads)2023-07-12T13:55:36ZFilip Brzękallow limiting OpenMP/ThreadPool num of threads at runtime (via omp_num_threads)Hello,
It seems that setting `OMP_NUM_THREADS=<int>` has no effect, due to constants and the limiting logic in `src/OpenVDS/VDS/WaveletOpenMP.h`
and
```c++
#define WAVELET_OPENMP_SSE_THREAD_COUNT 4
#define WAVELET_OPENMP_MEM...Hello,
It seems that setting `OMP_NUM_THREADS=<int>` has no effect, due to constants and the limiting logic in `src/OpenVDS/VDS/WaveletOpenMP.h`
and
```c++
#define WAVELET_OPENMP_SSE_THREAD_COUNT 4
#define WAVELET_OPENMP_MEMORY_THREAD_COUNT 2
namespace Wavelet {
inline int Wavelet_GetEffectiveOpenMPThreadCount(int wantedThreadCount)
{
return std::max(1, std::min(omp_get_num_procs() - 2, wantedThreadCount));
}
}
```
and how that's used here
```log
src/OpenVDS/VDS/WaveletAdaptiveLLDecompress.cpp:537: const int threadCount = Wavelet_GetEffectiveOpenMPThreadCount(WAVELET_OPENMP_SSE_THREAD_COUNT);
src/OpenVDS/VDS/WaveletAdaptiveLLDecompress.cpp:666: const int threadCount = Wavelet_GetEffectiveOpenMPThreadCount(WAVELET_OPENMP_SSE_THREAD_COUNT);
src/OpenVDS/VDS/WaveletDecompress.cpp:636: WaveletTransform_InverseTransform_SSE(Wavelet_GetEffectiveOpenMPThreadCount(WAVELET_OPENMP_SSE_THREAD_COUNT), tempBuffer.data(), tempBufferSize, source, m_transformIterations, m_bandSize, m_transformMask, m_allocatedSizeX, m_allocatedSizeXY, m_integerInfo);
```
which always derives the limit from `omp_get_num_procs`. This is not ideal when the underlying machine is shared between workloads: threads are spawned based on either `WAVELET_OPENMP_SSE_THREAD_COUNT` (4) or `num_procs - 2`, whereas we would like to control the maximum via `OMP_NUM_THREADS`.
Is this something that can be improved? Or is there something in the WaveletAdaptive design that requires at least 4 compute threads?
Context: we recently observed ~200+ threads from VDSCopy on a 100+ core machine that is shared between workloads, causing other problems and unpredictable bursts of load whenever VDSCopy is invoked.
EDIT: it seems the rest of the threads might be coming from the thread pool; can we have a runtime limit for this as well?
```log
src/OpenVDS/IO/IOManagerInMemory.cpp:11: , m_threadPool(std::thread::hardware_concurrency())
src/OpenVDS/VDS/VolumeDataRequestProcessor.cpp:1721: , m_threadPool(std::thread::hardware_concurrency())
```
I can submit a PR if you like.
Thanks,
Filip

[Issue 195: Missing const in Sample-functions in VolumeDataAxisDescriptor](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/195) (2023-07-04, Alena Chaikouskaya)

The functions

```
SampleIndexToCoordinate
CoordinateToSampleIndex
CoordinateToSamplePosition
```
functions in [OpenVDS::VolumeDataAxisDescriptor](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/blob/master/src/OpenVDS/OpenVDS/VolumeDataAxisDescriptor.h#L113) seem to not modify anything, yet they are not marked as const.
Not being able to use const objects with these functions is inconvenient.
Is this an oversight or is it in theory possible for these operations to change something?https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/196Wavelet compression in Python2023-08-29T14:41:08ZVasilii SinkevichWavelet compression in PythonHi,
I am investigating VDS capabilities and trying to convert numpy array into VDS using Wavelet compression.
I am using the example provided along the source code (npy_to_vds.py) with minimal changes
Everything works fine with no com...Hi,
I am investigating VDS capabilities and trying to convert a numpy array into VDS using Wavelet compression.
I am using the example provided with the source code (npy_to_vds.py), with minimal changes.
Everything works fine with no compression, or with Zip or RLE compression: I can write and read the data back. But if I try using Wavelet, like this:
```python
compressionMethod = openvds.CompressionMethod(1)
```
the code execution gets stuck at committing changes
```python
accessor.commit()
```
regardless of the compression tolerance value. It just hangs there for ages.
I am wondering whether the Python package is actually an OpenVDS+ binding; maybe it is plain OpenVDS (without the plus), which does not support wavelet compression?
If it *is* OpenVDS+, could you please provide a code sample (or maybe edit npy_to_vds.py) that saves the VDS format with wavelet compression?
EDIT: I am working on Debian 9, python 3.9, openvds 3.2.6
Thanks,
Vasilii

[Issue 197: Deadlock in VolumeDataRequestProcessor::WaitForCompletion after cancel loading one of LODs](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/197) (2023-08-25, Anatoly Yanchevsky)

We create several LODs in an OpenVDS container. When our viewer displays data, it sometimes needs to cancel loading one of the LODs.
This produces a deadlock in the open-vds library.
Two threads that are in the deadlock: [stack_1.txt](/uploads/25ff1d9f89d453a9b1a11eec99dcb5ab/stack_1.txt) and [stack_2.txt](/uploads/8dfb70257b51f09f3f6c8df050abdc5b/stack_2.txt)
To fix it, we made the following changes: [VolumeDataRequestProcessor.patch](/uploads/9ddab5fc678d95834df15a07fc5874d9/VolumeDataRequestProcessor.patch)
The problem occurred on Linux with open-vds 3.0.3, but we found that in the latest version the function VolumeDataRequestProcessor::WaitForCompletion has not changed.

[Issue 198: Failed to create 3D Prestack container on OSDU](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/198) (2023-08-29, Anatoly Yanchevsky)

I am trying to create a 3D Prestack volume with 4 axes:
Sample/Trace(offset)/Crossline/Inline
The error occurs on the call to `OpenVDS::Create`.
Message: "DimensionGroup 41 is not a valid dimension."
Stack: [call_stack.txt](/uploads/fb0ec722ccda7ec907d357aa4b03a211/call_stack.txt)
The same volume can be created on local disk.
Output from VDSInfo: [VDSInfo_axis.txt](/uploads/1ecb09ef3ce874ef18686d3397a45a61/VDSInfo_axis.txt) and [VDSInfo_channels.txt](/uploads/bd3daf2ce80b5022ecab479f329109d7/VDSInfo_channels.txt)
Probably I am doing something wrong, but I do not understand what exactly.

[Issue 199: SEGYImport DataProvider for DMS needs to support chunked datasets](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/199) (2024-02-26, Morten Ofstad)

SEG-Y datasets in DMS uploaded with sdutil will have a default chunk size of 32MB; the DataProvider class needs to support this in order to successfully import the data. See this issue for details:
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/585#note_246359
Deepa Kumari

[Issue 200: Any hint on generate fast slices?](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/200) (2023-08-30, Qiang Fu)

I would like to read the data inline by inline, crossline by crossline. I saw the documentation say this could be done with fast slices; any hint on how to handle it? Does SEGYImport support it?

[Issue 201: SEG-Y header field definition at the last byte location doesn't work](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/201) (2023-09-06, Morten Ofstad)
If the crossline in the last header word, you can define a JSON file as follows:
```
{
"InlineNumber" : [ 233, "FourByte"],
"CrosslineNumber"...SEGYImport does not allow reading the last FourByte header word from the SEGY header.
If the crossline number is in the last header word, you can define a JSON file as follows:
```
{
"InlineNumber" : [ 233, "FourByte"],
"CrosslineNumber" : [ 237, "FourByte"]
}
```
This gives an ‘illegal field definition’ error because the code checks that 237 + 4 is no more than 240, which is off by one: a FourByte field starting at byte 237 occupies bytes 237–240 and still fits within the 240-byte trace header.