# Seismic issues
https://community.opengroup.org/groups/osdu/platform/domain-data-mgmt-services/seismic/-/issues

## Segmentation fault on VDSCopy
[open-vds#123](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/123) · Michał Murawski · 2023-01-25

I was trying to copy data from the local directory to the seismic store using the SD protocol. I was using OpenVDS version 2.3.7.
I executed the following command:
```
VDSCopy ./data.vds sd://osdu/testproject1/test-27 -d "SdAuthorityUrl=https://****/seismic-store/v3;SdApiKey=xxx;AuthTokenUrl=https://*****/token.oauth2;SdToken=***;ClientId=***;ClientSecret=***;RefreshToken=***;Scopes=offline_access;LogLevel=Trace" --tolerance 1 --compression-method None
```
In the end, when the counter reached 100%, it resulted in the following error:
```
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
Segmentation fault
```
The same command works fine with 2.2.0.

## VDSCopy hanging when uploading to Seismic DDMS
[open-vds#212](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/212) · Vinicius Vicente Silva Rosa · 2023-10-23

I am attempting to upload a local VDS file (1.5 TB) to an SD path, and after approximately an hour there is no visible progress in the file upload, creating the impression that the process has stalled. No error messages are displayed. I suspect it may be related to the token refresh.
We are using the command line below:
OSDU/ADME M16
Lib: VDSCopy (OpenVDS+ 3.3.0) installed on Linux
```bash
VDSCopy -a 01 -a 02 -a 12 --tolerance=1.0 --compression-method=Wavelet \
  -d 'sdAuthorityUrl=https://{HOST}.energy.azure.com/seistore-svc/api/v3;authTokenUrl=https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token/;client_id={APP_ID};client_secret={APP_SECRET};scopes={APP_ID}/.default;' \
  '/local_disk0/vds/FILE.vds' 'sd://{TENANT}/{SUBPROJECT}/dataset_name.vds'
```
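Given the token-refresh suspicion, one quick check (an editorial aside, not part of the original report) is to decode the access token's `exp` claim and compare its remaining lifetime with the upload duration. A minimal sketch in Python:

```python
import base64
import json
import time

def token_expiry(jwt_token: str) -> float:
    """Return the 'exp' claim of a JWT access token (no signature verification)."""
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload["exp"]

# If the upload runs longer than this and no refresh happens, a silent stall
# like the one described above is plausible:
# remaining_seconds = token_expiry(access_token) - time.time()
```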
The intention of this command is to authenticate using the client ID and client secret.
The upload completes successfully when the file is processed within an hour or less.

## SDMS commit "feat: GONRG-6259: Add osdu google" causes import error when running
[seismic-store-service#100](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/issues/100) · Rashaad Gray · 2023-07-12

While testing a problem, at run time the service gets confused because of the "Auth" import that was included in your commit [feat: GONRG-6259: Add osdu google](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/commit/43ea83caa7ff307e6014022244dfbaaa1734d1b2).
The file in question: [gc/index.ts](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/43ea83caa7ff307e6014022244dfbaaa1734d1b2/app/sdms/src/cloud/providers/gc/index.ts)
When our [Dataset Parser](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/master/app/sdms/src/services/dataset/parser.ts#L52) uses the Auth.isImpersonationToken() method, it resolves to your file instead of its proper location, the [Auth.ts](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-service/-/blob/master/app/sdms/src/auth/auth.ts) class.
This is causing an issue: while using Azure it goes to your method, which returns false for every token it checks, even actual impersonation tokens.
Because the token is NOT properly identified as an impersonation token, incorrect information is passed into a new dataset's metadata.
Please take some time to remedy this, as it affects more than just your added files.

*Assignee: Yan Sushchynski (EPAM)*

## Refresh Token Flow failing with SDapi 3.16
[open-vds#135](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/135) · Filip Brzęk · 2023-01-24
Hi,
during our testing we've observed that the same connection string to the SD store works with openvds+ versions linked against sdapi 3.14, but does not work with sdapi 3.16 (the default for the ~2.3.0 and ~2.4.0 openvds+ versions). The traceback is somewhat cryptic but appears related to the response type.
```
'sd_authority_url=<<REDACTED>>/api/seismic-store/v3;sd_api_key=xxx;auth_token_url=<<REDACTED>>/token.oauth2;sdtoken=<<REDACTED>>;client_id=<<REDACTED>>;client_secret=<<REDACTED>>;refresh_token=<<REDACTED>>;scopes=offline_access'
ERROR:<<REDACTED>>:sdapi 3.16.0 - CallbackAuthProvider::getServiceAuthTokenImpl: Failed converting text to json format
text:
null
Traceback (most recent call last):
File "<<REDACTED>>", line 211, in _grab_vds_base_volume
return openvds.open(input, options)
RuntimeError: sdapi 3.16.0 - CallbackAuthProvider::getServiceAuthTokenImpl: Failed converting text to json format
text:
null
```
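(Editorial aside: one way to narrow this down is to call the refresh grant directly and inspect the raw body that sdapi then tries to parse as JSON. The form fields below follow the standard OAuth2 refresh flow and mirror the connection string; they are assumptions, not sdapi internals.)

```python
import requests

# POST the OAuth2 refresh grant and dump the raw response body; sdapi failing
# with "text: null" suggests the body it received was literally null.
resp = requests.post(
    "https://<<REDACTED>>/token.oauth2",
    data={
        "grant_type": "refresh_token",
        "refresh_token": "<<REDACTED>>",
        "client_id": "<<REDACTED>>",
        "client_secret": "<<REDACTED>>",
        "scope": "offline_access",
    },
)
print(resp.status_code, repr(resp.text))
```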
As said, the exact same connection string works with sdapi 3.14, so something breaking must have happened between 3.14 and 3.16; but I'm not skilled enough to dissect it further.
Seismic store details:
* OSDU M11 release,
* AWS flavor,
* Custom identity provider, OAuth 2.0 compatible (not the default cognito-idp)
Let me know what the desired next steps are; both platform access and the accessed data are restricted, so I don't think I can provide more details in public.
Best,
Filip

## openvds+ linked sdapi version vs release tag
[open-vds#132](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/132) · Filip Brzęk · 2023-01-11

I would like to understand what versions of seismic-store/sdapi are used to build the openvds+ binaries, and if possible the reasoning behind it.
From a quick glance at `CMake/Fetch3rdPartyInBuild.cmake` across 2.X openvds versions/tags:
- 2.0.X - dms - 1e933303
- 2.1.X - dms - 98d59b27b5
- 2.2.X - dms - 98d59b27b5
- 2.3.X - dms - 3633f2030
- 2.4.X - dms - 3633f2030
- `release/0.14` - dms - 98d59b27b5
- `release/0.15` - dms - 3633f2030
whereas the sdapi release tags point at the following commit SHAs:
- `release/0.14` - d96f1e9b9806486e523ac4d9ea74a124af7ee68d
- `release/0.15` - 04d68a061c3311c041d0ace4c222880032172065
it seems the openVDS version tagged `release/0.14` is missing 35 commits (`git rev-list d96f1e9b9 ^98d59b27b5 --pretty=oneline | wc -l`) relative to the same `release/0.14` tag on sdapi, and the openVDS version tagged `release/0.15` is missing 5 commits relative to the corresponding `release/0.15` (`git rev-list 04d68a061c3311c041d0ace4c222880032172065 ^3633f2030 --pretty=oneline`).
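(For reference, the dms commit pinned by any openvds tag can be read straight out of the fetch script; a sketch, assuming a local clone of open-vds:)

```bash
# Print the seismic-store (dms) commit pinned by a given OpenVDS tag,
# as recorded in CMake/Fetch3rdPartyInBuild.cmake
git -C open-vds show release/0.15:CMake/Fetch3rdPartyInBuild.cmake | grep -i -A 2 dms
```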
I understand that those might not be "feature commits" and may be irrelevant to the functionality, but can we have some clarity on which seismic-store/sdapi version is supported in a given openVDS release?
Lastly, what is the desired flow for reporting issues that emerge at the sdapi level when using the openVDS SDK with a given OSDU DP release? Should tickets be created for openVDS, or directly in the [sd-api repository](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-cpp-lib) with a reference to the openVDS version used?

## Issue with two parallel processes writing to same VDS source
[open-vds#109](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/109) · Michał Murawski · 2022-02-07
I was trying to implement a workflow in which multiple parallel processes write to separate chunks of the same VDS source. Unfortunately, I encountered some issues when reading the modified VDS source.
Here is the code example that I am running:
```
from multiprocessing import Process
import openvds
import numpy as np


def unlock_dataset(sd_path):
    "Disclosed implementation"
    return


def write_zero_pages(accessor):
    chunks_count = accessor.getChunkCount()
    for c in range(chunks_count):
        page = accessor.createPage(c)
        buf = np.array(page.getWritableBuffer(), copy=False)
        buf[:, :, :] = np.zeros(buf.shape, dtype=float)
        page.release()
    accessor.commit()


def create_vds(
    path,
    connection_string,
    shape=None,
    databrick_size=openvds.VolumeDataLayoutDescriptor.BrickSize.BrickSize_128,
    access_mode=openvds.IVolumeDataAccessManager.AccessMode.AccessMode_Create,
    components=openvds.VolumeDataChannelDescriptor.Components.Components_1,
    format=openvds.VolumeDataChannelDescriptor.Format.Format_R32,
    create_and_write_pages=True,
):
    layout_descriptor = openvds.VolumeDataLayoutDescriptor(
        brickSize=databrick_size,
        lodLevels=openvds.VolumeDataLayoutDescriptor.LODLevels.LODLevels_1,
        brickSize2DMultiplier=4,
        options=openvds.VolumeDataLayoutDescriptor.Options.Options_None,
        negativeMargin=0,
        positiveMargin=0,
        fullResolutionDimension=0,
    )
    metadata_container = openvds.MetadataContainer()
    axis_descriptors = []
    for i, size in enumerate(shape):
        axis_descriptors.append(
            openvds.VolumeDataAxisDescriptor(
                size,
                f"X{i}",
                "unitless",
                -1000.0,
                1000.0,
            )
        )
    channel_descriptors = [
        openvds.VolumeDataChannelDescriptor(
            format=format,
            components=components,
            name="Channel0",
            unit="unitless",
            valueRangeMin=0.0,
            valueRangeMax=1000.0,
        )
    ]
    vds = openvds.create(
        path,
        connection_string,
        layout_descriptor,
        axis_descriptors,
        channel_descriptors,
        metadata_container,
    )
    access_manager = openvds.getAccessManager(vds)
    accessor = access_manager.createVolumeDataPageAccessor(
        dimensionsND=openvds.DimensionsND.Dimensions_012,
        accessMode=access_mode,
        lod=0,
        channel=0,
        maxPages=8,
        chunkMetadataPageSize=1024,
    )
    chunks_count = accessor.getChunkCount()
    if create_and_write_pages:
        write_zero_pages(accessor)
    openvds.close(vds)
    return chunks_count


def writing_process(path, connection_string, chunks_range, number):
    vds = openvds.open(path, connection_string)
    manager = openvds.getAccessManager(vds)
    accessor = manager.createVolumeDataPageAccessor(
        dimensionsND=openvds.DimensionsND.Dimensions_012,
        lod=0,
        channel=0,
        maxPages=8,
        accessMode=openvds.IVolumeDataAccessManager.AccessMode.AccessMode_ReadWrite,
        chunkMetadataPageSize=1024,
    )
    for c in range(chunks_range[0], chunks_range[1]):
        page = accessor.createPage(c)
        buf = np.array(page.getWritableBuffer(), copy=False)
        buf[:, :, :] = np.reshape(np.array([float(number)] * buf.size), buf.shape)
        page.release()
    accessor.commit()
    # openvds.close(vds)


def get_data(path, connection_string):
    with openvds.open(path, connection_string) as vds_source:
        layout = openvds.getLayout(vds_source)
        axis_descriptors = [
            layout.getAxisDescriptor(dim) for dim in range(layout.getDimensionality())
        ]
        begin_slice = [0, 0, 0, 0, 0, 0]
        end_slice = (
            int(axis_descriptors[0].numSamples),
            int(axis_descriptors[1].numSamples),
            int(axis_descriptors[2].numSamples),
            1,
            1,
            1,
        )
        accessManager = openvds.VolumeDataAccessManager(vds_source)
        req = accessManager.requestVolumeSubset(
            begin_slice,  # start slice
            end_slice,  # end slice
            format=openvds.VolumeDataChannelDescriptor.Format.Format_R32,
            lod=0,
            replacementNoValue=0.0,
            channel=0,
        )
        if req.data is None:
            err_code, err_msg = accessManager.getCurrentDownloadError()
            print(err_code)
            print(err_msg)
            raise RuntimeError("requestVolumeSubset failed!")
        dims = (
            end_slice[2] - begin_slice[2],
            end_slice[1] - begin_slice[1],
            end_slice[0] - begin_slice[0],
        )
        return req.data.reshape(*dims)


if __name__ == "__main__":
    path = "sd://osdu/example/dataset-4"
    connection_string = "sd_authority_url=https://example.com/api/seismic-store/v3;sd_api_key=xxx;auth_token_url=https://example.com/oauth2/token;sdtoken=SDTOKEN;client_id=CLIENTID;refresh_token=REFRESH_TOKEN;scopes=openid email;LogLevel=100"
    numer_of_processes = 2
    processes = []
    chunks_ranges = []
    chunks_count = create_vds(path, connection_string, shape=(512, 512, 512))
    a = chunks_count // numer_of_processes
    r = chunks_count % numer_of_processes
    for i in range(numer_of_processes):
        if i == numer_of_processes - 1:
            chunks_ranges.append((i * a, (i + 1) * a + r))
        else:
            chunks_ranges.append((i * a, (i + 1) * a))
    print(chunks_ranges)
    unlock_dataset(path)
    for i, chunks_range in enumerate(chunks_ranges):
        p = Process(
            target=writing_process,
            args=(
                path,
                connection_string,
                chunks_range,
                i,
            ),
        )
        processes.append(p)
        p.start()
    for p in processes:
        p.join()
    print("finished")
    vds = openvds.open(path, connection_string)
    openvds.close(vds)
    data = get_data(path, connection_string)
```
[code.py](/uploads/14209e46c690704b192278aecac37b1c/code.py)
Here is the output:
```
-- sdapi 3.14.0 - Fri Feb 4 13:40:40 2022 -- Write Block Dimensions_012LOD0/ChunkMetadata/0 --- 0.750 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:41 2022 -- Write Block Dimensions_012LOD0/ChunkMetadata/0 --- 0.716 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:41 2022 -- Write Block LayerStatus --- 0.730 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:41 2022 -- Write Block LayerStatus --- 0.737 s
finished
-- sdapi 3.14.0 - Fri Feb 4 13:40:43 2022 -- Open Dataset sd://osdu/mergingtests4/dataset-4 in ReadOnly mode --- 1.722 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:45 2022 -- Get Block Size --- 1.320 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:45 2022 -- Read Block VolumeDataLayout --- 0.599 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:46 2022 -- Get Block Size --- 0.590 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:46 2022 -- Read Block LayerStatus --- 0.596 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:47 2022 -- Close Dataset sd://osdu/mergingtests4/dataset-4 --- 0.412 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:48 2022 -- Open Dataset sd://osdu/mergingtests4/dataset-4 in ReadOnly mode --- 1.272 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:49 2022 -- Get Block Size --- 0.592 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:49 2022 -- Read Block VolumeDataLayout --- 0.575 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:50 2022 -- Get Block Size --- 0.643 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:50 2022 -- Read Block LayerStatus --- 0.579 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:51 2022 -- Get Block Size --- 0.582 s
-- sdapi 3.14.0 - Fri Feb 4 13:40:52 2022 -- Read Block Dimensions_012LOD0/ChunkMetadata/0 --- 0.600 s
-1
Missing data for chunk: Dimensions_012LOD0/0
-- sdapi 3.14.0 - Fri Feb 4 13:40:52 2022 -- Close Dataset sd://osdu/mergingtests4/dataset-4 --- 0.355 s
Traceback (most recent call last):
File "simple.py", line 208, in <module>
data = get_data(path, connection_string)
File "simple.py", line 159, in get_data
raise RuntimeError("requestVolumeSubset failed!")
RuntimeError: requestVolumeSubset failed!
```
[logs_from_sd_path.log](/uploads/316c5d6bce889df37d29a44f45124f67/logs_from_sd_path.log)
I executed the code using the seismic store and S3, with the same result. It looks like data gets corrupted when multiple processes write to the same VDS source.
Can I get some guidance on this problem?
Is my implementation bad, or is there something wrong inside OpenVDS?
I am using the Python lib openvds 2.2.0 and sdapi 3.14.0.

## GCS: Object name ends with "/0"
[seismic-store-sdutil#7](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/7) · Yan Sushchynski (EPAM) · 2021-07-16

As I can see, the object's name ends with "0".
Does it somehow matter?
What if we change this "0" to `dataset.name`?
I stumbled across a problem when the OpenVDS Converter tried to download the file using the `<dataset.gcsurl> + "/" + <dataset.name>` path. If the file was uploaded to the bucket under the name "0", the parser couldn't find it.
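To illustrate the mismatch (all values hypothetical):

```python
# The converter requests <dataset.gcsurl> + "/" + <dataset.name>,
# but the object was uploaded under the literal name "0".
gcsurl = "gs://seismic-bucket/some/folder"  # dataset.gcsurl
name = "dataset-name"                       # dataset.name

requested = gcsurl + "/" + name  # what the OpenVDS Converter tries to download
uploaded = gcsurl + "/" + "0"    # how the object was actually named
assert requested != uploaded     # hence the file cannot be found
```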
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/blob/master/sdlib/api/providers/google/storage_service.py#L231
Example of how I resolved this issue:
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/commit/9d6ff0c0e2fd310bb26cb495d0653795b63c3807

## Parallel write support and example
[open-vds#33](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/33) · Morten Ofstad · 2024-03-07

For R3, the HPC group is interested in developing an example of how to do parallel writes (e.g. using MPI). This probably requires adding some support functions to make it easier to send metadata to a central coordinator and write the chunk-metadata pages separately.

*Assignee: Morten Ofstad*

## File backend
[open-vds#8](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/8) · Morten Ofstad · 2020-08-07

A file backend based on the huebds library for manipulating data-store files should be added. This needs a compatibility layer so that it can read files written with the commercial library (translating the serialized objects into JSON similar to the objects found in an object-store version of VDS). The handling of chunk metadata and metadata pages is done quite differently for the file format, so there is some refactoring work to be done to make this backend possible.

*Milestone: Version 2.0*

## CRS - Problem when the data is displayed in different UTM zones at the same project
[open-vds#233](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/233) · Juliana Fernandes · 2024-03-07
Hello,
The IesBrazil team is testing OpenVDS+ with CRS, and one of the steps was to QC the data using Headwave from Bluware.
The team noticed a problem, and we documented the tests, which I present below:
**Goal of the Tests:** Check whether OpenVDS+ correctly adds the CRS to the VDS file,<br>
**Methodology:** Convert SEGY to VDS using OpenVDS+/Headwave and QC the data using Headwave,<br>
**Data used:** Volve and Brazilian data (Volve doesn't present any problem). From Brazil we used 4 files from the Solimões Basin and 1 file from the Amazonas Basin, provided by ANP. The data can be found [HERE](https://reate.cprm.gov.br/anp/TERRESTRE); below you can directly download all the files used in this test (from Brazil, which is where we identified the problem):
* [0233_LESTE_URUCU.3D.MIG_FIN.1.sgy](https://reate.cprm.gov.br/arquivos/index.php/s/DKD0oj9FsZAU8tI/download?path=%2FSISMICA_3D%2F0233_LESTE_URUCU%2FTEMPO%2FSISMICA&files=0233_LESTE_URUCU.3D.MIG_FIN.1.sgy) - Solimões Basin, SAD69/UTM 20S, EPSG:29190
* [0237_AEROPORTO.3D.MIG_FIN.1.sgy](https://reate.cprm.gov.br/arquivos/index.php/s/DKD0oj9FsZAU8tI/download?path=%2FSISMICA_3D%2F0237_AEROPORTO%2FTEMPO%2FSISMICA&files=0237_AEROPORTO.3D.MIG_FIN.1.sgy) - Solimões Basin, SAD69/UTM 20S, EPSG:29190
* [0237_IGARAPE_MARTA.3D.MIG_FIN.2.sgy](https://reate.cprm.gov.br/arquivos/index.php/s/DKD0oj9FsZAU8tI/download?path=%2FSISMICA_3D%2F0237_IGARAPE_MARTA%2FTEMPO%2FSISMICA&files=0237_IGARAPE_MARTA.3D.MIG_FIN.2.sgy) - Solimões Basin, SAD69/UTM 20S, EPSG:29190
* [R0300_3D_CHIBATA_PSTM.3D.PSTM.1.sgy](https://reate.cprm.gov.br/arquivos/index.php/s/DKD0oj9FsZAU8tI/download?path=%2FSISMICA_3D%2FR0300_3D_CHIBATA%2FTEMPO%2FSISMICA&files=R0300_3D_CHIBATA_PSTM.3D.PSTM.1.sgy) - Solimões Basin, SIRGAS 2000/UTM 20S, EPSG:31980
* [R0300_2D_AM_URUCARA.3D.PSTM.1.sgy](https://reate.cprm.gov.br/arquivos/index.php/s/IPNA8z7hO1vHsxI/download?path=%2FSISMICA_3D%2FR0300_3D_AM_URUCARA%2FTEMPO%2FSISMICA&files=R0300_3D_AM_URUCARA.3D.PSTM.1.sgy) - Amazonas Basin, SAD69/UTM 21S, EPSG:29191<br>
**Shapefile:** Georeferenced polygons of exploratory blocks in geographic coordinates and datum SAD69, available [HERE](https://geomaps.anp.gov.br/geoanp/),<br>
**Problem:** When the project is in a different zone from the data (e.g. the project is located at SAD69 / UTM 20S and the data at SAD69 / UTM 21S), the file is wrongly spatially positioned (we used a VDS converted by Headwave and the original SEGY to compare; see also the reprojection aside after the scenario list below),<br>
**OpenVDS+ Version:** 3.3.0,<br>
**Comparative Scenario:**
* SEGY with Original CRS
* SEGY with WGS84 CRS
* VDS from HW with Original CRS
* VDS from HW with WGS84 CRS
* VDS from OpenVDS+ with WGS84 CRS (only option available)
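(Editorial aside on the misplacement discussed below: coordinates from a different UTM zone must be reprojected into the project CRS; reusing zone 21S eastings/northings as if they were zone 20S coordinates shifts data by roughly a full zone width. A minimal sketch, assuming pyproj is available:)

```python
from pyproj import Transformer

# Reproject a point from SAD69 / UTM zone 21S (EPSG:29191, the URUCARA file)
# into the project CRS SAD69 / UTM zone 20S (EPSG:29190).
transformer = Transformer.from_crs("EPSG:29191", "EPSG:29190", always_xy=True)
x21, y21 = 500000.0, 9700000.0  # hypothetical easting/northing in zone 21S
x20, y20 = transformer.transform(x21, y21)
print(x20, y20)  # without this step the data lands in the wrong place
```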
### First Scenario - SEGY with Original CRS
The project is under the coordinate reference system CRS SAD69/ UTM 20S (EPSG:29190) that includes most of the data of the test.<br>
In this test the team uploaded all the segy files, listed in the "Data used" topic, under the Original CRS (also informed with the data list) and displayed. The polygon in red at the superior right corner in the image is the shapefile for the block R0300_3D_AM_URUCARA.<br>
**Result: All the data are at the expected spatial position.**
![SEGY_with_Original_CRS](/uploads/285ab921116f92658bbf43007924bdbb/SEGY_with_Original_CRS.png)
### Second Scenario - SEGY with WGS84 CRS
The project is under the coordinate reference system CRS SAD69/ UTM 20S (EPSG:29190) that includes most of the data of the test.<br>
In this test the team uploaded all the segy files, listed in the "Data used" topic, under the CRS WGS84 and displayed. The polygon in red at the superior right corner in the image is the shapefile for the block R0300_3D_AM_URUCARA.<br>
**Result: All the data are at the expected spatial position.**
![SEGY_with_Original_CRS](/uploads/285ab921116f92658bbf43007924bdbb/SEGY_with_Original_CRS.png)
### Third Scenario - VDS from HW with Original CRS
The project is under the coordinate reference system CRS SAD69/ UTM 20S (EPSG:29190) that includes most of the data of the test.<br>
In this test the team converted to vds, using Headwave, all the segy files, listed in the "Data used" topic, under the Original CRS (also informed with the data list) and displayed. The polygon in red at the superior right corner in the image is the shapefile for the block R0300_3D_AM_URUCARA.<br>
**Result: All the data are at the expected spatial position.**
![SEGY_with_Original_CRS](/uploads/285ab921116f92658bbf43007924bdbb/SEGY_with_Original_CRS.png)
### Fourth Scenario - VDS from HW with WGS84 CRS
The project is under the coordinate reference system CRS SAD69/ UTM 20S (EPSG:29190) that includes most of the data of the test.<br>
In this test the team converted to vds, using Headwave, all the segy files, listed in the "Data used" topic, under the CRS WGS84 and displayed. The polygon in red at the superior right corner in the image is the shapefile for the block R0300_3D_AM_URUCARA.<br>
**Result: All the data are at the expected spatial position.**
![SEGY_with_Original_CRS](/uploads/285ab921116f92658bbf43007924bdbb/SEGY_with_Original_CRS.png)
### Fifth Scenario - VDS from OpenVDS+ with WGS84 CRS (only option available)
The project is under the coordinate reference system CRS SAD69/ UTM 20S (EPSG:29190) that includes most of the data of the test.<br>
In this test the team converted to vds, using OpenVDS+, all the segy files, listed in the "Data used" topic, under the CRS WGS84 and displayed. The polygon in red at the superior right corner in the image is the shapefile for the block R0300_3D_AM_URUCARA.<br>
**Result: The file R0300_3D_AM_URUCARA, which lies in a different zone (21S) from the project, is wrongly spatially positioned (it should be at the same position as the red polygon).**
![VDS_with_WGS84_Open](/uploads/fc6638d20fffea098cd2fde1ee44dcac/VDS_with_WGS84_Open.png)
We are available for any additional information needed.
Regards,
Juliana

## Adding CRS to a VDS generated by Openvds+
[open-vds#209](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/209) · Juliana Fernandes · 2023-11-16
Hello,
I was taking a look at the documentation in order to add a CRS to the VDS I'm generating with OpenVDS+.
In the documentation I saw the option `--crs-wkt <string>`. WKT is Well-Known Text and appears to describe a geographic coordinate system. Is there a way to add a UTM coordinate system to the data?
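(Aside: WKT is not limited to geographic systems; a projected UTM CRS also has a WKT representation, which should be what `--crs-wkt` expects. One way to obtain such a string, assuming pyproj is installed:)

```python
from pyproj import CRS

# WKT for a projected UTM CRS, e.g. SAD69 / UTM zone 20S (EPSG:29190)
wkt = CRS.from_epsg(29190).to_wkt()
print(wkt)  # pass this string to --crs-wkt "<string>"
```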
Regards,
Juliana

## fail to copy file from cloud server
[open-vds#205](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/205) · Nanting Liu · 2023-09-19

I am trying to read a VDS from MinIO (an open-source, S3-compatible object store).
As a first step, I tried to read a VDS file with _OpenVDS.open()_, but it failed with the exception: "Error on downloading VolumeDataLayout object: Http error response: 404 -> https://endpoint/bucket-name/test.vds/VolumeDataLayout: The specified key does not exist.".
Then I realized that _open()_ cannot read a VDS file directly, because the file was uploaded manually.
As a second step, I tried to use VDSCopy to copy the VDS file to the cloud environment, but it still fails, with the error "Error on uploading VolumeDataLayout object: unexpected AWS signing failure". Here is my command: `VDSCopy.exe E:\PPCoef.vds s3://endpoint/bucket-name/testVDS -d "Region=us-west-rack-2;SecretKey=xxx;SecretAccessKey=xxx"`. My SecretKey and SecretAccessKey are correct, but I don't know why it prints this...
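(Editorial aside, not from the original report: for an S3-compatible store the endpoint usually does not belong in the `s3://` URL; OpenVDS's AWS options include an endpoint override and an access-key-id parameter. The key names below are assumptions; verify them against the OpenVDS documentation for your version.)

```bash
# Hypothetical shape of the command for a MinIO/S3-compatible endpoint:
VDSCopy.exe E:\PPCoef.vds s3://bucket-name/testVDS -d "EndpointOverride=https://endpoint;Region=us-west-rack-2;AccessKeyId=xxx;SecretKey=xxx"
```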
Could you please help me figure out how to deal with this situation?

## Error uploading VDS (VDSCopy) into a OSDU subproject
[open-vds#189](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/189) · Juliana Fernandes · 2023-08-29
Hello,
I'm trying to use OpenVDS+ 3.2.6 to upload a VDS file into an OSDU subproject on AWS.
The upload goes fine until 84.21% and then progress stops. I left the process running for more than 12 hours without any update. The VDS file is around 5.7 GB. I can get the dataset info in Postman, but the path Postman indicates does not exist in the console.
Regards,
Juliana Fernandes
![VDSCopy](/uploads/4b8652d47846bc60428d4d13500de65a/VDSCopy.png)
![dataset_info_postman](/uploads/875e9ae95b1839bc6e5639054e6f5cdb/dataset_info_postman.png)
![aws_console_subproject](/uploads/69272e0e2863e94b1e765d2d130c36c4/aws_console_subproject.png)

## Compressed and Uncompressed size of a VDS
[open-vds#188](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/188) · Jørgen Lind · 2024-02-26

It would be nice to be able to get the compressed and uncompressed size of a VDS. It would also be very handy if this was exposed in VDSInfo.

## Updating information stored in a VDS (Are VDS datasets mutable?)
[open-vds#173](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/173) · Alexander Jaust · 2023-03-27

I was playing around with OpenVDS to figure out whether and to what extent VDS datasets are mutable. My big question is: what parts of a VDS dataset are mutable? If any parts are mutable, what is the correct way to change them?
Either way, it has different implications for certain workflows (for me). This concerns metadata as well as channel data stored within the VDS.
1. If a VDS dataset is always immutable, I can be sure that nobody will accidentally change/break a VDS dataset.
2. If a VDS dataset is mutable, I could update some fields if, e.g., `SEGYImport` does not accept certain names/units during ingestion, and/or update data within the VDS, e.g. add a fast slice or an additional channel, or update data within a channel, without recreating the VDS dataset from scratch.
I am potentially doing some "stupid" things here, but I am also trying to model a worst-case scenario: "what is the worst thing somebody can do wrong?"
I found an [older issue](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/28) which mentions the addition of LOD levels to a VDS, but it does not specify whether this would happen in-place or would create a new VDS dataset.
## Observations / Experiments
1. I am able to add additional channel data to an existing channel of a VDS dataset. The additional data seems to hide the initially available channel data.
For testing I created a small Python script: [create_and_change_vds_inplace_small.py](/uploads/72d056826ce7bf455f1b69ddf023fec2/create_and_change_vds_inplace_small.py). The script creates a small artificial VDS dataset with one channel and carries out the following steps (a condensed sketch of the overwrite step follows the list):
- Create the VDS dataset with all values in the channel set to `1`.
- Close the file handle so that the dataset is written to disk.
- Open the file and extract a slice, copy the slice data and close the file.
- Open the file and get an AccessManager, write the value `2*old_value` (`2` in this case) to the VDS dataset and close the file.
- Open the file and extract a slice, copy the slice data and close the file.
- Plot the slice data.
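A condensed sketch of the overwrite step, reusing the page-accessor API from the parallel-write example earlier in this document (the local path and empty connection string are placeholders):

```python
import numpy as np
import openvds

vds = openvds.open("artificial.vds", "")  # placeholder path / connection string
accessor = openvds.getAccessManager(vds).createVolumeDataPageAccessor(
    dimensionsND=openvds.DimensionsND.Dimensions_012,
    accessMode=openvds.IVolumeDataAccessManager.AccessMode.AccessMode_ReadWrite,
    lod=0, channel=0, maxPages=8, chunkMetadataPageSize=1024,
)
for c in range(accessor.getChunkCount()):
    page = accessor.createPage(c)
    buf = np.array(page.getWritableBuffer(), copy=False)
    buf[...] = 2.0  # equals 2 * old_value, since the channel was all 1s
    page.release()
accessor.commit()
openvds.close(vds)
```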
When I run the script, the data extracted from the VDS indeed changes: for the first slice I get constant `1` values, and constant `2` values the second time I extract a slice. I am not sure whether one can still access the "old" data. The file size increases, so it appears the old data is still stored in the VDS dataset.
Is this behavior intended? If so, how can I access the "old" data? Would it be possible to actually update the data without increasing the file size?
2. I tried to update the metadata. For that I wrote a [small C++ program](/uploads/013acdfdfe460bef0d5dbec4b517ee98/update_metadata.cpp), as C++ seemed to have more direct access to the `MetadataWriteAccess` object than Python. The code basically opens a specified VDS dataset and replaces the `ImportTimeStamp` values with some unrealistic time stamp.
The code executes, but gives a segmentation fault (`Segmentation fault: 11`). From debugging I concluded that the segmentation fault arises when the VDS dataset is being closed. Calling the `SetMetadataString` function seems to be fine.
Is this behavior intended? I guess the segmentation fault should never happen, but to me it is not clear whether it is an error in how I update the VDS dataset or a side effect of illegally writing the metadata.
## Platform
* Apple Arm M1 Max
* MacOS 13.1
* OpenVDS 3.0.3 (compiled from source)

## sdutil - problem uploading large file (token expiration)
[seismic-store-sdutil#22](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/issues/22) · Debasis Chatterjee · 2023-05-09
Recent communication with @sacha. cc @chad
Hi Debasis,
I did some digging, and this is likely to be an issue specific to each cloud provider.
I tested on our system, and I can upload data beyond the expiry of the initial auth token. I uploaded 23Gb in 1h33, with the first token only valid for 1h. Sdutil refreshed the token before patching the dataset. Note that this is with our own internal auth provider.
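(Editorial sketch of the expected behavior, not sdutil's actual code: refresh whenever the cached token is at or near expiry, rather than refusing to refresh an already-expired token.)

```python
import time

REFRESH_LEEWAY_SECONDS = 300  # refresh slightly before the actual expiry

def needs_refresh(expires_at: float) -> bool:
    """True when the cached token is expired or about to expire."""
    return time.time() >= expires_at - REFRESH_LEEWAY_SECONDS
```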
Here's an example of the sdutil code for Azure; I see that it will not try to refresh the token if it is already expired. That seems wrong. I'll inform the folks at Microsoft.
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/seismic-dms-suite/seismic-store-sdutil/-/blob/master/sdlib/auth/providers/azure/oauth2.py#L66

*Milestone: M18 - Release 0.21 · Assignee: Deepa Kumari*

## CMake build support
[open-zgy#24](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-zgy/-/issues/24) · Jon Jenssen · 2023-02-01

Any plans to support building the library using CMake? That would make it easier to integrate and use it in other open source projects, as well as simplify using the library in cross-platform products.

## Crash in Segy import while eject flash drive with segy file (or disconnect lan/wi-fi with source segy file)
[open-vds#154](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/154) · Artem В · 2023-03-28
It is extremely hard to reproduce, but the bug already exists in all releases of open-vds.
Let's discuss two different cases:
1. Move the source segy file to a flash drive
2. Import the segy file (via the SEGYImport utility)
3. Eject the flash drive while processing
4. Crash without any dump

Another case:
1. Move the source segy file to an external network drive (Google Drive, SMB or any other)
2. Import the segy file (via the SEGYImport utility)
3. Unplug the ethernet cable (disconnect from Wi-Fi/LAN, close your corporate VPN connection or any other) while processing
4. Crash without any dump
I found that the main reason for the crash is inside `TraceDataManager`.
The crash will be here:
```
const char* header = traceDataManager.getTraceData(trace, error);
if (error.code)
{
  outputPrinter.printWarning("IO", "Failed when reading data", fmt::format("{}", error.code), error.string);
  break;
}

const void* data = header + SEGY::TraceHeaderSize;

int primaryTest = fileInfo.Is2D() ? 0 : SEGY::ReadFieldFromHeader(header, fileInfo.m_primaryKey, fileInfo.m_headerEndianness),
    secondaryTest;
```
![image](/uploads/6b6a1d4b0140865730491d87a35c71da/image.png)
As you can see, the `TraceDataManager::getTraceData` implementation looks safe:
```
const char *basePtr = static_cast<const char *>(pageView->Pointer(error));
```
but... the nullptr check is lost when the offset address is returned.
I added the required check as an MVP:
![image](/uploads/a31740e7589fad71c1f61cea788424e4/image.png)
And yes - `basePtr` is nullptr.
Returning that pointer with any offset added will crash the application.
---
Solution: add this code snippet:
```
const char *
getTraceData(int64_t traceNumber, OpenVDS::Error & error) const
{
  ...
  // Additional check
  if (basePtr == nullptr)
  {
    error.code = 1;
    error.string = "Failed to acquire pageView pointer";
    return nullptr;
  }
  return basePtr + (traceNumber - pageTrace) * m_traceByteSize;
}
```
---
How to 100% reproduce:
0. Start SEGYImport in a debug session
1. Copy the segy file to the flash drive, start importing and wait until 10-20-N% is done
2. Set a breakpoint at the line `for (int64_t trace(firstTrace); trace <= segment->m_traceStop && error.code == 0; ++trace, ++tertiaryIndex)`
3. Eject the flash drive
4. Iterate over the loop until the crash :)

## Broken Win ErrorToString implementation for non-english system lang
[open-vds#152](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/152) · Artem В · 2023-04-04

Let's examine a minimal code sample:
```
OpenVDS::Error error;
OpenVDS::SetIoError(8, error);
```
We want to get the error description in the system language on Windows.
```
error {code=8 string="???????????? ???????? ?????? ??? ????????? ???? ???????.\r\n" } OpenVDS::VDSError
```
A *non-English system language* will return an unreadable description. For example, I use the Kazakh language.
Solution:
```
std::string ws2s(const std::wstring& s)
{
  int len;
  int slength = (int)s.length() + 1;
  len = WideCharToMultiByte(CP_UTF8, 0, s.c_str(), slength, 0, 0, 0, 0);
  std::string r(len, '\0');
  WideCharToMultiByte(CP_UTF8, 0, s.c_str(), slength, &r[0], len, 0, 0);
  return r;
}

std::string ErrorToString(DWORD error)
{
  wchar_t buf[256];
  FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                 NULL, error, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
                 buf, (sizeof(buf) / sizeof(wchar_t)), NULL);
  std::wstring ws(&buf[0]);
  return ws2s(ws);
}
```
Or any equivalent use of wchar in the FormatMessage WinAPI call.

## Create pkg-config configuration (.pc file) for consumption by other projects
[open-vds#150](https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/seismic/open-vds/-/issues/150) · Alexander Jaust · 2022-10-28

It would be nice if OpenVDS could generate a [pkg-config](https://www.freedesktop.org/wiki/Software/pkg-config/) configuration file (`.pc` file) for easy consumption of the OpenVDS project in other projects. This would especially help projects that are not using CMake as their build system. One example would be projects using Go and its build system: Go has explicit support for pkg-config in the [CGO package](https://pkg.go.dev/cmd/cgo#hdr-Using_cgo_with_the_go_command) to obtain the flags needed for compiling and linking applications against other packages. Would this be something one could add?
I made a quick test with a minimal template for the pkg-config configuration file, which worked in my setup; I have attached the patch. It creates an `openvds.pc` file from the `openvds.pc.in` template file stored in `CMake` via a `configure_file` step in the CMake build process. The configuration file is installed into `TARGET_DIR/lib/pkgconfig/`. If the patch is already acceptable or needs only small extra work, I would submit it as a merge request.
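For illustration, a minimal `openvds.pc.in` along the lines described (a sketch with assumed variable and library names, not the attached patch itself):

```
prefix=@CMAKE_INSTALL_PREFIX@
libdir=${prefix}/lib
includedir=${prefix}/include

Name: openvds
Description: Open source implementation of the VDS format for seismic data
Version: @PROJECT_VERSION@
Libs: -L${libdir} -lopenvds
Cflags: -I${includedir}
```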
[0001-Add-pkg-config-configuration-file-generation.patch](/uploads/612c2ede69944f24f2e8cb4ec83d8927/0001-Add-pkg-config-configuration-file-generation.patch)