Update home - object storage and rock and fluid model (authored by Daniel Perna)
There was little to no value in keeping this endpoint and these schemas, so in late January, early February 2025, they were removed from RAFS DDMS v.2.
In the Fall of 2025, preparations are underway to deploy the following endpoints in the RAFS DDMS:
1. Reservoir Simulation Rock Physics Model (with content schema)
2. Saturation Function Set (with content schema)
3. Fluid Model (WPC only; content schema is delayed)
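Once deployed, clients would address these endpoints with the record-URL pattern used elsewhere in the RAFS DDMS. The sketch below is a hypothetical illustration only: the base path and URL segments are assumptions derived from the endpoint names above, not confirmed routes, so consult the deployed OpenAPI spec for the real paths.

```python
# Hypothetical sketch: building record URLs for the three planned endpoints.
# BASE and the SEGMENTS values are assumptions, not confirmed RAFS DDMS routes.
BASE = "/api/rafs-ddms/v2"  # assumed base path

SEGMENTS = {
    "Reservoir Simulation Rock Physics Model": "rockphysicsmodels",
    "Saturation Function Set": "saturationfunctionsets",
    # For Fluid Model, only the record (WPC) endpoint is expected initially;
    # the content schema is delayed.
    "Fluid Model": "fluidmodels",
}


def record_url(kind: str, record_id: str) -> str:
    """Build a GET-record URL for one of the planned endpoint kinds."""
    return f"{BASE}/{SEGMENTS[kind]}/{record_id}"


# Usage (placeholder record id):
print(record_url("Fluid Model", "some-record-id"))
# -> /api/rafs-ddms/v2/fluidmodels/some-record-id
```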
#### More information on the business concept of "PVT Model"
To be clear, "PVT Model" is _not_ the same as "PVT Analysis". "PVT Model" is another name for, or a component of, "Fluid Model". In a business workflow (see OSDU forum example [here](https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/Petrophysics/docs/-/blob/master/Design%20Documents/Pau%20France%20F2F%20Planning%202024-Apr/OSDU%20RAFS%20EandP%20workflows%20and%20data%20flows-Main%20Business%20Process.drawio.png)), the major tasks follow this order:
### 13. Does the RAFS DDMS store its content (Parquet files) in the OSDU generic/core storage or the RAFS DDMS's local storage?
The object storage is all in the same cluster as the rest of the OSDU deployment. However, historically, there have been some variations:
From **2022 to the present**, the following has been true in both the CSP-oriented deployments and the CSP-agnostic Community Implementation (C.I.):
* Use Dataset schema? - Yes
* Use Dataset Service? - Yes
In **2024-2025 (M25)**, the following was **_also_** true for the **Azure** implementation:
* A configuration flag _bypassed_ both the Dataset schema and the Dataset Service to write directly to the CSP blob storage. This storage is in the same cluster as the rest of the OSDU deployment, but the RAFS DDMS set up its own "bucket". Credentials came from the OSDU Partition Service. (This is the same approach as the Wellbore DDMS, as of March 2025.) This variation was motivated partly by performance and partly by DDMS architectural principles (i.e., that DDMS content should only be accessible via the DDMS).
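The per-partition credential lookup described above can be sketched as follows. This is a minimal, hypothetical illustration: the Partition Service exposes partition properties at `/api/partition/v1/partitions/{id}`, but the `rafs-bucket-name` property key and the exact response shape shown here are assumptions, not the actual RAFS DDMS configuration.

```python
# Hypothetical sketch: how a DDMS might resolve its per-partition object-storage
# "bucket" via the OSDU Partition Service.

def partition_info_url(base_url: str, partition_id: str) -> str:
    """Build the Partition Service lookup URL for a data partition."""
    return f"{base_url.rstrip('/')}/api/partition/v1/partitions/{partition_id}"


def resolve_bucket(properties: dict) -> str:
    """Pick the DDMS-specific bucket name out of partition properties.

    'rafs-bucket-name' is a hypothetical property key; real deployments
    define their own keys per CSP. Properties are assumed to be wrapped
    as {"sensitive": bool, "value": ...}.
    """
    entry = properties.get("rafs-bucket-name", {})
    return entry.get("value", "")


# Usage with a mocked response body (no network call):
sample = {"rafs-bucket-name": {"sensitive": False, "value": "rafs-ddms-data"}}
print(partition_info_url("https://osdu.example.com", "opendes"))
# -> https://osdu.example.com/api/partition/v1/partitions/opendes
print(resolve_bucket(sample))
# -> rafs-ddms-data
```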
**Post-2025**, in order to uphold the principle that DDMS content should only be accessible via the DDMS (not through Dataset catalog schemas), it is possible that the RAFS DDMS will not use a Dataset record but will still use an appropriate OSDU core service (e.g., the Dataset Service or the File DDMS). Stay tuned.
Note that some previous experimental versions of the RAFS DDMS took a different approach: they used the "Dataset" group-type schemas to store the RAFS DDMS content via the OSDU core services. However, partly due to performance and partly due to DDMS architectural principles (i.e., that DDMS content should only be accessible via the DDMS), this approach was abandoned.
### 14. For OSDU M25 (March 2025), the RAFS DDMS has an endpoint called "dev-samplesanalysis". What is this, and how does it relate to the main "samplesanalysis" (v2) endpoint?