OSDU Software issues: https://community.opengroup.org/groups/osdu/-/issues

## AWS - R3M8 - Manifest-based Ingestion - Frame of Reference
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/104 (Debasis Chatterjee, 2021-12-13)

Please check the enclosed file for details. I do not see values being converted from "ft" to "m".

[AWS-Ingest-Master-SeismicAcquisitionSurvey-ST0202R08-DC-2Oct-steps.txt](/uploads/936d11727d51090b157b8053d02285fb/AWS-Ingest-Master-SeismicAcquisitionSurvey-ST0202R08-DC-2Oct-steps.txt)

Also see issue #99 by @yanbinzhang

## Update the partitionListCache without rebuild on partition create, delete
https://community.opengroup.org/osdu/platform/system/partition/-/issues/20 (Dmitrii Gerashchenko, 2022-11-24)

For now, the `partitionListCache` is purged on every `createPartition` or `deletePartition` invocation, which leads to an unnecessary request to storage on the next invocation of the `getAllPartitions` method.
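A purge-free alternative is to mutate the cached list directly on create and delete. The sketch below is illustrative Python with hypothetical class and method names (the actual Partition service is Java), not the real OSDU API:

```python
# Hypothetical sketch: keep the cached partition list coherent by mutating it
# on create/delete instead of purging it. Names are illustrative, not the
# actual OSDU Partition service API.
class PartitionCache:
    def __init__(self, storage):
        self.storage = storage
        self._partitions = None  # None means "never loaded yet"

    def get_all_partitions(self):
        if self._partitions is None:
            # Hit storage only when the cache has never been populated.
            self._partitions = set(self.storage.list_partitions())
        return sorted(self._partitions)

    def create_partition(self, name):
        self.storage.create_partition(name)
        if self._partitions is not None:
            self._partitions.add(name)      # update in place, no purge

    def delete_partition(self, name):
        self.storage.delete_partition(name)
        if self._partitions is not None:
            self._partitions.discard(name)  # update in place, no purge


class InMemoryStorage:
    """Stand-in backend that counts how often the full list is read."""
    def __init__(self):
        self.names, self.reads = set(), 0

    def list_partitions(self):
        self.reads += 1
        return list(self.names)

    def create_partition(self, name):
        self.names.add(name)

    def delete_partition(self, name):
        self.names.discard(name)
```

With this shape, creates and deletes that happen after the first `getAllPartitions` call never trigger a second storage read.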
To optimize this behavior, the `partitionListCache` could instead be updated on `createPartition` or `deletePartition` invocation without a request to storage.

## [GCP] Fixed get workflow status request by workflowName
https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-workflow/-/issues/129 (Riabokon Stanislav (EPAM), 2021-10-11)

The GET "/{workflow_name}/workflowRun/{runId}" request used 'dagName' instead of 'workflowName' under the hood during the Airflow call.
If 'dagName' and 'workflowName' differ, this leads to an error:
```
{
"code": 404,
"reason": "Failed to send request.",
"message": "Unable to send request to Airflow. 404 NOT FOUND_{\"error\":\"Dag id workflow_name not found in DagModel\"}_"
}
```
(M9 - Release 0.12; assignee: Riabokon Stanislav (EPAM))

## status on CSPs APIs
https://community.opengroup.org/osdu/platform/domain-data-mgmt-services/well-delivery/well-delivery/-/issues/3 (Yunhua Koglin, 2021-11-16)

@openai @ChrisZhang @Wibben Does anyone know whom I should contact about this service? Especially someone who could answer questions about the APIs that providers need to implement? Thanks.

## Admin / Create Stream - Source
https://community.opengroup.org/osdu/platform/data-flow/real-time/streams/stream-admin-service/-/issues/7 (Sunil Garg, 2021-10-28; assignee: Stephen Nimmo)

Create a logical stream for the data source: register all the information required for initialization of the resources, and set up the Kafka topics so that the source stream can be received by the parser and the parser can feed the messages on to the Kafka source topic.

## Provide hook to Catalog information (ex: SeismicTraceData work-product component)
https://community.opengroup.org/osdu/platform/data-flow/ingestion/segy-to-vds-conversion/-/issues/5 (Debasis Chatterjee, 2022-01-21)

I envisage a workflow such as: create the work-product, the work-product component (SeismicTraceData), the Dataset, and the FileCollection as needed for a SegY file.
Currently, there is no such tool provided by the OSDU Forum to create JSON files for this requirement. So, for now, let us assume that the Data Loader inspects the SegY file using some open-source tool and prepares the JSON files suitably.
Then, when launching the conversion (SegY to oVDS), it should benefit from the information found in the metadata of the various OSDU Data Platform records above.
Next, after a successful conversion, it should also provide a link to the SegY file and declare the VDS file as its artefact (similar to the oZgy flow).
For example, when the SegY-to-zgy conversion is launched, it uses the work-product and FileCollection as inputs:
```
"filecollection_segy_id": "odesprod:dataset--FileCollection.SEGY:dc-01oct-dataset:",
"work_product_id": "odesprod:work-product--WorkProduct:dc-01oct-wp:",
```
After completion, the work-product component is updated with the artefact information:
```
"Artefacts": [
{
"ResourceID": "odesprod:dataset--FileCollection.Slb.OpenZGY:481671525e464e9889290e250e2258be",
"ResourceKind": "osdu:wks:dataset--FileCollection.Slb.OpenZGY:1.0.0",
"RoleID": "odesprod:reference-data--ArtefactRole:ConvertedContent:"
}
],
```
cc - @mstormo, @ChrisZhang for information (M10 - Release 0.13)

## ADR: Common discovery within and across kinds
https://community.opengroup.org/osdu/platform/system/search-service/-/issues/69 (ashley kelham, 2023-07-13)

## Status
- [X] Proposed
- [X] Under review
- [X] Approved
- [ ] Retired
## Context & Scope
Today a single schema can define multiple properties for geospatial data. For example, the Wellbore schema defines both the _GeographicBottomHoleLocation_ and _ProjectedBottomHoleLocation_ properties.
The JSON key used for spatial data is also not consistent across schemas.
This causes issues for common consumption workflows, such as finding all entities that exist within a given area: a consumer does not know which property to query against for each type, so such a query becomes complicated.
Looking beyond spatial data, this is a common problem across data types. For instance, in the Wellbore schema the name is represented by the property 'FacilityName', but this key is not used for the name in other schemas.
We want to define a standard that allows indexing properties in a common way across types. This will provide:
- One or more common properties that are searchable across Kinds
- A priority list of schema properties from which these can be populated
- A way for these common properties to define relationships
## Trade-off Analysis
We could declare a single property on each schema to use as the common property. However, there are schemas where multiple properties could be used, and instances of entities where one specific property is not defined but another is. Therefore no single property will ever be correct.
We could re-use the property key defined in the schema for indexing. However, this causes problems for consumers, as they have to understand which property to use for each schema when discovering or running analytics across kinds. Defining a common property between schemas that consumers can rely on solves this concern.
We could define the standard directly in the schema only. This follows existing patterns with the indexing hints used [here](https://community.opengroup.org/search?search=x-osdu-indexing&group_id=218&project_id=91&scope=&search_code=true&snippets=false&repository_ref=master&nav_source=navbar). However, this solution does not let clients provide their own mappings for OSDU schemas.
It does, however, allow the standards to be maintained in the schema, keeping control with the schema authority. Therefore a solution that supports this whilst also giving clients the flexibility to provide their own mappings is preferable.
A separate ADR is proposed to allow for Schema extensions using the virtual property defined in this ADR.
## Decision
We are proposing a new optional attribute in schemas to define a common property mapping.
For OSDU schemas we propose to introduce a new property `x-osdu-virtual-properties`, with a dictionary currently containing only one key, `DefaultLocation`. It lists the paths to the source properties, and the order defines the priority: the first item in the list has the highest priority. If that property does not exist or is not populated, the next one takes precedence.
`x-osdu-virtual-properties` can be used to map any properties to a new property name that can be used for consumption. Schemas can then declare the same virtual property to allow easier cross schema consumption.
The decision is backed by OSDU Data Definitions as per [Core Concepts meeting July 6, 2021](https://gitlab.opengroup.org/osdu/subcommittees/data-def/projects/core-concepts/docs/-/blob/master/Meeting%20Minutes/2021/DataDefinitionsCoreConcepts_MeetingMinutes-2021-07-06.md#1-decisions).
The declared virtual property is never added to the record itself; however, it is used by consumption services like indexer/search to create an indexed entry for it, making the data discoverable by this property.
#### Example use case: Assigning virtual properties within a schema
```json
{
"x-osdu-virtual-properties":{
"data.VirtualProperties.DefaultLocation": {
"type": "object",
"priority": [
{ "path": "data.ProjectedBottomHoleLocation" },
{ "path": "data.GeographicBottomHoleLocation" },
{ "path": "data.SpatialLocation" }
]}
}
}
```
The above example is prepared for Wellbore, which comes with three potential shapes. The projected representation is preferred over the geographic coordinates. Last priority is the standard shape contributed by the `AbstractFacility`.
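A hedged sketch of how a consumer or indexer might resolve such a virtual property from a record, honouring the declared priority order. The function name and record shape are illustrative, not the actual Indexer implementation:

```python
def resolve_virtual_property(record, priority):
    """Return the first populated source property from the priority list."""
    for entry in priority:
        # Paths are declared as e.g. "data.SpatialLocation"; walk the record.
        value = record
        for key in entry["path"].split("."):
            if not isinstance(value, dict) or key not in value:
                value = None
                break
            value = value[key]
        if value is not None:
            return value
    return None  # no source property populated


priority = [
    {"path": "data.ProjectedBottomHoleLocation"},
    {"path": "data.GeographicBottomHoleLocation"},
    {"path": "data.SpatialLocation"},
]
# Only the lowest-priority source is populated here, so it is selected.
record = {"data": {"SpatialLocation": {"Wgs84Coordinates": {"type": "Point"}}}}
```

The key design point is that the priority list lives in the schema, so every consumer resolves the same virtual property the same way without record-by-record knowledge.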
For now, we should restrict this so that every key created through it must be prefixed with `data.VirtualProperties.`.
The `DefaultLocation` key name does not clash with any existing entity type property. It becomes relevant in generic search queries across different types including spatial conditions, for example:
```json
{
"kind": "*:*:*:*",
"spatialFilter": {
"field": "data.VirtualProperties.DefaultLocation",
"byGeoPolygon": {
"points": [
{"longitude":-90.65, "latitude":28.56},
{"longitude":-90.65, "latitude":35.56},
{"longitude":-85.65, "latitude":35.56},
{"longitude":-85.65, "latitude":28.56},
{"longitude":-90.65, "latitude":28.56}
]
}
}
}
```
There is also an _optional_ `isType` key that can be applied to the objects in the `priority` list. This restricts the selection based on the type of data the property points to, which can differ per Record instance.
For example, datasets and artifacts referenced by a record are generic schemas, so the type depends on the record instance. In the example below, the `data.dataset[].filepath` property is only mapped if it points to a GeoJson type; otherwise it is checked against the Raster file type. The `isType` value is not restricted.
```json
{
"x-osdu-virtual-properties":{
"data.VirtualProperties.MyLocation": {
"type": "object",
"priority": [
{
"path": "data.dataset[].filepath",
"isType": "GeoJson"
},
{
"path": "data.dataset[].filepath",
"isType": "Raster"
}
]}
}
}
```
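Extending the earlier resolution idea, an `isType`-aware resolver would skip a matching path whose value is not of the declared type. This is a hypothetical sketch with a caller-supplied type detector; array paths like `data.dataset[].filepath` are simplified to a scalar path here:

```python
def resolve_with_istype(record, priority, detect_type):
    """Pick the first populated source whose detected type matches isType."""
    for entry in priority:
        value = record
        for key in entry["path"].split("."):
            value = value.get(key) if isinstance(value, dict) else None
            if value is None:
                break
        if value is None:
            continue
        # An entry without isType accepts any type.
        if "isType" in entry and detect_type(value) != entry["isType"]:
            continue
        return value
    return None


priority = [
    {"path": "data.filepath", "isType": "GeoJson"},
    {"path": "data.filepath", "isType": "Raster"},
]

# Toy detector: classify by file extension. A real indexer would use
# whatever type information the record or dataset metadata carries.
detect = lambda v: "GeoJson" if str(v).endswith(".geojson") else "Raster"
```

The same path can appear several times with different `isType` values, which is exactly the pattern in the example above.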
The `x-osdu-virtual-properties` section also supports an _optional_ `x-osdu-relationship` block to describe a relationship this virtual property may have. See the example below.
The OSDU Data Definitions team ensures that canonical, well-known schemas contain a populated `x-osdu-virtual-properties`.
The report will then look like:
|Kind|Default Priority|Comment|
|----|----|----|
|→ [osdu:wks:master-data--SeismicProcessingProject:1.0.0](https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/blob/237-ambiguous-locations/E-R/master-data/SeismicProcessingProject.1.0.0.md) | data.SpatialLocation | Undefined x-osdu-virtual-properties definition; Unique Location |
|→ [osdu:wks:master-data--Well:1.0.0](https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/blob/237-ambiguous-locations/E-R/master-data/Well.1.0.0.md) | data.SpatialLocation | Undefined x-osdu-virtual-properties definition; Unique Location |
|→ [osdu:wks:master-data--Wellbore:1.0.0](https://gitlab.opengroup.org/osdu/subcommittees/data-def/work-products/schema/-/blob/237-ambiguous-locations/E-R/master-data/Wellbore.1.0.0.md) | 1: data.ProjectedBottomHoleLocation<br>2: data.GeographicBottomHoleLocation<br>3: data.SpatialLocation | Schema Controlled Order|
The first two kinds are reported as undefined, the third reports a proper order definition via the schema.
Keeping the `x-osdu-virtual-properties` mapping within the schema allows the Data Definitions team in OSDU to maintain control over how properties are mapped and in what order. However, we still need to allow flexibility for specific client consumption workflows; this will be provided by Schema extensions.
#### Example use case: Describing relationships with virtual properties
It is also possible to tag virtual properties as relationships to achieve specific processing/indexing of relationships. The tagging is performed exactly the same as on standard OSDU schemas using the `x-osdu-` custom tags.
Here is a simple relationship 'replication' example: the property `PetrelProjectID` refers to a record id of a record kind `slb:petrel:master-data--PetrelProject:*.*.*`. As a result, the property, previously not visible to the indexer, becomes declared and visible.
```json
{
"kind": "osdu:wks:master-data--Well:1.0.0",
"x-osdu-extensions": {
"authority": "SLB",
"x-osdu-virtual-properties": {
"data.ExtensionProperties.PetrelProjectID": {
"type": "object",
"priority": [
{
"path": "data.ExtensionProperties.PetrelProjectID",
"isType": "string",
"type": "string",
"x-osdu-relationship": [
{
"GroupType": "master-data",
"EntityType": "PetrelProject"
}
]
}
]
}
}
}
}
```
Unconstrained or open relationships to unspecified types are declared as `"x-osdu-relationship": []`.
The next example demonstrates a new relationship by means of a virtual property with prioritized sources:
```json
{
"kind": "osdu:wks:master-data--Well:1.0.0",
"x-osdu-extensions": {
"authority": "SLB",
"x-osdu-virtual-properties": {
"data.VirtualProperties.ApplicationProjectID": {
"type": "object",
"priority": [
{
"path": "data.ExtensionProperties.TechlogExtensions.TechlogProjectID",
"isType": "string",
"type": "string",
"x-osdu-relationship": [
{
"EntityType": "TechlogProject"
}
]
},
{
"path": "data.ExtensionProperties.PetrelProjectID",
"isType": "string",
"type": "string",
"x-osdu-relationship": [
{
"GroupType": "master-data",
"EntityType": "PetrelProject"
}
]
}
]
}
}
}
}
```
It demonstrates the 'virtual merge' of a relationship for a given record. The `data.VirtualProperties.ApplicationProjectID` property is expected to carry a relationship to either a Petrel project (kind `*:*:master-data--PetrelProject:*`) or a Techlog project (kind `*:*:*TechlogProject:*`). Should the record contain both property values as defined in the two `path` values, the first one, the `TechlogProjectID`, is taken.
## Consequences
- All existing OSDU schemas that define spatial data should be updated with the new `DefaultLocation` virtual property
- The Data Definitions team validates that all spatial entity types are properly tagged with `"x-osdu-virtual-properties"`
- The Indexer needs to support `"x-osdu-virtual-properties"`
- The Indexer needs to re-index based on all schema creation/change notifications

(M10 - Release 0.13)

## GSM Integration
https://community.opengroup.org/osdu/platform/data-flow/ingestion/csv-parser/csv-parser/-/issues/56 (Fernando Nahu Cantera Rubio, 2021-10-06; M9 - Release 0.12)

CSV Parser integration with GSM: we can now get the details of failures for records as well as jobs for all the CSV ingestion runs, with proper error messages and error codes.

## AWS - R3M8 - Manifest-based Ingestion - Master data - Well
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/105 (Debasis Chatterjee, 2021-12-14)

@sje7253bp reported an issue with this load manifest/JSON file.
[Steve_AWS_Master_Data_body_example-problem.txt](/uploads/077a37c05219a6b3d4e53235dfc7028b/Steve_AWS_Master_Data_body_example-problem.txt)
When he runs this, it does not show any error in any of the (Airflow) log files.
Yet it does not perform the desired task of creating the Well master record.
I took his load manifest and could recreate the same issue on AWS.
I also took his load manifest and used it on GCP (after adjusting some of the environment variables). On GCP, it does the job properly and creates a Well record.
What baffles us is: why do we not see any error in any of the logs on AWS?
Can you please check this and advise?
Thank you
cc - @Wibben (M10 - Release 0.13)

## Remove usage of deprecated header
https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/issues/16 (Rostislav Vatolin, 2021-11-12)

The Legal service should not use the deprecated header: account-id.

## IBM R3M8 - Failure to ingest Wellbore data from WITSML source
https://community.opengroup.org/osdu/platform/data-flow/ingestion/energistics/witsml-parser/-/issues/43 (Debasis Chatterjee, 2022-08-23)

Reported by @epeysson.
From the Airflow log, we see the following as the reason for the failure:
```
"Missing referential id: {
'opendes:reference-data--VerticalMeasurementType:TotalDepth:',
'opendes:reference-data--VerticalMeasurementPath:MeasuredDepth:',
'opendes:reference-data--VerticalMeasurementPath:TrueVerticalDepth:',
```
Input source [Etienne-Wellbore.xml](/uploads/dcc04d22a77be52599a40ef3ae6f487d/Etienne-Wellbore.xml)
After some investigation, it was determined that the OSDU reference-value convention has changed to use codes instead:
- reference-data--VerticalMeasurementType:TD instead of TotalDepth
- reference-data--VerticalMeasurementPath:TVD instead of TrueVerticalDepth
- reference-data--VerticalMeasurementPath:MD instead of MeasuredDepth

One possible solution is to change the code to handle such changes.
Another alternative is to hold the field mapping in a configuration file, so that this kind of change is easy to handle in the future.
For other data types (Well, Marker, Log, Trajectory, Tubular), there will potentially be other mismatches too.
Copying to @janas712 for her input on the subject
Also adding @gehrmann for his awareness
cc - @ChrisZhang, @chad, @Keith_Wall for information (M10 - Release 0.13; assignee: etienne peysson)

## Use HPA for kubernetes service
https://community.opengroup.org/osdu/platform/system/reference/unit-service/-/issues/28 (Rostislav Vatolin, 2021-10-12)

Implement the practices described here: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

## Use HPA for kubernetes service
https://community.opengroup.org/osdu/platform/system/reference/crs-catalog-service/-/issues/15 (Rostislav Vatolin, 2021-10-12)

Implement the practices described here: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

## Use HPA for kubernetes service
https://community.opengroup.org/osdu/platform/system/partition/-/issues/21 (Rostislav Vatolin, 2021-11-05)

Implement the practices described here: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/

## Legal service overloading BlobStore of storage container
https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/issues/17 (Rostislav Vatolin, 2021-11-05)

The endpoints `/legaltags:validate` and `/jobs/updateLegalTagStatus` request a fresh version of Legal_COO.json every time they validate a legal tag.
For example, if a data partition contains 10000 legal tags, the job will request Legal_COO.json 10000 times.
Excessive reads happen here:
1. https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/blob/master/legal-core/src/main/java/org/opengroup/osdu/legal/tags/validation/rules/DefaultRule.java#L44
2. https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/blob/master/legal-core/src/main/java/org/opengroup/osdu/legal/tags/validation/OtherRelevantDataCountriesValidator.java#L29
As an option, the endpoints could request the Legal_COO.json file once per request. (Assignee: Rostislav Vatolin)

## TypeError: 'type' object is not subscriptable
https://community.opengroup.org/osdu/ui/data-loading/wellbore-ddms-data-loader/-/issues/21 (Chad Leong, 2021-11-09)

This error is observed when Python 3.8 is used instead of 3.9.
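The traceback below ends at the annotation `dict[str, any]`. Subscripting the builtin `dict` in an annotation that is evaluated at runtime is only accepted from Python 3.9 (PEP 585). A sketch of a 3.8-compatible signature, using `typing.Dict` (and `Any`, since the builtin `any` is a function, not a type):

```python
from typing import Any, Dict

class JsonLoader:
    # Works on Python 3.8: typing.Dict is subscriptable on all supported
    # versions, and Any is the intended annotation rather than builtin any().
    def load(self, path: str) -> Dict[str, Any]:
        ...
```

Alternatively, adding `from __future__ import annotations` at the top of `file_loader.py` (PEP 563, available since 3.7) defers annotation evaluation, which also avoids the runtime `TypeError` on 3.8, though `Any` is still the semantically correct spelling.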
```
(env) C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\src>pipenv run python -m lascli fileload -h
Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning.
'type' object is not subscriptable
Traceback (most recent call last):
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\env\lib\site-packages\knack\cli.py", line 231, in invoke
cmd_result = self.invocation.execute(args)
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\env\lib\site-packages\knack\invocation.py", line 153, in execute
parsed_args = self.parser.parse_args(args)
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\env\lib\site-packages\knack\parser.py", line 260, in parse_args
return super().parse_args(args)
File "C:\Python38\lib\argparse.py", line 1768, in parse_args
args, argv = self.parse_known_args(args, namespace)
File "C:\Python38\lib\argparse.py", line 1800, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "C:\Python38\lib\argparse.py", line 1988, in _parse_known_args
positionals_end_index = consume_positionals(start_index)
File "C:\Python38\lib\argparse.py", line 1965, in consume_positionals
take_action(action, args)
File "C:\Python38\lib\argparse.py", line 1874, in take_action
action(self, namespace, argument_values, option_string)
File "C:\Python38\lib\argparse.py", line 1159, in __call__
subnamespace, arg_strings = parser.parse_known_args(arg_strings, None)
File "C:\Python38\lib\argparse.py", line 1800, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "C:\Python38\lib\argparse.py", line 2006, in _parse_known_args
start_index = consume_optional(start_index)
File "C:\Python38\lib\argparse.py", line 1946, in consume_optional
take_action(action, args, option_string)
File "C:\Python38\lib\argparse.py", line 1874, in take_action
action(self, namespace, argument_values, option_string)
File "C:\Python38\lib\argparse.py", line 1044, in __call__
parser.print_help()
File "C:\Python38\lib\argparse.py", line 2494, in print_help
self._print_message(self.format_help(), file)
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\env\lib\site-packages\knack\parser.py", line 247, in format_help
self.cli_help.show_help(self.prog.split()[0],
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\env\lib\site-packages\knack\help.py", line 728, in show_help
else self.group_help_cls(self, delimiters, parser)
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\env\lib\site-packages\knack\help.py", line 253, in __init__
child.load(options)
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\env\lib\site-packages\knack\help.py", line 198, in load
description = getattr(options, 'description', None)
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\env\lib\site-packages\knack\parser.py", line 240, in __getattribute__
self.description = self._description() \
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\env\lib\site-packages\knack\commands.py", line 261, in description_loader
return extract_full_summary_from_signature(CLICommandsLoader._get_op_handler(operation))
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\env\lib\site-packages\knack\commands.py", line 274, in _get_op_handler
op = import_module(mod_to_import)
File "C:\Python38\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\src\lasloader\commands\file_load.py", line 5, in <module>
from lasloader.file_loader import LasParser, LocalFileLoader
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\src\lasloader\file_loader.py", line 49, in <module>
class JsonLoader:
File "C:\Users\cleong4\OneDrive - Schlumberger\OSDU\wellbore-ddms-las-loader\src\lasloader\file_loader.py", line 57, in JsonLoader
def load(self, path: str) -> dict[str, any]:
TypeError: 'type' object is not subscriptable
```
(Assignees: Greg Harris, Niall McDaid)

## Data loading - Allow user to provide welllog ID and writing bulk data to that welllog id
https://community.opengroup.org/osdu/ui/data-loading/wellbore-ddms-data-loader/-/issues/22 (Chad Leong, 2021-10-18)

### Description
Currently we allow two scenarios for users to create welllog bulk data.
1. The user wants to create a new wellbore record and a new welllog record, and load the bulk data.
2. The user has already ingested the wellbore and welllog records. The user provides the welllog record ID and only wants to write bulk data directly to that already-ingested welllog record.

This issue is about scenario 2.
![image](/uploads/524c595851d392e566cec01d71e6a4ae/image.png)
### Steps
1. The user provides a welllog ID.
2. Retrieve the welllog record and the wellbore record, and check for available curves.
3. Retrieve the well name / UWI from the wellbore record.
4. Parse the LAS file for validation: check whether the well name or UWI exists, and check whether the curves exist in the LAS file.
5. If all checks pass, write the bulk data with the provided welllog ID.

(Assignee: Niall McDaid)

## Implement DeploymentAdminService
https://community.opengroup.org/osdu/platform/data-flow/real-time/streams/stream-admin-service/-/issues/11 (Dmitry Kniazev, 2021-10-29)

Implement a Java Spring Boot service `org.opengroup.osdu.streaming.service.DeploymentAdminService.java`, used by StreamingAdminService to perform k8s deployment operations using the [kubernetes java client](https://github.com/kubernetes-client/java/wiki/3.-Code-Examples):
- [x] createDeployment should create the new deployment from the YAML/JSON deployment definition provided as an argument, set replicas to 0, and return
- [x] deleteDeployment should delete the deployment identified by the deployment id provided as an argument
- [x] startDeployment should set the replicas of the deployment to 1 (or more), ensure the pods have started, and return
- [x] stopDeployment should set the replicas of the deployment to 0, ensure the pods have stopped, and return
- [x] tests for every method above

(Assignee: Stephen Nimmo)

## Entitlements service requires handling of IOException to deal with "Broken pipe" issue
https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/issues/87 (Rostislav Vatolin, 2021-10-19)

Same fix as this one: https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/272

## Legal service requires handling of IOException to deal with "Broken pipe" issue
https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/issues/18 (Rostislav Vatolin, 2021-10-11)

Same fix as this one: https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/272