**Storage issues**
https://community.opengroup.org/osdu/platform/system/storage/-/issues

**Issue #20: GCP - Bulk Update for ACLs on Records** (ethiraj krishnamanaidu, 2020-08-06)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/20

Bulk Update for ACLs on Records
Please review the details: [Requirement](https://community.opengroup.org/osdu/platform/system/storage/-/issues/10)
Milestone: M1 - Release 0.1. Assignee: ethiraj krishnamanaidu.

**Issue #155: GCP failing with core-common v0.18.0-rc4** (Mina Otgonbold, 2023-01-02)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/155

osdu-gcp-anthos-test integration tests are consistently failing when the core-common version is upgraded to v0.18.0-rc4.
Currently, GCP consumes version 0.17.0 of core-common, which contains vulnerable libraries. The storage MR "Update Storage to be Collaboration Context Aware" needs to consume a new version of core-common that exposes collaboration context; this is a blocker for merging that storage MR. As a quick fix for the GCP test failure, we created a core-common build, based on version 0.17.0, that adds collaboration context. The pipeline passes with this version, which indicates that the GCP test failure comes from the core-common version upgrade from 0.17.0 to 0.18.0-rc4.
**References**
* [Associated storage MR](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/546)
* [Core-common MR](https://community.opengroup.org/osdu/platform/system/lib/core/os-core-common/-/merge_requests/183)
* [ADR for the storage and core-common MRs](https://community.opengroup.org/osdu/platform/system/storage/-/issues/149)

Assignee: Yauhen Shaliou [EPAM/GCP].

**Issue #212: GeoJson validation** (Adam Cheng, 2024-03-15)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/212

This is a linked issue between the Storage API and Search API.
When I ingest a new object with invalid GeoJSON (e.g. a polygon that is not closed), it passes the Storage API, which mainly checks types, but it silently fails indexing and never shows up on the Search API.
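The missing check is cheap to do client-side before ingestion. A minimal sketch of the closure rule for GeoJSON polygon rings (a hypothetical helper, not part of the Storage API):

```java
import java.util.Arrays;
import java.util.List;

public class PolygonRingCheck {
    // Per RFC 7946 (section 3.1.6), a valid linear ring has four or more
    // positions and repeats its first position as its last position.
    static boolean isClosedRing(List<double[]> ring) {
        return ring.size() >= 4
            && Arrays.equals(ring.get(0), ring.get(ring.size() - 1));
    }

    public static void main(String[] args) {
        List<double[]> closed = List.of(
            new double[]{0, 0}, new double[]{1, 0},
            new double[]{1, 1}, new double[]{0, 0});
        List<double[]> open = List.of(
            new double[]{0, 0}, new double[]{1, 0},
            new double[]{1, 1}, new double[]{0, 1}); // last != first
        System.out.println(isClosedRing(closed)); // true
        System.out.println(isClosedRing(open));   // false
    }
}
```

A type-only validation, as the issue describes, accepts the open ring above even though no compliant indexer can consume it.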
A related issue: currently it takes up to 30 seconds before a newly ingested object shows up on the Search API. That makes things a bit challenging for a near-real-time application.
Possible solution:
An additional query param on the PUT `/records` endpoint. If the param is set, the operation will only succeed once indexing has finished.
It would be ideal for ingestion and indexing/discovery operations to be atomic.

**Issue #54: Get record errors out if the record id contains % character** (Krishna Nikhil Vedurumudi, 2021-03-17)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/54

**Steps to reproduce**
- Create a record with id having `%` character. Sample `"id": "opendes:wellbore:foobar%20baz"`
- Invoke GET `/record/opendes%3Awellbore%3Afoobar%2520baz` - The id has been URL encoded due to the presence of special characters.
**Expected result**
- Record Content with Status Code - 200
**Actual result**
- Forbidden - 403
**Reasoning**
Spring's default HttpFirewall does not allow certain special characters in request URLs, to avoid security-related exploits.
Those characters include the semicolon (;), the encoded backslash, and the percent symbol.
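If record ids may legally contain %, clients must double-encode it: the literal % in the id becomes %25 in the URL path, and on the server side Spring Security's StrictHttpFirewall would additionally need its setAllowUrlEncodedPercent(true) toggle. A stdlib-only sketch of the round trip, using the id from the repro steps:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class RecordIdEncoding {
    public static void main(String[] args) {
        // The record id itself contains a literal percent sequence ("%20").
        String recordId = "opendes:wellbore:foobar%20baz";

        // In the request path from the repro steps, ':' is encoded as %3A
        // and the literal '%' is double-encoded as %25, giving %2520.
        String pathSegment = "opendes%3Awellbore%3Afoobar%2520baz";

        // One round of percent-decoding recovers the original id exactly.
        String decoded = URLDecoder.decode(pathSegment, StandardCharsets.UTF_8);
        System.out.println(decoded.equals(recordId)); // true
    }
}
```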
**Ask**
Identify if % is a valid character in record ids and add support for it
(or)
Disallow record ids having special characters such as %, ;, etc.

**Issue #181: GET: /records/{recordID}/{version} - ERROR 500** (Siarhei Khaletski (EPAM), 2024-01-01)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/181

**Context**
GET: /records/{recordID}/{version} fails with error 500 if an invalid version is provided (see the attachment)
We noticed an odd behavior of the service:
List of existing versions of the following record: `opendes:work-product-component--SamplesAnalysis:e9f02f48f43149a8b69606ff7597f391`
![image](/uploads/3d75fd80a57f5558c7d0eb00a4d795eb/image.png)
If the nonexistent version `1` is requested - error 500
![image](/uploads/d3dc228f70263bd24ff7d09975baa63c/image.png)
Meanwhile, if the nonexistent version `1234` is requested - status 404
![image](/uploads/e82da89c3673b643aaa26845f0eb0c81/image.png)
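The expected behavior can be sketched as a version lookup that maps any absent version to 404, regardless of its numeric value (a hypothetical illustration; the version numbers below are invented):

```java
import java.util.List;

public class VersionLookup {
    // Any version absent from the record's version list yields 404, whether
    // the requested value is small ("1") or large ("1234") -- never a 500.
    static int statusFor(List<Long> existingVersions, String requested) {
        long version;
        try {
            version = Long.parseLong(requested);
        } catch (NumberFormatException e) {
            return 400; // malformed version string
        }
        return existingVersions.contains(version) ? 200 : 404;
    }

    public static void main(String[] args) {
        List<Long> versions = List.of(1675899004484231L, 1675899083822184L);
        System.out.println(statusFor(versions, "1"));    // 404, not 500
        System.out.println(statusFor(versions, "1234")); // 404
    }
}
```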
**Azure GLab Logs**
![image](/uploads/8d54b1addcbc1835b4ea3c90135072b6/image.png)
**Expected Behavior**
404 - status code

Milestone: M22 - Release 0.25. Assignees: Siarhei Khaletski (EPAM), Chad Leong.

**Issue #193: How to troubleshoot? Field missed from Search response although we see it from Storage response.** (Debasis Chatterjee, 2023-12-16)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/193

Companion issue in the Preship site is here:
https://community.opengroup.org/osdu/platform/pre-shipping/-/issues/649
This is not a case of anything linked to conversion (using Meta block) or typo error in the field name.
The real question is - how to troubleshoot this kind of problem?
cc @nthakur and @gehrmann

**Issue #21: IBM - Bulk Update for ACLs on Records** (ethiraj krishnamanaidu, 2020-09-09)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/21

Bulk Update for ACLs on Records
Please review the details: [Requirement](https://community.opengroup.org/osdu/platform/system/storage/-/issues/10)
Milestone: M1 - Release 0.1. Assignee: Wladmir Frazao. Due: 2020-09-18.

**Issue #116: In Azure environment the end point to query the data with limit is not working** (Kamlesh Todai, 2022-08-26)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/116

In the Azure environment, the end point to query the data with a limit is not working.
e.g. https://osdu-ship.msft-osdu-test.org/api/storage/v2/query/kinds?limit=10
Response: 400 Bad Request

```
{
  "code": 400,
  "reason": "Limit not supported",
  "message": "The limit is invalid"
}
```
@debasisc @sehuboy @kumar_vaibav @ChrisZhang

Milestone: M10 Patch - Release 0.13 patch. Assignee: Krishna Nikhil Vedurumudi.

**Issue #96: Incomplete Impersonalization: End user access depends on his rights to physical storage** (Rostislav Dublin (EPAM), 2021-12-28)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/96

OSDU offers a robust RBAC-based Authentication and Authorization model provided by the Entitlements service.
Entitlements allows assigning the user a limited set of roles, sufficient for a precise definition of their powers within the system. An ACL mechanism at the record level ensures that these permissions are accurately projected onto the data. And with the Policy Service connection activated and policies correctly configured, the verification of rights becomes even more sophisticated.
It is obvious that any duplication of this mechanism is as redundant as it is harmful.
However, such duplication currently exists in the form of an additional check of the end-user account's rights to the buckets and objects of the Blob storage. This is primarily relevant for the GCP CSP, where such behavior is found, at least, in the code of the Storage service. The flaws are the following:
1. Impersonalization (using a service account instead of an end user account) is not always used when accessing buckets and objects of the Blob storage and depends on the "isEnableImpersonalization" boolean, which is "false" by default. This means that impersonalization is not performed and GCS requests are made on behalf of the end user account.
The logic behind preserving and maintaining this functionality is not entirely clear. After all, this contradicts the very principle of microservice architecture, implying the encapsulation of procedures for working with data within a microservice. The end user account has nothing to do with the physical storage access capabilities of the microservice itself, which must be unconditionally provided to the microservice's service account .
2. The problem persists even when the "isEnableImpersonalization" boolean is set to "true" (although this boolean is not raised to "true" anywhere in our test and production environments). In this case, requests to GCS are made on behalf of a service account (datafier or another dedicated), which removes the problem of service access to data...
But! Integration tests related to checking user authorization for data manipulation are starting to fail. This happens because the hasAccess(...) method is incorrectly implemented in the code of the GoogleCloudStorage repository, and it delegates some of the checks to the GCS level, but does not perform the necessary checks at the ACL level.
As a result, in the tests that verify that the test "no-data-access-tester" user lacks access to manipulate data, the user is falsely authorized.
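A minimal sketch of the ACL-level check that hasAccess(...) currently delegates away (a hypothetical helper with made-up group names; the real implementation would go through Entitlements and DataAuthorizationService):

```java
import java.util.List;
import java.util.Set;

public class AclCheck {
    // Access is granted only if the caller's Entitlements groups intersect
    // the record's ACL viewers or owners -- independent of any rights the
    // service account has on the physical storage.
    static boolean hasViewerAccess(Set<String> userGroups,
                                   List<String> viewers, List<String> owners) {
        return viewers.stream().anyMatch(userGroups::contains)
            || owners.stream().anyMatch(userGroups::contains);
    }

    public static void main(String[] args) {
        List<String> viewers = List.of("data.default.viewers@osdu.example.com");
        List<String> owners = List.of("data.default.owners@osdu.example.com");

        // A "no-data-access-tester" style user with no matching data groups
        // must be denied, regardless of the service account's GCS rights.
        Set<String> noAccessUser = Set.of("users@osdu.example.com");
        System.out.println(hasViewerAccess(noAccessUser, viewers, owners)); // false

        Set<String> viewer = Set.of("data.default.viewers@osdu.example.com");
        System.out.println(hasViewerAccess(viewer, viewers, owners)); // true
    }
}
```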
We should get rid of this by carrying out the following refactoring tasks:
- For developers:
- drop the "isEnableImpersonalization" boolean and remove every mention of it from the code, thereby making impersonalization the only available mode, since all data requests will come only from the service account under which the service is running:
- GoogleCloudStorage.class (and new ObmStorage.class) - multiple places;
- GoogleCloudStorageTest.class - one place;
- provider/storage-gcp/application.properties - "osdu.gcp.storage.gcs.enable-impersonalization" property definition
- insert an Entitlements+ACL check of end-user rights for operations on Records in the places in the code where it was wrongly omitted (in the expectation that end-user rights to physical storage would be reliably verified). Correct validations are easily added using the DataAuthorizationService methods (validateOwnerAccess, validateViewerOrOwnerAccess, hasAccess) and/or the private method GoogleCloudStorage#validateMetadata(), which are already used for this purpose, but not everywhere.
At a minimum, checks need to be added in the methods:
- GoogleCloudStorage # read (RecordMetadata record, Long version, boolean checkDataInconsistency);
- GoogleCloudStorage # hasAccess (RecordMetadata ... records)
- DevOps:
- stop the practice of giving end-user accounts any rights to physical storage.
- check the configurations of the business user accounts in IAM and revoke all privileges to cloud resources.

Milestone: M10 - Release 0.13. Assignee: Rostislav Dublin (EPAM).

**Issue #120: Inconsistent behavior of storage PUT when skipdupes is passed as true** (Mandar Kulkarni, 2022-08-26)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/120

Storage PUT API has an optional query parameter called [skipdupes](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/docs/tutorial/StorageService.md#using-skipdupes).
Current behavior of storage PUT API to update existing record is:
If skipdupes is passed as true and the data and meta blocks in the input request are the same as the existing record content, the record update is skipped.
However, the update is also skipped when the user passes different legal, acl, or tags block content in the input request while the data and meta content matches the existing record.
(This happens because when skipdupes is passed as true, the storage service compares only data and meta blocks of the incoming and existing records and not all the blocks in the record.)
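The described comparison can be sketched as follows (an illustration of the current behavior, not the actual service code):

```java
import java.util.Map;
import java.util.Objects;

public class SkipDupesCheck {
    // With skipdupes=true, only the data and meta blocks are compared, so a
    // request that changes acl/legal/tags but leaves data unchanged is
    // (incorrectly) treated as a duplicate.
    static boolean isDuplicate(Map<String, ?> existing, Map<String, ?> incoming) {
        return Objects.equals(existing.get("data"), incoming.get("data"))
            && Objects.equals(existing.get("meta"), incoming.get("meta"));
    }

    public static void main(String[] args) {
        Map<String, Object> existing =
            Map.of("data", Map.of("Name", "well-1"), "acl", "acl-A");
        Map<String, Object> incoming =
            Map.of("data", Map.of("Name", "well-1"), "acl", "acl-B");
        // Same data block, different acl: update is skipped anyway.
        System.out.println(isDuplicate(existing, incoming)); // true
    }
}
```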
Expected behavior:
If skipdupes is passed as true, both data and meta blocks should be compared. If the data block is the same but the legal, acl, or tags blocks are different, the record should still be updated. To keep the behavior in sync with the PATCH API, the record version should not be bumped when only the tags, legal, or acl blocks change.

**Issue #215: Increase timeout for storage service requests** (Sudesh Tagadpallewar, 2024-02-01)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/215

When registering a dataset using `/registerDataset`, some users are getting a 400 error. According to the logs, the request times out (with the error **Unexpected error sending to URL http://storage/api/storage/v2/records METHOD PUT error java.net.SocketTimeoutException: Read timed out**) when it tries to upsertRecord in Storage.
We have found that when the dataset service calls the storage service and the call takes more than 5 seconds, a SocketTimeoutException results.
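For illustration only (core-common's HttpClient is its own class, not the JDK's), the kind of explicit read timeout the fix calls for looks like this with the JDK HTTP client:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class TimeoutConfig {
    // An explicit per-request timeout overrides the library default -- the
    // kind of change the issue asks for (5 s default -> 60 s).
    static HttpRequest buildPutWithTimeout(String url, Duration timeout) {
        return HttpRequest.newBuilder()
            .uri(URI.create(url))
            .timeout(timeout) // read/response timeout for this request
            .PUT(HttpRequest.BodyPublishers.ofString("{}"))
            .build();
    }

    public static void main(String[] args) {
        HttpRequest put = buildPutWithTimeout(
            "http://storage/api/storage/v2/records", Duration.ofSeconds(60));
        System.out.println(put.timeout().get().getSeconds()); // 60
    }
}
```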
When creating the `StorageService` instance using `StorageFactory`, a new `HttpClient()` instance is used, which has a default timeout of 5 seconds. Instead of a new `HttpClient` instance, an `HttpClientHandler` instance should have been used, which has a 60-second timeout. This code is present in the core-common library. See the attached image for reference. ![storage](/uploads/5d81a52c9a968975ad40a538088a57dc/storage.JPG)

**Issue #98: Indexer failures with 5XX error codes should be searchable** (Larissa Pereira, 2022-02-28)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/98

Some records fail indexing due to unknown (5xx) errors. As of today, these records are then not searchable and not identified as indexing errors. Records that failed with 5XX errors should be searchable with default indexing fields.
## Context
Currently, when the schema or storage service throws a 500 error, indexer stops right away. After a few retries, indexer ignores this record. Thus, the records that failed with index status 500 are not searchable.
## Proposed solution
We can extend the record changed/updated event to also contain the acl and legal tag sections of the record. This will enable the indexer to index the failed record (assuming no schema exists) with the right metadata, thereby making the record searchable. The indexer will use the id, kind, acl, and tags properties to index this record, and the trace will be available to view as a result of the query "index.status = 500".

**Issue #153: Indexer fetch records requests should not be checked via OPA/Policy (or any other service that sends internal requests)** (Rustam Lotsmanenko (EPAM), 2023-03-06)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/153

**Problem:**
Currently, the Storage service will evaluate policies for service requests of the Indexer service, which doesn't make sense since the indexer should be able to fetch any record ingested to the platform.
Indexer fetch requests use the common request authentication flow when OPA integration is enabled:
https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/src/main/java/org/opengroup/osdu/storage/opa/service/OPAServiceImpl.java#L104
~~~
http://localhost:8181/v1/data/osdu/partition/osdu/dataauthz/records
{
"input": {
"operation": "view",
"token": "indexer-service-token",
"datapartitionid": "osdu",
"records": [{
"id": "osdu:master-data--Well:999907686759",
"kind": "osdu:wks:master-data--Well:1.0.0",
"legal": {
"legaltags": ["osdu-demo-legaltag"],
"otherRelevantDataCountries": ["US"],
"status": "compliant"
},
"acls": {
"viewers": ["data.default.viewers@osdu.osdu-gcp.go3-nrg.projects.epam.com"],
"owners": ["data.default.owners@osdu.osdu-gcp.go3-nrg.projects.epam.com"]
}
}
]
}
}
~~~
And it is possible that Indexer will not be authorized to fetch records:
~~~
HttpResponse(headers = {
  null = [HTTP/1.1 200 OK],
  Content-Length = [305],
  Date = [Tue, 29 Nov 2022 10:58:31 GMT],
  Content-Type = [application/json]
}, body = {
  "result": [{
      "errors": [{
          "code": 401,
          "id": "osdu:master-data--Well:999907686759",
          "message": "Legal response 401 {\"code\":401,\"reason\":\"Unauthorized\",\"message\":\"The user is not authorized to perform this action\"}",
          "reason": "Error from compliance service"
        }
      ],
      "id": "osdu:master-data--Well:999907686759"
    }
  ]
}, contentType = application/json, responseCode = 200, exception = null, request = http://localhost:8181/v1/data/osdu/partition/osdu/dataauthz/records, httpMethod=POST, latency=812)
~~~
And will receive an empty response:
~~~
{
"records": [],
"notFound": [
"osdu:master-data--Well:999907686759"
],
"conversionStatuses": []
}
~~~
This leaves records not indexed and not searchable. Scenarios in which this occurs look quite easy to hit, for example when the record uses ACLs that don't belong to the service token.
**Solution:**
We need to bypass OPA/Policy authentication for internal service requests.

Milestone: M16 - Release 0.19. Assignees: Rustam Lotsmanenko (EPAM), Riabokon Stanislav (EPAM) [GCP].

**Issue #34: InMemory cache of schema resulting in unexpected behavior** (Kishore Battula, 2020-12-15)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/34

Schemas are cached in memory of the running application. Because of this in-memory caching, the following scenario will fail. Assume we have two instances of the application running, I1 and I2.
1. Create schema - Lands on I1
2. Get schema - Lands on I1. This schema is cached on I1
3. Delete schema - Lands on I2. This will try to delete any cache entry for that schema in I2. So far the cache entry on I1 is still intact.
4. Get schema - Lands on I1. As the cache entry is not cleared it will return 200 instead of 404 not found.
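The four steps can be reproduced with two plain in-memory maps standing in for the per-instance caches (names are illustrative, not the service's actual cache classes):

```java
import java.util.HashMap;
import java.util.Map;

public class StaleCacheDemo {
    final Map<String, String> cacheI1 = new HashMap<>();
    final Map<String, String> cacheI2 = new HashMap<>();
    final Map<String, String> database = new HashMap<>();

    boolean staleHitAfterDeleteOnOtherInstance() {
        database.put("schema-A", "{}");                     // 1. create lands on I1
        cacheI1.put("schema-A", database.get("schema-A"));  // 2. get lands on I1, cached
        database.remove("schema-A");                        // 3. delete lands on I2...
        cacheI2.remove("schema-A");                         //    ...clearing only I2's cache
        // 4. get lands on I1 again: the stale entry is still present,
        //    so the service would answer 200 instead of 404.
        return cacheI1.containsKey("schema-A") && !database.containsKey("schema-A");
    }

    public static void main(String[] args) {
        System.out.println(new StaleCacheDemo().staleHitAfterDeleteOnOtherInstance()); // true
    }
}
```

A shared cache (or cache invalidation broadcast across instances) removes the divergence between I1 and I2.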
The test below, along with other tests, fails intermittently because of the above-mentioned issue.
- should_createSchema_and_returnHttp409IfTryToCreateItAgain_and_getSchema_and_deleteSchema_when_providingValidSchemaInfo
If the service is running on multiple VMs or Pods, this issue will be prominent, will block pipelines, and results in unexpected behavior.

**Issue #47: Integrate storage service with policy service** (Hrvoje Markovic, 2021-03-08)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/47

This issue captures the task of integrating the storage service with the policy service as described in the [ADR](https://community.opengroup.org/osdu/platform/security-and-compliance/home/-/issues/46).

**Issue #177: Integration test coverage for users.data.root** (Rustam Lotsmanenko (EPAM), 2023-07-20)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/177

Changes to data authentication were recently introduced with the merge request: https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/694. However, we currently lack integration test cases to cover these modifications.
It is essential to ensure that these changes won't disrupt the current flow and that `users.data.root` will consistently have access to ingested data.
To address this, we need to implement integration test cases to cover the new data authentication mechanisms.

Milestone: M20 - Release 0.23. Assignee: Rustam Lotsmanenko (EPAM).

**Issue #56: Intermittent get record api failure** (Neelesh Thakur, 2022-08-23)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/56

We are seeing intermittent get record request failures even though the record exists in CosmosDB. The issue is intermittent and cannot be consistently reproduced. The same record that exists cannot be retrieved initially but starts working after some time.
This also has a potential impact on the Indexer service. We have an integrated workflow - all records that are ingested (via the storage service) must be indexed and discoverable via the search service. If we have these intermittent failures, then the indexer service must be configured to retry in this scenario, otherwise it will lead to an inconsistent state.

**Issue #66: [Intermittent] Record Metadata is available in Cosmos but the Blob store returns a 404.** (Krishna Nikhil Vedurumudi, 2022-09-27)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/66

If record metadata exists but the actual record doesn't exist in BlobStore, the FetchBatchRecords API returns a 500 with the following response:
```
{
"code": 500,
"reason": "Unable to process parallel blob download",
"message": "AppException(error=AppError(code=404, reason=Specified blob was not found, message=Status code 404, \"<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>BlobNotFound</Code><Message>The specified blob does not exist._RequestId:580b9915-f01e-0009-2c0a-3c65a8000000_Time:2021-04-28T08:45:41.2917696Z</Message></Error>\", errors=null, debuggingInfo=null, originalException=com.azure.storage.blob.models.BlobStorageException: Status code 404, \"<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>BlobNotFound</Code><Message>The specified blob does not exist._RequestId:580b9915-f01e-0009-2c0a-3c65a8000000_Time:2021-04-28T08:45:41.2917696Z</Message></Error>\"), originalException=com.azure.storage.blob.models.BlobStorageException: Status code 404, \"<?xml version=\"1.0\" encoding=\"utf-8\"?><Error><Code>BlobNotFound</Code><Message>The specified blob does not exist._RequestId:580b9915-f01e-0009-2c0a-3c65a8000000_Time:2021-04-28T08:45:41.2917696Z</Message></Error>\")"
}
```
A couple of issues to investigate / fix:
- The PersistentServiceImpl ensures that if the blob write has failed, the Cosmos DB update will not happen. How did we run into this inconsistency?
- If one blob does not exist, the entire FetchBatchRecords call should not fail with a 500.
- Error messages for 5xx should always be standard. So a 500 in this case should be "Internal Server Error".

**Issue #107: Intermittent record not found errors in Storage batch API** (An Ngo, 2022-11-21)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/107

An error has been reported on the Storage query/records:batch API where the user sometimes is not able to retrieve a few records. The same records can be fetched at a later time. The Storage service responds with a record not found error, and this impacts 1% of the requests.
**Job details to reproduce the error:**
- Number of records: 14K
- Storage batch size: 20
- Number of threads: 10
- Record size: few KBs (well head information)

Assignee: Neelesh Thakur.

**Issue #170: Invalidate derived data when parent record is deleted** (An Ngo, 2023-03-31)
https://community.opengroup.org/osdu/platform/system/storage/-/issues/170

Derived data (records with ancestry/parent) inherit the legal tags from the parent record(s).
So when at least one of the parent records is deleted, the child records are no longer valid. Without this step, records with invalid legal tags (or no legal tag) still exist in the system.