**cloud issues** — https://community.opengroup.org/groups/osdu/platform/system/lib/cloud/-/issues (updated 2024-02-27T08:23:54Z)

**[ADR: OSDU API Versioning Strategy from Service Integration Perspective.](https://community.opengroup.org/osdu/platform/system/lib/cloud/gcp/os-core-lib-gcp/-/issues/7)**
2024-02-27T08:23:54Z · Rustam Lotsmanenko (EPAM) (rustam_lotsmanenko@epam.com)

TBD

**[Undelete (restore) of soft deleted blob doesn't work anymore](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/32)**
2023-07-19T19:28:41Z · Alok Joshi

Please refer to [this](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/merge_requests/216) original implementation of the feature.
We started seeing issues with the restore blob functionality in our environments: every restore request fails with an unexpected `com.fasterxml.jackson.core.JsonParseException`. Investigation narrowed the root cause down to [this](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/merge_requests/278) change, specifically the upgrade of the `azure-core` version.
Steps to reproduce the issue (use Storage service APIs):
- Create a record
- Manually soft-delete only the record blob (don't touch cosmos metadata)
- Get record
The expected behavior is that the Storage service attempts to restore the blob and the getRecord API returns successfully. This isn't happening at the moment due to the issue mentioned above.
None of the later versions of `azure-core` fixes this; the issue appears to persist with any version `1.35.0` and higher.
There are no observed vulnerabilities with `1.34.0`, so we suggest reverting to this version.
~~[MR](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/merge_requests/300) for fix (abandoned)~~
After some discussion, we decided to make this [fix](https://community.opengroup.org/osdu/platform/system/storage/-/merge_requests/726) exclusively in Storage.
Milestone: M19 - Release 0.22 · Assignee: Alok Joshi

**[Improper return value prevents local mode operation](https://community.opengroup.org/osdu/platform/system/lib/cloud/aws/os-core-lib-aws/-/issues/8)**
2023-07-11T14:51:14Z · Mark Chance

With LOCAL_MODE true, the Search Service (and possibly others) does not start, due to a NullPointerException in this library.
In `org.opengroup.osdu.core.aws.ssm.K8sLocalParameterProvider:93`:

```java
// this is for credentials, credentials mounted by CSI is a json string
// if in local mode, it returns an empty HashMap, it is the responsibility of end user to getDefault
public Map<String, String> getCredentialsAsMap(String parameterKey) throws K8sParameterNotFoundException, JsonProcessingException {
if (localMode) {
return null;
}
    return objectMapper.readValue(this.getParameterAsString(parameterKey), typeRef);
}
```
Despite the comment to the contrary, the null return value causes problems, for example in
`org.opengroup.osdu.core.aws.entitlements.ServiceAccountJwtAwsClientImpl:50`:
```java
private void init() {
K8sLocalParameterProvider provider = new K8sLocalParameterProvider();
try {
client_credentials_clientid = provider.getParameterAsString("CLIENT_CREDENTIALS_ID");
client_credentials_secret = provider.getCredentialsAsMap("CLIENT_CREDENTIALS_SECRET").get("client_credentials_client_secret");
...
```
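Returning an empty map instead of null at the provider level avoids the NPE at chained call sites like the one above. A minimal self-contained sketch of that behavior (class names simplified, not the actual library code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal reproduction of the provider behavior described above.
class LocalParameterProvider {
    private final boolean localMode;

    LocalParameterProvider(boolean localMode) {
        this.localMode = localMode;
    }

    // Returning an empty map (instead of null) keeps chained calls like
    // getCredentialsAsMap(...).get(...) from throwing NullPointerException.
    Map<String, String> getCredentialsAsMap(String parameterKey) {
        if (localMode) {
            return new HashMap<>();
        }
        // The real implementation would deserialize the mounted JSON secret here;
        // not modeled in this sketch.
        throw new UnsupportedOperationException("non-local mode not modeled");
    }
}

public class Demo {
    public static void main(String[] args) {
        Map<String, String> creds =
                new LocalParameterProvider(true).getCredentialsAsMap("CLIENT_CREDENTIALS_SECRET");
        // .get(...) on an empty map yields null instead of throwing an NPE:
        System.out.println(creds.get("client_credentials_client_secret")); // prints "null"
    }
}
```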
If we change `return null;` to `return new HashMap<>();`, it works better.
Assignee: Yong Zeng

**[use of RequestMappingHandlerMapping in Slf4jMDCFilter](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/30)**
2023-03-09T18:33:59Z · Neelesh Thakur

`Slf4jMDCFilter` is used in quite a few services and is invoked on each API request. It is throwing the following exception:
![image](/uploads/f955b40cbc4d8652ca927fa26c00a111/image.png)

Based on traffic, we are seeing millions of such exceptions thrown across different services in several environments.

**[Minimize Request Units (RUs) consumption on CosmosStore pageable query method](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/29)**
2023-03-09T18:37:49Z · Yurii Kondakov

Currently, using the existing synchronous CosmosClient in the CosmosStore#queryItemsPage method, a number of extra (prefetch) queries are performed when retrieving records from the database page by page. For example, when we retrieve 5000 records at 1000 records per page, the total number of queries executed against the database is 15 (not the 5 we expect).
As the number of records grows, the number of queries grows faster:
- 10000 records at 1000 records per page -> 45 requests (35 requests are unnecessary)
- 20000 records at 1000 records per page -> 105 requests (85 requests are unnecessary)
This increases the consumption of Cosmos DB Request Units.
To prevent the extra (prefetch) queries, we need to add an asynchronous client and an appropriate method for retrieving records from the database using it.
This solution was proposed in e-mail correspondence with the Microsoft Azure team.
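Independent of the Azure SDK specifics, the desired accounting can be illustrated with a generic continuation-token pager that issues exactly one backend query per requested page and no prefetch (names are illustrative, not the Cosmos SDK API):

```java
import java.util.ArrayList;
import java.util.List;

// Generic sketch: a pager driven by a continuation offset, with no prefetch.
class Pager {
    int queryCount = 0; // instrument how many backend queries actually run
    private final int totalRecords;

    Pager(int totalRecords) {
        this.totalRecords = totalRecords;
    }

    // Returns one page of record ids starting at the continuation offset.
    List<Integer> fetchPage(int continuation, int pageSize) {
        queryCount++; // each requested page costs exactly one backend query (one RU charge)
        List<Integer> page = new ArrayList<>();
        for (int i = continuation; i < Math.min(continuation + pageSize, totalRecords); i++) {
            page.add(i);
        }
        return page;
    }
}

public class PagerDemo {
    public static void main(String[] args) {
        Pager pager = new Pager(5000);
        int fetched = 0;
        int continuation = 0;
        while (fetched < 5000) {
            List<Integer> page = pager.fetchPage(continuation, 1000);
            fetched += page.size();
            continuation += page.size();
        }
        // 5000 records at 1000 per page should cost exactly 5 queries, not 15.
        System.out.println(pager.queryCount); // prints 5
    }
}
```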
https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/merge_requests/273
Assignee: Yurii Kondakov

**[Improve performance on CosmosStore pageable query method](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/28)**
2022-09-28T15:48:40Z · Rafael Freire

Currently, CosmosStore has a queryItemsPage method which interacts with a BlockingIterable object twice per call. This object uses a thread-sleep blocking approach to iterate through page results, causing slow performance for heavy queries. Its implementation can be improved by replacing the check-then-use style with a for-each approach.
Assignee: Rafael Freire

**[Exception handling](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/25)**
2022-07-28T19:09:58Z · Arsen Grigoryan

When the getIdToken method was called with invalid parameters (especially "clientSecret"), it threw a NullPointerException. To resolve this, logic was added that checks the response status: if it is not 200, an AppException is returned.
Assignee: Arsen Grigoryan

**[Make system token generation more robust](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/24)**
2022-08-23T15:11:30Z · Arsen Grigoryan

**Summary**: There was a problem generating a credential to access the Partition service from a cached secret, because of secret rotation.
The service principal is used to access the Partition service to retrieve partition-specific credentials for ECK.
The Java service reads the environment variable once at start-up. If the service principal is changed (rotated) after startup, the service can no longer access the Partition service. A change of the service principal requires a restart of the service (restart of all pods) to force a re-read of its value. This causes unnecessary downtime (errors or permission failures returned to the client).
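A sketch of the more robust token generation described in the proposed solution below: use the cached secret first and, on a 401, re-read the secret from its source and retry once (all names are illustrative, not the actual library API):

```java
import java.util.function.Supplier;

// Hypothetical sketch: cached secret with refresh-and-retry on 401.
class TokenGenerator {
    private String cachedSecret;
    private final Supplier<String> secretSource; // e.g. re-reads the rotated service principal

    TokenGenerator(String initialSecret, Supplier<String> secretSource) {
        this.cachedSecret = initialSecret;
        this.secretSource = secretSource;
    }

    String getToken() {
        try {
            return requestToken(cachedSecret);
        } catch (UnauthorizedException e) { // i.e. HTTP 401 from the token endpoint
            cachedSecret = secretSource.get(); // pick up the rotated value
            return requestToken(cachedSecret); // single retry with the fresh secret
        }
    }

    // Stand-in for the real token endpoint call.
    private String requestToken(String secret) {
        if (!"rotated-secret".equals(secret)) {
            throw new UnauthorizedException();
        }
        return "token-for-" + secret;
    }

    static class UnauthorizedException extends RuntimeException {}
}

public class TokenDemo {
    public static void main(String[] args) {
        // The cached value is stale (rotated away); the generator recovers transparently.
        TokenGenerator gen = new TokenGenerator("stale-secret", () -> "rotated-secret");
        System.out.println(gen.getToken()); // prints "token-for-rotated-secret"
    }
}
```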
**Proposed solution**: The service must be robust enough to pick up a new service principal value when the old one fails. Token generation should attempt to retrieve the key from the source if generation returns 401 after using the cached secret.

**[Lack of retries control in AbstractMessageHandler](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/22)**
2022-06-07T12:36:26Z · Yauheni Lesnikau

In AbstractMessageHandler we have a retry mechanism, but the current implementation contains one drawback:
after a failed processing attempt we abandon the message; in that case we cannot control repeated consumption, and there is a high chance the message will be redelivered immediately.
We do a thread sleep to get exponential-backoff behavior, but this is inefficient because the thread appears busy during the sleep. On the other hand, we cannot control the max delivery count from code, because it is an attribute of the topic subscription, not of our services.
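A non-blocking alternative to sleeping the handler thread is to schedule each retry with an exponentially growing delay. A generic sketch (not the Service Bus SDK):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Generic sketch: retries are scheduled tasks, so no thread is kept busy sleeping.
public class BackoffDemo {
    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor();

    static void processWithRetry(Runnable work, int attempt, int maxAttempts,
                                 long baseDelayMillis, CountDownLatch done) {
        try {
            work.run();
            done.countDown();
        } catch (RuntimeException e) {
            if (attempt + 1 >= maxAttempts) {
                done.countDown();
                return;
            }
            long delay = baseDelayMillis << attempt; // exponential backoff: base * 2^attempt
            // The thread is free while waiting; the retry is just a scheduled task.
            SCHEDULER.schedule(
                    () -> processWithRetry(work, attempt + 1, maxAttempts, baseDelayMillis, done),
                    delay, TimeUnit.MILLISECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        int[] calls = {0};
        // Simulated handler: fails twice, then succeeds on the third attempt.
        processWithRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient failure");
        }, 0, 5, 10, done);
        done.await();
        SCHEDULER.shutdown();
        System.out.println(calls[0]); // prints 3
    }
}
```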
It would be good to have more control over retry behavior, primarily through the Service Bus client infrastructure.
Assignee: Yauheni Lesnikau

**[0.13.0 Getting NoSuchMethodError at com.azure.storage.blob.specialized.BlobAsyncClientBase.lambda$downloadStreamWithResponse$22](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/20)**
2022-08-30T11:52:13Z · Tsvetelina Ivanova

When using version 0.13.0, we get a NoSuchMethodError exception at com.azure.storage.blob.specialized.BlobAsyncClientBase.lambda$downloadStreamWithResponse$22 when trying to read files from blob storage.
In version 0.13.0, a new version of azure-storage-blob (12.13.0) is introduced. In class BlobAsyncClientBase, on line 1069, a call to FluxUtil.createRetriableDownloadFlux() is made.
The method createRetriableDownloadFlux() does not exist in class FluxUtil. (This class is in the azure-core library.)
**Stack Trace:**

```
java.lang.NoSuchMethodError: com.azure.core.util.FluxUtil.createRetriableDownloadFlux(Ljava/util/function/Supplier;Ljava/util/function/BiFunction;IJ)Lreactor/core/publisher/Flux;
	at com.azure.storage.blob.specialized.BlobAsyncClientBase.lambda$downloadStreamWithResponse$22(BlobAsyncClientBase.java:1069)
	at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:113)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
	at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
	at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
	at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2194)
	at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2068)
	at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:192)
	at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53)
	at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
	at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
	at reactor.core.publisher.FluxDoOnEach$DoOnEachSubscriber.onNext(FluxDoOnEach.java:173)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
	at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
	at reactor.core.publisher.FluxDelaySubscription$DelaySubscriptionMainSubscriber.onNext(FluxDelaySubscription.java:189)
	at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
	at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
	at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.onNext(FluxTimeout.java:180)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
	at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
	at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:127)
	at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:127)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
	at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
	at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
	at reactor.core.publisher.Operators$MonoInnerProducerBase.complete(Operators.java:2664)
	at reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:180)
	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2400)
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onSubscribeInner(MonoFlatMapMany.java:150)
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:189)
	at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
	at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
	at reactor.core.publisher.MonoCreate$DefaultMonoSink.success(MonoCreate.java:165)
	at reactor.netty.http.client.HttpClientConnect$HttpIOHandlerObserver.onStateChange(HttpClientConnect.java:414)
	at reactor.netty.ReactorNetty$CompositeConnectionObserver.onStateChange(ReactorNetty.java:671)
	at reactor.netty.resources.DefaultPooledConnectionProvider$DisposableAcquire.onStateChange(DefaultPooledConnectionProvider.java:201)
	at reactor.netty.resources.DefaultPooledConnectionProvider$PooledConnection.onStateChange(DefaultPooledConnectionProvider.java:457)
	at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:637)
	at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:432)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1371)
	at io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1245)
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285)
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.kqueue.AbstractKQueueStreamChannel$KQueueStreamUnsafe.readReady(AbstractKQueueStreamChannel.java:544)
	at io.netty.channel.kqueue.AbstractKQueueChannel$AbstractKQueueUnsafe.readReady(AbstractKQueueChannel.java:381)
	at io.netty.channel.kqueue.KQueueEventLoop.processReady(KQueueEventLoop.java:211)
	at io.netty.channel.kqueue.KQueueEventLoop.run(KQueueEventLoop.java:289)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
```

Milestone: M14 - Release 0.17

**[Unit service fails to start.](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/19)**
2022-02-14T05:41:51Z · Rostislav Vatolin (vatolinrp@gmail.com)

Unit service fails to start. Exception:
```
Application run failed org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'buildDataLakeClientFactory' defined in class path resource [org/opengroup/osdu/azure/datalakestorage/DataLakeProvider.class]: Unsatisfied dependency expressed through method 'buildDataLakeClientFactory' parameter 1; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'partitionServiceClient': Unsatisfied dependency expressed through field 'partitionFactory'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'partitionServiceFactory': Unsatisfied dependency expressed through field 'partitionServiceConfiguration'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'partitionServiceConfiguration': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'PARTITION_API' in value "${PARTITION_API}"
```
Please make sure the DataLakeProvider Spring bean is a conditional bean, because it causes the error.

**[Version Incompatibility with azure blob storage library in multiple services](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/18)**
2022-07-11T19:53:59Z · harshit aggarwal

Multiple services (storage and legal known so far) are facing issues after upgrading to the latest core lib version, due to failures while connecting to the blob store.
The potential cause seems to be a recent version upgrade in the blob libraries:
https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/merge_requests/175/diffs#442292b8a7efeabbe4cc176709b833b1792140ec
cc: @kibattul
Assignee: Harshit Saxena

**[Upgrade to Log4J 2.17.1 to address CVE-2021-44832](https://community.opengroup.org/osdu/platform/system/lib/cloud/gcp/osm/-/issues/1)**
2022-08-23T21:24:06Z · David Diederich (d.diederich@opengroup.org)

Apache Log4j2 versions 2.0-beta7 through 2.17.0 (excluding security fix releases 2.3.2 and 2.12.4) are vulnerable to a remote code execution (RCE) attack when a configuration uses a JDBC Appender with a JNDI LDAP data source URI when an attacker has control of the target LDAP server.
This issue is fixed by limiting JNDI data source names to the java protocol in Log4j2 versions 2.17.1, 2.12.4, and 2.3.2.
_(Description from [nist.gov](https://nvd.nist.gov/vuln/detail/CVE-2021-44832) for CVE-2021-44832)_

**[Upgrade to Log4J 2.17.1 to address CVE-2021-44832](https://community.opengroup.org/osdu/platform/system/lib/cloud/gcp/oqm/-/issues/1)**
2022-08-23T21:24:23Z · David Diederich (d.diederich@opengroup.org)

Apache Log4j2 versions 2.0-beta7 through 2.17.0 (excluding security fix releases 2.3.2 and 2.12.4) are vulnerable to a remote code execution (RCE) attack when a configuration uses a JDBC Appender with a JNDI LDAP data source URI when an attacker has control of the target LDAP server.
This issue is fixed by limiting JNDI data source names to the java protocol in Log4j2 versions 2.17.1, 2.12.4, and 2.3.2.
_(Description from [nist.gov](https://nvd.nist.gov/vuln/detail/CVE-2021-44832) for CVE-2021-44832)_
Milestone: M10 - Release 0.13

**[Upgrade to Log4J 2.17.1 to address CVE-2021-44832](https://community.opengroup.org/osdu/platform/system/lib/cloud/gcp/obm/-/issues/1)**
2022-08-23T21:24:22Z · David Diederich (d.diederich@opengroup.org)

Apache Log4j2 versions 2.0-beta7 through 2.17.0 (excluding security fix releases 2.3.2 and 2.12.4) are vulnerable to a remote code execution (RCE) attack when a configuration uses a JDBC Appender with a JNDI LDAP data source URI when an attacker has control of the target LDAP server.
This issue is fixed by limiting JNDI data source names to the java protocol in Log4j2 versions 2.17.1, 2.12.4, and 2.3.2.
_(Description from [nist.gov](https://nvd.nist.gov/vuln/detail/CVE-2021-44832) for CVE-2021-44832)_
Milestone: M10 - Release 0.13

**[Issue in Publisher Facade](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/17)**
2022-01-21T09:59:41Z · Abhishek Kumar (SLB)

Services using the Publisher Facade from this MR.
They would fail with the below error:
```
***************************
APPLICATION FAILED TO START
***************************
Description:
Field pubSubAttributesBuilder in org.opengroup.osdu.azure.publisherFacade.EventGridPublisher required a bean of type 'org.opengroup.osdu.azure.publisherFacade.models.PubSubAttributesBuilder' that could not be found.
The injection point has the following annotations:
- @org.springframework.beans.factory.annotation.Autowired(required=true)
Action:
Consider defining a bean of type 'org.opengroup.osdu.azure.publisherFacade.models.PubSubAttributesBuilder' in your configuration.
```
**Root cause:**
Autowiring a bean that is not declared as a Spring bean:

```java
@Autowired
private PubSubAttributesBuilder pubSubAttributesBuilder;
```

while the class is only annotated as:

```java
@Lazy
@Builder
public class PubSubAttributesBuilder {
```
**Solution:**
Remove the unused reference to `private PubSubAttributesBuilder pubSubAttributesBuilder` from `src/main/java/org/opengroup/osdu/azure/publisherFacade/EventGridPublisher.java` and `src/main/java/org/opengroup/osdu/azure/publisherFacade/ServiceBusPublisher.java`.
Assignee: Nikhil Singh [Microsoft]

**[Upgrade to Log4J 2.17.1 to address CVE-2021-44832](https://community.opengroup.org/osdu/platform/system/lib/cloud/ibm/os-core-lib-ibm/-/issues/4)**
2022-01-18T19:13:03Z · David Diederich (d.diederich@opengroup.org)

Apache Log4j2 versions 2.0-beta7 through 2.17.0 (excluding security fix releases 2.3.2 and 2.12.4) are vulnerable to a remote code execution (RCE) attack when a configuration uses a JDBC Appender with a JNDI LDAP data source URI when an attacker has control of the target LDAP server.
This issue is fixed by limiting JNDI data source names to the java protocol in Log4j2 versions 2.17.1, 2.12.4, and 2.3.2.
_(Description from [nist.gov](https://nvd.nist.gov/vuln/detail/CVE-2021-44832) for CVE-2021-44832)_
Milestone: M10 - Release 0.13

**[Upgrade to Log4J 2.17.1 to address CVE-2021-44832](https://community.opengroup.org/osdu/platform/system/lib/cloud/gcp/os-test-core-lib-gcp/-/issues/4)**
2022-01-18T19:12:57Z · David Diederich (d.diederich@opengroup.org)

Apache Log4j2 versions 2.0-beta7 through 2.17.0 (excluding security fix releases 2.3.2 and 2.12.4) are vulnerable to a remote code execution (RCE) attack when a configuration uses a JDBC Appender with a JNDI LDAP data source URI when an attacker has control of the target LDAP server.
This issue is fixed by limiting JNDI data source names to the java protocol in Log4j2 versions 2.17.1, 2.12.4, and 2.3.2.
_(Description from [nist.gov](https://nvd.nist.gov/vuln/detail/CVE-2021-44832) for CVE-2021-44832)_

**[Upgrade to Log4J 2.17.1 to address CVE-2021-44832](https://community.opengroup.org/osdu/platform/system/lib/cloud/gcp/os-core-lib-gcp/-/issues/4)**
2022-01-18T19:12:59Z · David Diederich (d.diederich@opengroup.org)

Apache Log4j2 versions 2.0-beta7 through 2.17.0 (excluding security fix releases 2.3.2 and 2.12.4) are vulnerable to a remote code execution (RCE) attack when a configuration uses a JDBC Appender with a JNDI LDAP data source URI when an attacker has control of the target LDAP server.
This issue is fixed by limiting JNDI data source names to the java protocol in Log4j2 versions 2.17.1, 2.12.4, and 2.3.2.
_(Description from [nist.gov](https://nvd.nist.gov/vuln/detail/CVE-2021-44832) for CVE-2021-44832)_
Milestone: M10 - Release 0.13

**[Upgrade to Log4J 2.17.1 to address CVE-2021-44832](https://community.opengroup.org/osdu/platform/system/lib/cloud/azure/os-core-lib-azure/-/issues/16)**
2022-01-18T19:23:41Z · David Diederich (d.diederich@opengroup.org)

Apache Log4j2 versions 2.0-beta7 through 2.17.0 (excluding security fix releases 2.3.2 and 2.12.4) are vulnerable to a remote code execution (RCE) attack when a configuration uses a JDBC Appender with a JNDI LDAP data source URI when an attacker has control of the target LDAP server.
This issue is fixed by limiting JNDI data source names to the java protocol in Log4j2 versions 2.17.1, 2.12.4, and 2.3.2.
_(Description from [nist.gov](https://nvd.nist.gov/vuln/detail/CVE-2021-44832) for CVE-2021-44832)_
Milestone: M10 - Release 0.13
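For the Maven-based libraries above, the upgrade is typically applied by pinning the Log4j2 version, for example by importing the Log4j BOM in `dependencyManagement` (a sketch; adapt to each project's build, and note CVE-2021-44832 is fixed in 2.17.1):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin all Log4j2 modules to one fixed version via the official BOM -->
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-bom</artifactId>
      <version>2.17.1</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```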