

Memory leak in QuickOPC NuGet package

23 Nov 2022 13:12 #11232 by janko.mihelic@adnet.hr
We started our application on 18 Nov at 18:26 and its memory climbed from 57.9 MB to 89 MB.
On Monday, 21 Nov, we started the test console application provided in the first GitHub repo, but with the [MTAThread] attribute; its memory rose at the same rate as our application's.
We are connected to a Prosys OPC UA Simulation Server and are only subscribed to events. We are receiving one event approximately every 20 seconds.
Everything is running on a virtual machine with more than enough memory and processing power, using Windows Server 2022 Standard.
We are attaching images covering a period of almost 48 hours; each image has the date and time it was taken in its name. The red line is our application, which uses a bit more memory because it was started 3 days earlier and is much bigger by itself. The green line is the test console application with just the client and the subscription.

We are continuing the test and will reply with additional data within a few days. We are posting this now in the hope of finding a solution sooner; if you have any idea what the problem could be, any advice is welcome. We don't think that after stopping all communication between client and server the memory consumption should stay the same, nor that under normal conditions, with only events and one event every 20 seconds, our memory consumption should rise by approximately 0.25 MB an hour for 5 days with no signs of falling.

21 Nov 2022 12:46 #11222 by support
I am sorry, but again, the charts cannot be used to draw a conclusion. Their range covers at most tens of minutes.

I need to see a chart at least 36 hours long, ideally longer.

Regards.

21 Nov 2022 12:18 #11221 by janko.mihelic@adnet.hr
Here we have 2 applications running: one is a console application and the other is a Windows service app; both are just subscribed to events, with no processing. The red one is the console app and the green one is the service.

-At 12:21:25 we started sending events
-At 12:25:35 we stopped sending events
-At 12:30:05 we started sending events again
-At 12:33:35 we stopped sending events

Images IMG1, IMG2 and IMG3 show the memory in this situation.

Images IMG4, IMG5 and IMG6 show the situation where the console application was unchanged, but in the Windows service app we implemented a timer which is called every 5 minutes and runs this code:
if (client != null)
{
    client.UnsubscribeAllMonitoredItems();
    client.Dispose();
    client = null;
}
Thread.Sleep(1000);
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
subscribe(); // Create new client and SubscribeEvent

21 Nov 2022 10:36 #11218 by support
Hello.

Thanks for the additional information. No, I have not tried to reproduce the problem. And I am not going to try until I am confident that what you are observing is actually incorrect behavior.

From the way you describe things, it looks like there is some fundamental misconception about how .NET memory management works. The purpose of GC is *not* to "free all unused memory so that the consumption goes back to where it was", and it *cannot* be forced to do so.

When you write "...in both situations memory consumption didn't fall", my response is: memory consumption is not expected to fall. Incorrect behavior would only be if the memory consumption kept growing and growing and GC never freed it. But you have not reported that, at least as I understand your posts.
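One way to separate managed-heap growth from the runtime simply holding on to memory is to measure the heap directly with GC.GetTotalMemory rather than watching private bytes. A minimal sketch using only standard .NET APIs (not taken from this thread):

```csharp
using System;

class MemoryCheck
{
    static void Main()
    {
        long before = GC.GetTotalMemory(forceFullCollection: false);

        // Force a full collection, let finalizers run, then collect again
        // so objects resurrected for finalization are also reclaimed.
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
        GC.WaitForPendingFinalizers();
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);

        // Managed heap size can fall even when private bytes (as shown by
        // Task Manager or PerfMon) stay flat, because the runtime may keep
        // freed segments reserved instead of returning them to the OS.
        long after = GC.GetTotalMemory(forceFullCollection: true);
        Console.WriteLine($"Managed heap: {before} -> {after} bytes");
    }
}
```

If this number stabilizes while private bytes stay high, the behavior is normal; if it keeps climbing without bound, managed objects really are being retained.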

Can you post charts from PerfMon showing the behavior?

Regards

21 Nov 2022 09:21 #11217 by janko.mihelic@adnet.hr
Our client tested for 3 days and we tested for 4 days; in both situations the memory consumption didn't fall. We agree that the garbage collector could decide to collect after one week, but we don't think that is the case here.

We ran 3 tests in parallel: one with our main application, which is a Windows service; one with the console application with the [MTAThread] attribute, whose code we provided in the first GitHub link; and one with the test Windows service which we provided as the second GitHub link, but with the code changed a bit to force garbage collection on all generations. In our main app, GC was called when the user stopped the connection to the server; in the console app, it was called when someone entered the character 'c'; and in our test service application, it was called every 5 minutes by a System.Threading.Timer. Even after the GC was called as in the example below, memory stayed the same:
GC.Collect( GC.MaxGeneration, GCCollectionMode.Forced );

This was true even if we weren't receiving any events for more than 10 minutes. But we found that calling GC after disposing of the EasyUAClient, like this:
if (client != null)
{
    client.UnsubscribeAllMonitoredItems();
    client.Dispose();
    client = null;
}
GC.Collect( GC.MaxGeneration, GCCollectionMode.Forced );

makes the memory consumption fall back to normal. Calling just client.UnsubscribeAllMonitoredItems() on its own does not help.
We also tested with one event every 25 seconds, which should be a completely normal number of events, and after 3 days the memory consumption had climbed by 18.6 MB.
We also noticed that when we send one event every 10 ms, memory consumption rises much more significantly in our main app, which has to process many other things before processing the next event. The other 2 apps, which just receive events with no processing, had approximately the same memory consumption rise as each other, but lower than the main app.

For now our only solution is to stop communication after some period of time, dispose of the EasyUAClient, call GC, and then start the connection again, which is not really the best solution.
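For reference, the periodic-restart workaround described above can be wired up with a System.Threading.Timer roughly as follows. This is only a sketch based on the snippets posted earlier in this thread; the subscribe() method stands in for the poster's own code that creates the client and calls SubscribeEvent:

```csharp
using System;
using System.Threading;
using OpcLabs.EasyOpc.UA;

// Sketch of the "dispose and resubscribe every 5 minutes" workaround.
class PeriodicRestart
{
    private EasyUAClient client;
    private Timer restartTimer;

    public void Start()
    {
        subscribe();
        // Fire every 5 minutes, starting 5 minutes from now.
        restartTimer = new Timer(_ => Restart(), null,
            TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
    }

    private void Restart()
    {
        if (client != null)
        {
            client.UnsubscribeAllMonitoredItems();
            client.Dispose();
            client = null;
        }
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
        subscribe(); // Create new client and SubscribeEvent
    }

    private void subscribe()
    {
        client = new EasyUAClient();
        // ... the poster's SubscribeEvent calls go here ...
    }
}
```

Note that a restart like this drops any queued notifications, so events arriving during the restart window can be lost.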

All tests from the beginning were conducted with PerfMon alone or PerfMon plus additional software.

Have you tried reproducing the problem? Is it possible that, out of all your customers, only our company runs the communication for months or even years without stopping, and no one has noticed this? We are hoping that we are the problem and that it can be fixed on our side, but we just don't see what we are doing wrong.

16 Nov 2022 15:03 #11205 by support
Hello.

What was the longest time period over which you have observed the memory consumption of the program?
We have seen cases in which the memory consumption grew for almost a week before the .NET garbage collector decided it was a good time to free it.

I am afraid that you might be drawing conclusions from a time period that is too short.
I am not arguing that there cannot be a problem. But if you have only observed the memory for 3 minutes or so, that is definitely too short for a reliable conclusion.

Ideally, use PerfMon to visualize the consumption.
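On Windows, the same Private Bytes counter that PerfMon charts can also be logged from the command line with typeperf, which makes it easy to capture a multi-day trace to a CSV file (the process name here is a placeholder for your application's):

```
:: Sample "Private Bytes" of MyOpcApp every 60 s for 5 days
:: (7200 samples) and write the trace to a CSV for later charting.
typeperf "\Process(MyOpcApp)\Private Bytes" -si 60 -sc 7200 -o memtrace.csv
```

The resulting CSV can be opened in Excel or re-imported into PerfMon to produce the long-range chart requested above.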

I have not noticed anything wrong in a brief review of your code; it appears that you are using it the right way.

Regards

16 Nov 2022 09:56 #11204 by janko.mihelic@adnet.hr
We tried adding the [MTAThread] attribute, but nothing changed; the process memory keeps growing. The error was originally found in a Windows service application; this console application was used just to reproduce the behavior. We are using .NET Framework 4.7.2 in both the console application and our main service application.

We managed to capture memory accumulation even when sending one event every 500 ms. In an environment with fewer events, memory accumulates much more slowly, but we use OPC UA communication in remote stations which run for a few years without user interference, and we are afraid that in a real-world situation the problem will become visible after a few months.

We created an environment in which we believe you will be able to reproduce the error, with a Microsoft OPC server that can be started with Docker and a test Windows service application with an included installer. Memory accumulation can be seen within a few minutes.

Docker command: docker run --rm -it -p 50000:50000 -p 8080:8080 --name opcplc mcr.microsoft.com/iotedge/opc-plc:latest --pn=50000 --autoaccept --sph --sn=5 --sr=10 --st=uint --fn=5 --fr=1 --ft=uint --ctb --scn --lid --lsn --ref --gn=5 --alm --ses --er=100

github repo: github.com/lermix/OpcBugServiceApp

The test service will create one txt file at C:\OpcTest\EventsStarted.txt with one line, "Opc test, Event collecting started", when the first event is received, so we know event collecting has started. We tried to minimize the app as much as possible. It can also be started from debug.

In the meantime, is there anything more we can try, and are we using the library the right way?

Best regards

15 Nov 2022 09:36 #11203 by support
Hello.

Please try applying the [MTAThread] attribute to your Main method (learn.microsoft.com/en-us/dotnet/api/system.mtathreadattribute?view=net-7.0 ),
OR try the same code inside a host other than a console app.

We have seen cases where the garbage collector does not run at all in some console apps; it is somehow tied to the Windows message pump: in the STA model, the STA thread must pump messages, but that does not happen in a console app unless explicitly invoked.
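Applying the suggested attribute is a one-line change on the console app's entry point; a sketch (the body of Main stands in for your existing client code):

```csharp
using System;

static class Program
{
    // MTAThread makes the main thread join the multithreaded apartment,
    // so callbacks are not routed through an STA message pump that a
    // console app never runs.
    [MTAThread]
    static void Main()
    {
        // ... create the EasyUAClient, subscribe to events ...
        Console.ReadLine(); // keep the process alive while events arrive
    }
}
```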

Which .NET framework/runtime version are you building for/running under?

Regards

15 Nov 2022 09:08 #11202 by janko.mihelic@adnet.hr
First of all, thank you for your quick and detailed response.

Regarding the points you made here is our experience:

Removing Console.WriteLine changed nothing in our test console application. Stopping the OPC UA server we are polling changes nothing; memory just stays where it was. We also tried unsubscribing all monitored items, disposing of the client, and forcing the garbage collector, but with no luck in reducing process memory.

During the testing phase with our client, within our non-reduced larger system, we compared UaExpert events with the events in our database, and there was no lag. Even after we stopped generating events, no new events showed up, and our database and UaExpert were in sync.

Setting the queue size to 100 changes nothing. We even tried setting it to 10, which caused errors about queue overflow, but notifications kept coming (at least some of them), and even in this situation the memory kept growing. Turning off queuing also changes nothing.

We tried to diagnose the problem with:
  • ANTS Memory Profiler 7, which could see the rise in total heap memory, but private bytes stayed at the same level
  • Windows Performance Monitor, where the rise in private bytes is easily seen when examining a period of 3 or more minutes
  • Task Manager, which can also see the rise in process memory
  • Software Verify Memory Validator for C# and C++, which could also see the rise in private memory, but all we could get out of it was that memory is being allocated as bytes, which is of no use
  • Visual Studio Performance Profiler, which is the only one that gave some possibly useful information. Comparing snapshots at 87 seconds and 245 seconds after start, the counts of these 3 classes rise:
    NotifyCollectionChangedEventHandler – from 5840 to 8460 objects
    OpcLabs.BaseLib.OperationModel.ValueResult – from 5840 to 8460 objects
    OpcLabs.EasyOpc.UA.AddressSpace.UANodeId – from 1168 to 1692 objects

    The longer the program is up, the more objects of those 3 classes are created

We ran our program on:
  • our client's PC – unknown specs
  • a company laptop – Lenovo L14 with an i7-10510U CPU @ 2.3 GHz and 32 GB RAM
  • a virtual machine with unlimited access to 41.87 GHz of processor capacity and 4 GB of assigned RAM

All 3 showed the same behavior.

Is there anything more we can try?

Best regards

14 Nov 2022 17:22 #11201 by support
Hello.

The reason could simply be that the notifications from the server are coming in faster than your code can process them. By default, QuickOPC queues the calls to your notification handlers so that they cannot easily block the internal processing - such as blocking the responses the client needs to send to the server for each notification.

If your code cannot cope with the rate of incoming notifications, the queue grows bigger and bigger, which looks like a memory leak.

You can do various tests and experiments to see if this is the case, for example:

1) Make your notification handlers faster (e.g. remove the Console.WriteLine calls), and see if the memory growth becomes slower.
2) When the memory has accumulated, stop the server. If the memory consumption goes back down, it means your code has now had the chance to consume the notifications that were in the queue.
3) As the notifications come in, compare their timestamps with the "true" wall-clock time. If they start lagging behind and the difference keeps increasing, that is proof of the problem.
4) The maximum queue size is 100000 by default. Reduce it to a much lower value (e.g. 100) and see what happens. The property name is EasyUAClient.CallbackQueueCapacity.
5) Turn off the queuing: set the EasyUAClient.QueueCallback property to 'false'.
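Experiments 4 and 5 amount to setting two properties on the client before subscribing. A sketch, using the property names given above (the subscription calls themselves are omitted):

```csharp
using OpcLabs.EasyOpc.UA;

class QueueExperiments
{
    static void Main()
    {
        var client = new EasyUAClient();

        // Experiment 4: shrink the callback queue from its default of
        // 100000, so that a backlog shows up as queue-overflow errors
        // instead of silently growing memory.
        client.CallbackQueueCapacity = 100;

        // Experiment 5: alternatively, turn queuing off entirely, so
        // notification handlers are invoked without an intermediate queue.
        // client.QueueCallback = false;

        // ... subscribe to events and observe the memory behavior ...
    }
}
```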

Also note that .NET memory handling is not deterministic, and there is no "contract" that would compel the .NET runtime to free previously allocated and now unused memory (although this normally happens), unless it is needed for something else.

Best regards
