Faster Queries (OPC read performance)
28 Jun 2012 06:58 #906
by support
Faster Queries (OPC read performance) was created by support
From: K.
Sent: Tuesday, June 26, 2012 1:45 PM
To: Zbynek Zahradnik
Subject: Re: Faster Queries
Hello,
Thanks a lot for your help. I may have found our bottleneck: we have a bad storage system for all the OPC data, and it seems to slow everything down.
Yours sincerely,
K.
2012/6/24 Zbynek Zahradnik
Hello,
It should be said that OPC is optimized for subscriptions, and not so much for one-time operations. On average hardware, when subscriptions are used, our component can usually transfer at least 4,000 items *each second* (a small subscription sketch follows the measurement notes below). Single-shot reading is slower, but it should not be as slow as what you are observing. We ship an example with the product that demonstrates this with our simulation server; it is one of the examples in the DocExamples project of the C# examples solution. Here is the code, for illustration:
// $Header: $
// Copyright (c) CODE Consulting and Development, s.r.o., Plzen. All rights reserved.
// ReSharper disable CheckNamespace
#region Example
// This example measures the time needed to read 2000 items all at once, and in 100 groups of 20 items.
using System;
using System.Diagnostics;
using System.Threading;
using OpcLabs.EasyOpc.DataAccess;

namespace DocExamples
{
    namespace _EasyDAClient
    {
        partial class ReadMultipleItems
        {
            const int NumberOfGroups = 100;
            const int ItemsInGroup = 20;
            private const int TotalItems = NumberOfGroups*ItemsInGroup;

            // Main method
            public static void TimeMeasurements()
            {
                // Make the measurements 10 times; note that the first time the times might be longer.
                for (int i = 1; i <= 10; i++)
                {
                    // Pause - we do not want the component to use the values it has in memory
                    Thread.Sleep(2*1000);

                    // Read all items at once, and measure the time
                    var stopwatch1 = new Stopwatch();
                    stopwatch1.Start();
                    ReadAllAtOnce();
                    stopwatch1.Stop();
                    Console.WriteLine("ReadAllAtOnce has taken (milliseconds): {0}", stopwatch1.ElapsedMilliseconds);

                    // Pause - we do not want the component to use the values it has in memory
                    Thread.Sleep(2 * 1000);

                    // Read items in groups, and measure the time
                    var stopwatch2 = new Stopwatch();
                    stopwatch2.Start();
                    ReadInGroups();
                    stopwatch2.Stop();
                    Console.WriteLine("ReadInGroups has taken (milliseconds): {0}", stopwatch2.ElapsedMilliseconds);
                }

                // Example output (measured with Version 5.20, Release configuration):
                /*
                ReadAllAtOnce has taken (milliseconds): 3432
                ReadInGroups has taken (milliseconds): 1563
                ReadAllAtOnce has taken (milliseconds): 539
                ReadInGroups has taken (milliseconds): 1625
                ReadAllAtOnce has taken (milliseconds): 579
                ReadInGroups has taken (milliseconds): 1594
                ReadAllAtOnce has taken (milliseconds): 638
                ReadInGroups has taken (milliseconds): 1610
                ...
                */

                // Note that Version 5.12 and earlier incurred a much larger penalty on repeated reads.
                // Example output (measured with Version 5.12, Release configuration):
                /*
                ReadAllAtOnce has taken (milliseconds): 4241
                ReadInGroups has taken (milliseconds): 8094
                ReadAllAtOnce has taken (milliseconds): 269
                ReadInGroups has taken (milliseconds): 7813
                ReadAllAtOnce has taken (milliseconds): 285
                ReadInGroups has taken (milliseconds): 7813
                ReadAllAtOnce has taken (milliseconds): 283
                ReadInGroups has taken (milliseconds): 7844
                ...
                */
            }

            // Read all items at once
            private static void ReadAllAtOnce()
            {
                var easyDAClient = new EasyDAClient();

                // Create an array of item descriptors for all items
                var itemDescriptors = new DAItemDescriptor[TotalItems];
                int index = 0;
                for (int iLoop = 0; iLoop < NumberOfGroups; iLoop++)
                    for (int iItem = 0; iItem < ItemsInGroup; iItem++)
                        itemDescriptors[index++] = new DAItemDescriptor(
                            String.Format("Simulation.Incrementing.Copy_{0}.Phase_{1}", iLoop + 1, iItem + 1));

                // Perform the OPC read
                DAVtqResult[] vtqResults = easyDAClient.ReadMultipleItems("OPCLabs.KitServer.2", itemDescriptors);

                // Count successful results
                int successCount = 0;
                for (int iItem = 0; iItem < TotalItems; iItem++)
                    if (vtqResults[iItem].Succeeded) successCount++;

                if (successCount != TotalItems)
                    Console.WriteLine("Warning: There were some failures, success count is {0}", successCount);
            }

            // Read items in groups
            private static void ReadInGroups()
            {
                var easyDAClient = new EasyDAClient();

                int successCount = 0;
                for (int iLoop = 0; iLoop < NumberOfGroups; iLoop++)
                {
                    // Create an array of item descriptors for items in one group
                    var itemDescriptors = new DAItemDescriptor[ItemsInGroup];
                    for (int iItem = 0; iItem < ItemsInGroup; iItem++)
                        itemDescriptors[iItem] = new DAItemDescriptor(
                            String.Format("Simulation.Incrementing.Copy_{0}.Phase_{1}", iLoop + 1, iItem + 1));

                    // Perform the OPC read
                    DAVtqResult[] vtqResults = easyDAClient.ReadMultipleItems("OPCLabs.KitServer.2", itemDescriptors);

                    // Count successful results (adding to the running total)
                    for (int iItem = 0; iItem < ItemsInGroup; iItem++)
                        if (vtqResults[iItem].Succeeded) successCount++;
                }

                if (successCount != TotalItems)
                    Console.WriteLine("Warning: There were some failures, success count is {0}", successCount);
            }
        }
    }
}
#endregion
// ReSharper restore CheckNamespace
Look at the comments in the code that give example measurements. The first read takes much longer, because client- and server-side startup/initialization tasks have to be done, but even so it does not need more than about 5 seconds for 2000 items (there are also measurements for Version 5.20, which is not yet released and is even faster). Subsequent reads are much faster.
The measurements show that reading the same set of items in smaller chunks is slower.
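Also, coming back to the note above that OPC is optimized for subscriptions: if you need the values continuously, subscribing will normally be much faster than repeated reads. Below is a minimal sketch of what that could look like; it is written from memory for illustration only, so the DAItemGroupArguments constructor and the event-argument members used here should be checked against the Reference documentation for your version.

    // Minimal subscription sketch (illustrative only; exact overloads may differ between versions).
    using System;
    using System.Collections.Generic;
    using OpcLabs.EasyOpc.DataAccess;

    class SubscriptionSketch
    {
        static void Main()
        {
            var client = new EasyDAClient();

            // Incoming value changes arrive through this event.
            client.ItemChanged += (sender, e) =>
            {
                if (e.Exception == null)
                    Console.WriteLine("{0}: {1}", e.Arguments.ItemDescriptor.ItemId, e.Vtq);
            };

            // Subscribe to all 2000 simulation items with a 1-second requested update rate.
            var argumentsList = new List<DAItemGroupArguments>();
            for (int iLoop = 0; iLoop < 100; iLoop++)
                for (int iItem = 0; iItem < 20; iItem++)
                    argumentsList.Add(new DAItemGroupArguments(
                        "", "OPCLabs.KitServer.2",
                        String.Format("Simulation.Incrementing.Copy_{0}.Phase_{1}", iLoop + 1, iItem + 1),
                        1000, null));
            client.SubscribeMultipleItems(argumentsList.ToArray());

            Console.ReadLine();             // let the subscription deliver values
            client.UnsubscribeAllItems();
        }
    }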
Performance usually depends heavily on whether the server can provide the values from its cache, or has to read them from the device (these are two different methods that OPC provides). Reading from the device is much slower with many servers. On the client side, you can control how the read is performed by setting the following parameters:
- EasyDAClient.ClientMode.DataSource, www.opclabs.com/onlinedocs/QuickOpcClassic/5.12/Reference/Qu...986-f1a9-5e11-4d64ab497f95.htm - default is ByValueAge
- EasyDAClient.ClientMode.DesiredValueAge, www.opclabs.com/onlinedocs/QuickOpcClassic/5.12/Reference/Qu...48e-717f-2373-0d87abf215b0.htm - default is 1000 milliseconds.
Try setting the DataSource to Cache to see if it helps.
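For example, a minimal sketch of doing this in code (the DADataSource enum name is written here from memory; the Reference pages linked above have the exact members and types):

    // Minimal sketch: forcing cache reads.
    var client = new EasyDAClient();

    // Ask for cached values only (the default DataSource is ByValueAge).
    client.ClientMode.DataSource = DADataSource.Cache;

    // Or keep ByValueAge, but accept cached values up to 10 seconds old
    // instead of the default 1000 milliseconds:
    // client.ClientMode.DesiredValueAge = 10 * 1000;

    // itemDescriptors built the same way as in the example above.
    DAVtqResult[] vtqResults = client.ReadMultipleItems("OPCLabs.KitServer.2", itemDescriptors);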
Best regards,
Zbynek Zahradnik, OPC Labs
From: K.
Sent: Thursday, June 21, 2012 5:09 PM
To: Zbynek Zahradnik
Subject: Re: Faster Queries
Dear Sir,
Thank you for your response.
Currently the connection is local, but in the future there will be a remote connection, too.
For debugging purposes the local machine is a normal computer running Siemens WinCC Flexible in Simulation Mode, so it is an averagely powerful computer, but the software will later be running on a PLC (SPS) or something similarly embedded, therefore I need a perfect result on the debug computer.
What would be an average or good number of items per minute/second (local connection, normal modern computer)?
Yours sincerely,
K.
2012/6/21 Zbynek Zahradnik
Dear Sir,
Thank you for your interest in our products.
For any large number of items, the proper method is 2), i.e. with ReadMultipleItems.
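To illustrate the difference: with 1), there is one client-to-server round trip per item, while with 2), the whole array is handed over in a single call and read as a batch, so the per-call overhead is paid only once. A rough sketch (illustrative only; the two item IDs stand in for your 3000-6000 items, and ReadMultipleItems returns per-item results that you should check for success):

    using System;
    using System.Linq;
    using OpcLabs.EasyOpc.DataAccess;

    class OneByOneVersusBatch
    {
        static void Main()
        {
            var easyDAClient = new EasyDAClient();
            // In reality, this would be your 3000-6000 item IDs.
            string[] itemIds =
            {
                "Simulation.Incrementing.Copy_1.Phase_1",
                "Simulation.Incrementing.Copy_1.Phase_2"
            };

            // 1) One OPC read (and round trip) per item - slow with thousands of items.
            foreach (string itemId in itemIds)
            {
                var vtq = easyDAClient.ReadItem("", "OPCLabs.KitServer.2", itemId);
                Console.WriteLine("{0}: {1}", itemId, vtq);
            }

            // 2) A single batched call for the whole array - the per-call overhead is paid once.
            DAItemDescriptor[] itemDescriptors =
                itemIds.Select(itemId => new DAItemDescriptor(itemId)).ToArray();
            DAVtqResult[] vtqResults =
                easyDAClient.ReadMultipleItems("OPCLabs.KitServer.2", itemDescriptors);
            for (int i = 0; i < vtqResults.Length; i++)
                if (vtqResults[i].Succeeded)
                    Console.WriteLine("{0}: {1}", itemIds[i], vtqResults[i].Vtq);
        }
    }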
The final speed is a combination of the performance on the server side and on the client side (plus any communication network). There may be some tuning to be done.
I will first make a test using a simulation server here, and then let you know.
Which OPC server are you connecting to, and is it local, or remote?
Thank you, and best regards,
Zbynek Zahradnik, OPC Labs
Original Message
From: .....
Sent: Wednesday, June 20, 2012 12:17 PM
To: Zbynek Zahradnik
Subject: Faster Queries
Dear Sir / Madam,
I am trying to integrate your QuickOPC.NET DLL into one of our C# programs.
With about 3000 to 6000 entries we have a huge number of values. I want to read them as fast as possible, but it currently takes several minutes (up to half an hour).
What is the quickest way to read all those values? I cannot find a solution in your help files.
At the moment I have 2 ways implemented:
1) Picking them one by one (OPCClient.ReadItem(choosenMachine, server, item)).
2) Making an array (OPCClient.ReadMultipleItems(server, itemArray)).
Both have the same (slow) speed.
Can you help me, so that we can make good use of your software?
Yours sincerely,
K.