How can I prevent BufferManager / PooledBufferManager in my WCF client app from wasting memory?

While analyzing a WCF client application (which I did not write and still do not know much about) that talks to a number of services via SOAP and, after running for a couple of days, throws an OutOfMemoryException, I found out that .NET's PooledBufferManager never releases unused buffers, even when the app runs out of memory, leading to OOMEs.

This is, of course, in accordance with the documentation: http://msdn.microsoft.com/en-us/library/ms405814.aspx

The pool and its buffers are [...] destroyed when the buffer pool is reclaimed by garbage collection.

Please feel free to answer only a single one of the questions below, as I have a bunch of them, some of a more general nature, and some specific to our app's use of the BufferManager.

First a couple of general questions about the (default Pooled)BufferManager:

1) In an environment where we have a GC, why would we need a BufferManager that holds on to unused memory, even when that leads to OOMEs? I know there is BufferManager.Clear(), which you can use to manually get rid of all buffers - if you have access to the BufferManager, that is. See further down for why I don't seem to have access.

2) Despite MS's claim that "This process is much faster than creating and destroying a buffer every time you need to use one.", shouldn't they leave that to the GC (and its LOH, for example) and optimize the GC instead?

3) When doing a BufferManager.Take(33 * 1024 * 1024), I get a buffer of 64M, because the PooledBufferManager rounds up to the next power of two and caches that buffer for later reuse, in case, say, 34M, 50M, or 64M are needed again (in my case they aren't, so it's a pure waste of memory). So was it wise to create a potentially very wasteful BufferManager like this, which is used (by default, I assume) by HttpsChannelFactory? I fail to see how the performance of memory allocation should matter, especially when we are talking about WCF and network services that the application talks to every 10 seconds TOPS, normally at intervals of many seconds or even minutes.
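The rounding I am seeing can be sketched like this (a hypothetical helper, not WCF's actual code, just to illustrate why a Take(33M) comes back as 64M):

```csharp
using System;

// Illustrative only: pooled buffer managers bucket requests by size,
// here by rounding the requested length up to the next power of two.
static class BufferSizing
{
    public static int RoundUpToPowerOfTwo(int requested)
    {
        int size = 1;
        while (size < requested)
            size <<= 1;   // double until the bucket covers the request
        return size;
    }
}
```

So a Take(33 * 1024 * 1024) lands in the 64M bucket, and that 64M array is what the pool then keeps around.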

Now some more specific questions related to our application's use of BufferManagers. The app connects to a couple of different WCF services. For each of them we maintain a pool of HTTP connections, as requests may occur concurrently.

Inspecting the single biggest object in one heap dump, I found a 64M byte array that had been used only once in our app, at initialization time, and is not needed afterwards: the response from the service is only that big at initialization, which, btw., is typical for many applications I have used, even though it could be subject to optimization (caching to disk, etc.). A GC root analysis in WinDbg yields the following (I sanitized the names of our proprietary classes to 'MyServiceX', etc.):

0:000:x86> !gcroot -nostacks 193e1000
DOMAIN(00B8CCD0):HANDLE(Pinned):4d1330:Root:0e5b9c50(System.Object[])->
  035064f0(MyServiceManager)->
  0382191c(MyHttpConnectionPool`1[[MyServiceX, MyLib]])->
  03821988(System.Collections.Generic.Queue`1[[MyServiceX, MyLib]])->
  038219a8(System.Object[])->
  039c05b4(System.Runtime.Remoting.Proxies.__TransparentProxy)->
  039c0578(System.ServiceModel.Channels.ServiceChannelProxy)->
  039c0494(System.ServiceModel.Channels.ServiceChannel)->
  039bee30(System.ServiceModel.Channels.ServiceChannelFactory+ServiceChannelFactoryOverRequest)->
  039beea4(System.ServiceModel.Channels.HttpsChannelFactory)->
  039bf2c0(System.ServiceModel.Channels.BufferManager+PooledBufferManager)->
  039c02f4(System.Object[])->
  039bff24(System.ServiceModel.Channels.BufferManager+PooledBufferManager+BufferPool)->
  039bff44(System.ServiceModel.SynchronizedPool`1[[System.Byte[], mscorlib]])->
  039bffa0(System.ServiceModel.SynchronizedPool`1+GlobalPool[[System.Byte[], mscorlib]])->
  039bffb0(System.Collections.Generic.Stack`1[[System.Byte[], mscorlib]])->
  12bda2bc(System.Byte[][])->
  193e1000(System.Byte[])

Looking at the GC roots of other byte arrays managed by a BufferManager reveals that the other services (not 'MyServiceX') have their own BufferPool instances, so each one wastes its own memory; they are not even sharing the waste.

4) Are we doing something wrong here? I'm not a WCF expert by any means, so could we make the various HttpsChannelFactory instances all use the same BufferManager?

5) Or maybe even better, could we just tell all HttpsChannelFactory instances NOT to use BufferManagers at all and ask the GC to do its god-damn job, which is 'managing memory'?

6) If questions 4) and 5) can't be answered: could I get access to the BufferManager of all HttpsChannelFactory instances and manually call .Clear() on them? This is far from an optimal solution, but it would already help; in my case it would free not only the aforementioned 64M, but 64M + 32M + 16M + 8M + 4M + 2M in just one service instance! That alone would make my app last much longer without running into memory problems. (And no, we don't have a memory leak issue other than the BufferManager, although we do consume a lot of memory and accumulate a lot of data over the course of many days - but that's not the issue here.)

------------- Answers -------------

4) Are we doing something wrong here? I'm not a WCF expert by any means, so could we make the various HttpsChannelFactory instances all use the same BufferManager?

5) Or maybe even better, could we just tell all HttpsChannelFactory instances NOT to use BufferManagers at all and ask the GC to do its god-damn job, which is 'managing memory'?

I guess one way of addressing those two questions could be changing the TransferMode from 'Buffered' to 'Streamed'. I will have to investigate, as streamed mode has a couple of limitations and I might not be able to use it.

Update: It actually works great! In buffered mode, memory consumption peaked at 630M during startup of the app and was at 470M when fully loaded. After switching to streamed mode, there is no temporary peak anymore, and when fully loaded, consumption is at only 270M!

Btw., this was a one-line change in the client app code for me. I just had to add this line:

httpsTransportBindingElement.TransferMode = TransferMode.StreamedResponse;
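For context, the transport binding element in our client is built in code; the surrounding setup looked roughly like this (a sketch with illustrative names and sizes - only the TransferMode line is the actual change):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

// Sketch of a custom HTTPS binding; only the TransferMode line is the fix.
var httpsTransportBindingElement = new HttpsTransportBindingElement
{
    MaxReceivedMessageSize = 64 * 1024 * 1024,   // illustrative limit
    // Stream large responses instead of buffering them in one big array:
    TransferMode = TransferMode.StreamedResponse
};

var binding = new CustomBinding(
    new TextMessageEncodingBindingElement(),
    httpsTransportBindingElement);
```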

I believe I have an answer to your question #5:

5) Or maybe even better, could we just tell all HttpsChannelFactory instances NOT to use BufferManagers at all and ask the GC to do its god-damn job, which is 'managing memory'?

There is a MaxBufferPoolSize binding parameter, which controls the maximum size of the buffers retained by the BufferManager. Setting it to 0 disables pooling: a GCBufferManager is created instead of the pooled one, and it lets the GC reclaim allocated buffers as soon as the message is processed, just as you asked for in your question.
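On a standard binding this is a one-liner; a sketch assuming a plain BasicHttpBinding over transport security (the property exists on the other standard bindings and on TransportBindingElement as well):

```csharp
using System.ServiceModel;

// MaxBufferPoolSize = 0 makes WCF fall back to the non-pooling
// GCBufferManager, so buffers are ordinary arrays the GC can reclaim.
var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport)
{
    MaxBufferPoolSize = 0
};
```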

This article discusses WCF memory buffer management in greater detail.

As John says, it would be easier to answer only one question instead of writing an essay. But here is what I think:

1) In a environment where we have GC, why would we need a BufferManager

You seem to misunderstand the concepts of GC and buffers. The GC works with reference-typed objects and frees memory when it detects that an object is a vertex (point or node) of the object graph that has no valid chain of edges (connections) leading to it from any root. Buffers are just temporary storage for temporary arrays of raw data. For example, if you need to send a WCF application-level message and its size is larger than the transport-level message size, WCF will send it in several transport messages. On the receiver side, WCF will wait until the full application-level message has arrived and only then pass the message on for processing (unless it's a streaming binding). The temporary transport messages are buffered - stored somewhere in memory on the receiver's end. As creating new buffers for every new message in this example can get very expensive, .NET provides a buffer management class that is responsible for pooling and sharing buffers.
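To make the pooling idea concrete, here is a minimal, hypothetical pool (nothing like WCF's actual PooledBufferManager internals, just the take/return pattern):

```csharp
using System;
using System.Collections.Generic;

// Minimal illustrative buffer pool (not WCF's PooledBufferManager):
// Take reuses a cached array when one is big enough, otherwise allocates.
class TinyBufferPool
{
    private readonly Stack<byte[]> _free = new Stack<byte[]>();

    public byte[] Take(int size)
    {
        if (_free.Count > 0 && _free.Peek().Length >= size)
            return _free.Pop();      // reuse: no new allocation
        return new byte[size];       // pool miss: allocate
    }

    public void Return(byte[] buffer) => _free.Push(buffer);

    public void Clear() => _free.Clear();  // drop all cached buffers
}
```

Under steady load, Return followed by Take means no per-message allocation; the flip side, as the question shows, is that cached buffers stay alive until someone calls Clear().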

2) Despite of MS' claim that "This process is much faster than creating and destroying a buffer every time you need to use one.", shouldn't they leave that up to the GC (and its LOH for example) and optimize the GC instead?

No, they should not. Buffers and the GC have nothing in common (unless you destroy the buffer every time which, in the context of the example above, would be a design flaw). They have different responsibilities and address different problems.

3) The app connects to a couple of different WCF services. For each of them we maintain a connection pool for the http connections

The HTTP binding is not designed to handle large payloads like 64MB; consider changing to a more appropriate binding. If you use messages of that size, WCF will not pass a message on until the whole 64MB has been received. Thus, if you have 10 concurrent connections, your buffers alone will take 640MB.

For your other questions, please post another question on SO with some code and your WCF configuration; it will be easier to find where the problem is. Maybe the buffers aren't cleared because they are used inappropriately. Consider the amount of testing that has been done on the GC and WCF versus the amount of testing that has been performed on a legacy project, and follow Occam's razor.

Category:c# Views:3 Time:2011-08-31
