Originally Posted by MDO
Bert
Here is what I mean by memory leak

http://en.wikipedia.org/wiki/Memory_leak

Mukesh


If you are referring to SQL Server, then I would have to differ.

In a lot of ways, it depends on how you look at it, but a program that allocates memory generally does so for the program's own benefit. For example, there is a reason SQL Server Standard is better than SQL Express in many ways, and one huge difference is that Standard can use much more memory than Express, which is limited to a paltry 1GB of RAM. If you have only 3.2GB of RAM, then losing a GB to SQL, on top of all the other processes that need RAM, will slow the server noticeably. But if you have enough RAM, SQL will run efficiently and your server will not notice it.
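
If you are not sure which edition (and therefore which memory ceiling) you are dealing with, a quick check along these lines should work on SQL Server 2005 and later:

SELECT SERVERPROPERTY('Edition') AS edition,          -- e.g. 'Express Edition' vs. 'Standard Edition'
       SERVERPROPERTY('ProductVersion') AS version;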

Some people reboot the server to free up all the RAM, but then you defeat the purpose of SQL taking the RAM in the first place, because it uses that RAM to cache data pages. After a reboot, all of those cached pages are gone. You would do better to set a memory limit on SQL (see the example below) rather than keep rebooting your server.
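
Just as a rough sketch of what I mean, you can cap the memory with sp_configure; the 2048MB figure here is only an example value, so pick a number that leaves enough for the OS and everything else on the box:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- value is in MB; this caps SQL Server at roughly 2GB
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;

It should take effect without a restart, and SQL will work its way down to the new cap over time.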

This is just SQL, but I do have an issue (please don't take offense) with its being referred to as a memory leak. The memory is going exactly where Microsoft designed it to go.

A good example would be SQL Server 2005 Standard, whose memory limit is simply whatever the OS supports. So you could have 48GB of RAM, in which case SQL would be extremely efficient since it would rarely have to read the user database off the hard drive and could instead serve pages from the buffer cache. But even with 48GB of RAM, your server would eventually crash or come to a standstill if no limit were placed on SQL Server. SQL grabs memory at a rate that depends on how many users are connected and how much data is being pulled.
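
If you are curious how much RAM SQL has actually grabbed versus what it is aiming for, a query like this should work on 2005 and later (the counters are reported in KB, so I convert to MB):

SELECT counter_name, cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');

If Total keeps climbing toward Target, and Target is basically all of your physical RAM, that is your sign to set max server memory.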


Bert
Pediatrics
Brewer, Maine