
SQL 2000 Ent memory issue

SQL 2000 Enterprise frequently gets an insufficient memory error when backing up the transaction log or doing a bulk import, even now that memory has been increased to 8 GB from the original 2 GB.

After restarting the SQL Server service I am able to run the bulk import again, and I noticed the buffer increased by around 260 MB. Of course, the tlog can be backed up too after the SQL restart.

I don't have any clue at this point; can someone give some advice or a hint? The OS is Windows Server 2003 Enterprise 32-bit.

 

 

23 Answers Found

 

Answer 1

How much memory does the Windows server have? In order to use more than 4 GB of memory you need to enable AWE and set the max server memory option.
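A minimal sketch of those two settings from Query Analyzer (the 6144 MB value is only illustrative):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'awe enabled', 1
EXEC sp_configure 'max server memory', 6144   -- value in MB; illustrative only
RECONFIGURE
GO
-- On SQL Server 2000 the AWE change only takes effect after the service is restarted.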
 

Answer 2

One piece of advice: when you do a bulk insert using BCP or the BULK INSERT command, it uses the transaction log of that user database and the tempdb database.

So the suggestion is to use the BULK_LOGGED recovery model for it and to load in batches of 1000 rows, so the first 1000 rows commit and the tlog space can be cleared.
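For example, something along these lines would commit every 1000 rows (the table and file names are placeholders):

-- Sketch only: load in 1000-row batches so each batch commits and the log can be reused
BULK INSERT dbo.ImportTarget
FROM 'C:\import\data.txt'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    BATCHSIZE = 1000
)
-- The bcp equivalent is the -b 1000 switch.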

What procedure are you using for the bulk insertion?

 

Answer 3

Max server memory is 4 GB. Yes, AWE is enabled.

 

Answer 4

If you have 8 GB, set max server memory to 6 GB. If you run DBCC LOGINFO(dbid), how many rows does it return?
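If it helps, a quick way to check from Query Analyzer (the database name is a placeholder):

USE YourDatabase   -- placeholder name
GO
DBCC LOGINFO       -- returns one row per virtual log file (VLF); the row count is what matters here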
 

Answer 5

The bulk operation is processed by a medipay application. Before the bulk operation, the server already has the problem: the tlog backup fails, so the tlog has no bulk import records yet. But if I restart the SQL Server service, the bulk operation runs OK and the tlog backup works for some days. There is a daily full backup, so the bulk import logs shouldn't have an effect, should they?

I tried the bulk-logged recovery model before and it had the same problem. Also, according to Microsoft, the bulk-logged recovery model is for temporary use; I don't think switching between bulk-logged and full recovery every time a user does a bulk import is a real solution.

 

Answer 6

I thought backing up the tlog doesn't use SQL Server's memory (the max server memory part), does it? That is why I limited it to 4 GB, but I will definitely try setting it to 6 GB.

I ran DBCC LOGINFO and got 66 rows.

FYI, my log file is not big at all; it is only 18 MB.

 

Answer 7

Could you please provide the complete error message that you get in the log?

What SP level is your SQL Server 2000 running at?

 

- Raoul

 

Answer 8

The memory error is: 701 - There is insufficient system memory to run this query.

SQL 2000 Enterprise, SP4.

I tried the suggestions posted on this forum, but still didn't get it resolved.

 

Answer 9

I think this is a known bug, and this hotfix should help you.

 

http://support.microsoft.com/kb/895123/

 

- Raoul

 

Answer 10

I tried the hotfix; after 2 weeks it got the error again, insufficient memory to back up the tlog, but backing up the database is OK.

 

Answer 11

Hi

Have you enabled the /3GB switch in boot.ini (a hidden file in the root of the C: drive)?

If /3GB is enabled, please remove it and try again.

Also, try to avoid running applications other than SQL Server on that server.


Ramesh S
www.sequalserver.com
 

Answer 12

Going off the history of the post, the problem could be with tempdb. Did you have to restart SQL/the server after applying the hotfix? If so, you may have resized tempdb, and it has then taken a couple of weeks for the problem to reappear.

Check how much room there is on the disks where tempdb is stored. Could you create additional files for tempdb on other drives?

Check how much room there is on the disks where the transaction log is stored. What is the "Auto Growth" setting for the transaction log? You may need to set it to an MB value rather than a % value (see the sketch after these questions).

Are you maintaining the size of the transaction log? E.g. after full backups, are you truncating the log?

Are there any page files on any of the disks where the transaction logs/tempdb reside?
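A rough sketch of the two changes mentioned above; the file names, paths, and sizes are placeholders:

-- Add a second tempdb data file on another drive
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'E:\SQLData\tempdev2.ndf', SIZE = 512MB, FILEGROWTH = 128MB)

-- Grow the user database log by a fixed MB amount rather than a percentage
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_log, FILEGROWTH = 64MB)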

 

Answer 13

Tony,

1. It didn't ask for a reboot after applying the hotfix.

2. The tempdb drive has 27 GB free.

3. There is 20 GB free on the drive where the log is stored.

4. The log will be truncated after a full backup or log backup by default, won't it?

5. No page files.

Also, the database is 36 MB and the log file is 17 MB; both are very small compared to the free space on the drives.

Also, total memory usage is 1.53 GB, and there are no other applications on this server. No /3GB.

 

Answer 14



Hmm...

I've just been looking at the hotfix, and it points towards a problem with mixing 32-bit SQL with a 64-bit OS, which you don't have.

However, it does appear to be a memory issue. Have you checked to see how much RAM SQL is actually using?

In the SQL Server properties, is hyperthreading turned on? If it is, it may be worthwhile to turn it off.

Are you performing the bulk inserts into tables that have any triggers/indexes?

It might also be worth not performing a transaction log backup until the bulk inserts are completed.

 

Answer 15

SQL Server is only using 70 MB currently. There has been no bulk operation for 2 weeks, and I still get the insufficient memory error. Hyperthreading is not turned on (I can't find it in the SQL Server properties; where is that option?).
 

Answer 16

The backup buffers come from MemToLeave, which is 384 MB in size, not from the buffer pool.

It looks like your MemToLeave is not able to accommodate the backup buffers. Use the -g384 switch in the startup parameters; this will increase the MemToLeave region to 512 MB (but at the cost of reducing the buffer pool). Also capture the output of DBCC MEMORYSTATUS for a better picture. To me it looks like something is consuming the MemToLeave in such a way that it is getting choked. I have seen this many times when 3rd-party tools are used instead of native backups. You might have to try reducing the buffer size and other settings in the backup tool you are using; I have seen such issues with Backup Exec and a couple of other tools.
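For reference, a sketch of where the switch goes (the value is in MB):

-- Added to the SQL Server startup parameters
-- (Enterprise Manager: right-click the server -> Properties -> General -> Startup Parameters)
-g384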

Further, try to find out what is consuming the most buffers in the buffer pool:

------------------------------------------------------------------------------------------

/* This will run 48 times over 24 hours (one sample every 30 minutes) */

create procedure Check_mem_status
as
declare @var int
set @var = 0
while @var < 48
begin
    -- cache pages summarised by object type, followed by the full cache detail
    select objtype, sum(pagesused) as pagesused from syscacheobjects group by objtype
    select * from syscacheobjects
    waitfor delay '00:30:00'
    set @var = @var + 1
end
go
-- the GO above ends the procedure definition so the EXEC below is not compiled into its body
exec Check_mem_status

----Otherwise

When you are sure that the issue is happening, run these commands separately:

select objtype, sum(pagesused) as pagesused from syscacheobjects group by objtype
select * from syscacheobjects
dbcc memorystatus

Also:

1) Make sure that max server memory is set to 6 GB.

2) Make sure /PAE is enabled in boot.ini.

3) Make sure "set working set size" is set to 0.

4) Make sure that Lock Pages in Memory (secpol.msc) has the SQL Server service account in it.

5) Check the errorlog to see if SQL is really using AWE.
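A sketch of items 3 and 5 (the exact errorlog wording may differ):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'set working set size', 0   -- item 3
RECONFIGURE
GO
-- For item 5, check the SQL Server errorlog written at startup for a message
-- indicating that Address Windowing Extensions (AWE) are enabled.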

 

If nothing helps, then as a last step, take a dump:

1) Run DBCC DUMPTRIGGER ('set', 701).
2) After some time the dumps will have been generated.
3) Run DBCC DUMPTRIGGER ('clear', 701) after ensuring that the dumps have been generated.
4) Send the dump to Microsoft after opening a case.

Regards

Abhay

 

Answer 17

objtype  pagesused  
-------- -----------
View     6
UsrTab   1
Prepared 97
Default  2
Adhoc    97
SysTab   7
Proc     379

bucketid    cacheobjtype      objtype  objid       dbid   dbidexec uid    refcounts   usecounts   pagesused   lasttime             maxexectime          avgexectime          lastreads            lastwrites           setopts     langid dateformat status      sqlbytes    sql                                                                                                                                                                                                                                                             
----------- ----------------- -------- ----------- ------ -------- ------ ----------- ----------- ----------- -------------------- -------------------- -------------------- -------------------- -------------------- ----------- ------ ---------- ----------- ----------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
29497       Executable Plan   Prepared 821807480   4      0        1      1           2           1           0                    0                    0                    0                    0                    187         0      1          0           536         (@PSTATMAN varbinary(4))SELECT statman([plan_id],@PSTATMAN)
FROM (SELECT TOP 100 PERCENT [plan_id] FROM [dbo].[sysdbmaintplan_history] WITH(READUNCOMMITTED,SAMPLE 1.000000e+002 PERCENT) ORDER BY [plan_id]) AS _MS_UPDSTATS_TBL OPTION (BYPASS OPTIMIZER_QUEU
29497       Compiled Plan     Prepared 821807480   4      0        1      2           1           1           0                    0                    0                    0                    0                    187         0      1          0           536         (@PSTATMAN varbinary(4))SELECT statman([plan_id],@PSTATMAN)
FROM (SELECT TOP 100 PERCENT [plan_id] FROM [dbo].[sysdbmaintplan_history] WITH(READUNCOMMITTED,SAMPLE 1.000000e+002 PERCENT) ORDER BY [plan_id]) AS _MS_UPDSTATS_TBL OPTION (BYPASS OPTIMIZER_QUEU
29047       Compiled Plan     Adhoc    112225307   1      0        -2     1           1           3           0                    0                    0                    0                    0                    187         0      1          0           234         DELETE msdb.dbo.sysdbmaintplan_history WHERE plan_id = '7C4F06E4-E7A5-4096-9537-ADBEEAD523F6' AND sequence_id < 15960
29045       Executable Plan   Proc     805577908   4      4        1      1           743         1           0                    0                    0                    0                    0                    187         0      1          0           36          sp_verify_job_date
29045       Compiled Plan     Proc     805577908   4      4        1      2           1           1           0                    0                    0                    0                    0                    187         0      1          0           36          sp_verify_job_date
28447       Executable Plan   Prepared 62006381    1      0        -2     1           279         1           0                    0                    0                    0                    0                    187         0      1          0           620         (@1 varchar(100),@2 nvarchar(50),@3 nvarchar(15),@4 nvarchar(15),@5 nvarchar(50),@6 int,@7 int,@8 int,@9 nvarchar(500))INSERT [msdb].[dbo].[sysdbmaintplan_history]([plan_id],[plan_name],[server_name],[database_name],[activity],[succeeded],[duration],[error
28447       Compiled Plan     Prepared 62006381    1      0        -2     2           2           2           0                    0                    0                    0                    0                    187         0      1          0           620         (@1 varchar(100),@2 nvarchar(50),@3 nvarchar(15),@4 nvarchar(15),@5 nvarchar(50),@6 int,@7 int,@8 int,@9 nvarchar(500))INSERT [msdb].[dbo].[sysdbmaintplan_history]([plan_id],[plan_name],[server_name],[database_name],[activity],[succeeded],[duration],[error

 

Answer 18


Hi Abhay, I followed your instructions. The issue is happening now, and I have posted the results of the two queries, but I am not sure where the problem is. Can you give some advice?

Also, since I am using Local System to start the service, there is no user account in Lock Pages in Memory in the group policy. Is that the proper way? (I saw one post saying it is not necessary to put an account in Lock Pages in Memory when using Local System.)

Thanks,

George

 

 

Answer 19

Also, whenever the log backup fails, I see a "Downgrading backup buffers from 960K to 64K" message in the log just before it.
 

Answer 20

I found that BACKUP LOG ... WITH BUFFERCOUNT = 1 works for a while, but then I have to reduce MAXTRANSFERSIZE as well. Then I found that the database backup hits insufficient memory as well. Could this be a bug?
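For reference, a sketch of the command being described; the database name and backup path are placeholders:

-- Limit the backup's MemToLeave footprint by reducing its buffers and transfer size
BACKUP LOG YourDatabase
TO DISK = 'E:\Backups\YourDatabase_log.trn'
WITH BUFFERCOUNT = 1, MAXTRANSFERSIZE = 65536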
 

Answer 21

Hi Abhay, how do I send the dump file to Microsoft?
 

Answer 22

Sorry for replying so late, George... somehow I did not get an email when the reply to your post was made.

I have seen the output you posted and it looks incomplete... you should have let it run for at least some time so that we had the bigger picture and could see many objects with their sizes. The highest is 620 bytes... it looks like you ran it too late.

Anyway, if you have the dump, you can follow the support.microsoft.com link and you will find the Microsoft contact there. However, you need to pay the price; I think it is now $299...

Regards

 

Answer 23

Thank you. I will try.
 
 
 
