Tuesday, March 27, 2012
Count is different on same table? Datetime column error?

Hi,
I have two servers: SV1 is a 24/7 OLTP server and SV2 is a backup
server. A full backup of the database (Prototype) was taken from SV1 and
applied to SV2. One of the tables in the database has over 40 million
rows in it. My task is to keep only rows that are no more than 18 months old.

SV1.Prototype.dbo.tblConsignment approx. 42,000,000 rows
SV2.Prototype.dbo.tblConsignment approx. 41,800,000 rows

This is correct, as SV1 is constantly in use, so SV2 will always be a
day or two behind. The issue arises when I perform my count on both
servers. When I run the following on SV1:

select count(*) from tblConsignments
where [Date] >= '2003-08-01 00:00:00.000'

the count is approx. 25,500,000 rows; however, when I run the exact same
statement on SV2 the count is 19,500,000. Why would there be a
discrepancy of 6,000,000? The collation is the same for both servers
(SV2 is a higher-spec server; SV1 is SQL Server 2000 Enterprise Edition,
SV2 is 2000 Standard, both SP3a).

When I do a count year by year (on both servers) it comes to approx.
35,000,000; however, the total for tblConsignments is 42 million.
Possibly the answer would be to check the data more thoroughly, but
I would like to know if datetime columns can be set to different
versions?

Thanks
qh
Have you broken the data into smaller chunks? E.g. maybe a week or a month
is missing on one server...
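A chunked comparison along those lines might look like this (a sketch only; it assumes the [Date] column from the original post and is run on each server in turn):

```sql
-- Compare row counts per year on each server (run on SV1 and SV2 separately).
-- YEAR() on the datetime column groups the 42M rows into small, comparable buckets.
SELECT YEAR([Date]) AS yr, COUNT(*) AS row_cnt
FROM dbo.tblConsignments
GROUP BY YEAR([Date])
ORDER BY yr
```

Any year whose count differs between the two servers narrows the search to a few million rows instead of forty.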
--
This is my signature. It is a general reminder.
Please post DDL, sample data and desired results.
See http://www.aspfaq.com/5006 for info.

"Scott" <quackhandle1975@.yahoo.co.uk> wrote in message
news:1115224865.478078.231810@.o13g2000cwo.googlegroups.com...
Whenever you perform a COUNT(*), I'd recommend you use OPTION (MAXDOP 1) in
the SELECT to make sure the parallelism "issue" doesn't cause the
rowcount to be off.
David Gugick
Imceda Software
www.imceda.com
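Applied to the query in question, that suggestion would be (a sketch; OPTION (MAXDOP 1) is standard T-SQL query-hint syntax on SQL Server 2000):

```sql
-- Force a serial plan so a parallelism bug cannot skew the aggregate.
SELECT COUNT(*)
FROM dbo.tblConsignments
WHERE [Date] >= '20030801'   -- unambiguous basic format avoids language-dependent parsing
OPTION (MAXDOP 1)
```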
Hi,

Can you update the statistics and see:

UPDATE STATISTICS <table name>

Otherwise, execute the command below in both databases, replacing count(*)
with *, but you will need more free disk space on both servers:

select * into old_records from tblConsignments
where [Date] >= '2003-08-01 00:00:00.000'

Once the statement has completed, try:

select count(*) from old_records

But first try updating the statistics and see the difference.

Thanks
Hari
SQL Server MVP
"Scott" <quackhandle1975@.yahoo.co.uk> wrote in message
news:1115224865.478078.231810@.o13g2000cwo.googlegroups.com...
Hari,

Just for the record, the date string you give will be interpreted as
August 1, 2003 in some cases (such as us_english language settings),
and as January 8, 2003 in others (such as the French language setting).
The two SQL Server-safe formats to use are (for August 1) '20030801' and
'2003-08-01T00:00:00.000'.
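A quick way to see the ambiguity (a sketch; SET LANGUAGE changes the session's DATEFORMAT, which drives how the string is parsed):

```sql
-- Under us_english, '2003-08-01' parses as August 1 (month 8);
SET LANGUAGE us_english
SELECT MONTH(CAST('2003-08-01 00:00:00.000' AS datetime))

-- under French, the same string parses as January 8 (month 1).
SET LANGUAGE French
SELECT MONTH(CAST('2003-08-01 00:00:00.000' AS datetime))

-- The two safe forms parse identically regardless of language setting:
SELECT CAST('20030801' AS datetime),
       CAST('2003-08-01T00:00:00.000' AS datetime)
```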
Steve Kass
Drew University
Hi,

UPDATE STATISTICS regenerates the statistics the optimizer uses to cost
a query and select an appropriate index; I'm not sure how this would
affect COUNT(*). Please let me know if updating statistics has anything
to do with COUNT(*).

Regards
Vishal
Hi Guys,

Many thanks for the replies. I have taken your suggestions and tried
both UPDATE STATISTICS and OPTION (MAXDOP 1); however, I am still
getting the same rowcount anomaly. The main thing that has me
stumped is that the db on SV2 is a backup of the db on SV1!! I have
even created a small table based on a rowcount between certain years:

                SV1         SV2
older     6,486,356   6,486,356
2002      9,893,118   9,893,118
2003     10,402,588  10,402,588
2004     11,268,420  11,268,419
2005+     4,014,450   3,862,644
Totals   35,578,576  35,426,769

However, as I originally mentioned, there are over 41,000,000 rows in
both of the corresponding tables? My next plan is to perform a simple
data compare on each table.

Thanks again
qh
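A data compare of that kind can be sketched with a linked server (assumptions: a linked server named SV2 is configured on SV1; NOT EXISTS is used rather than EXCEPT, which SQL Server 2000 lacks; ConsignmentID is a hypothetical key column, substitute the table's real primary key):

```sql
-- Rows present on SV1 but missing on SV2, restricted to the range that disagrees.
-- ConsignmentID is an assumed key column for illustration only.
SELECT s1.ConsignmentID, s1.[Date]
FROM Prototype.dbo.tblConsignments AS s1
WHERE s1.[Date] >= '20050101'
  AND NOT EXISTS (SELECT *
                  FROM SV2.Prototype.dbo.tblConsignments AS s2
                  WHERE s2.ConsignmentID = s1.ConsignmentID)
```

Running this over the 2004 and 2005+ buckets should surface the roughly 150,000 rows that differ.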
I have two servers, SV1 is a 24/7 OLTP server and SV2 is a backup
server. A Full Backup of the db (prototype) was taken from SV1 was
applied to SV2. One of the tables in the database has over 40 million
rows in it. My task is to keep any rows no older than 18 months.
SV1.Prototype.dbo.tblConsignment approx. 42,000,000 rows
SV2.Prototype.dbo.tblConsignment approx. 41,800,000 rows
this is correct as SV1 is constantly in use, so SV2 will always be a
day or two behind. The issue arrises when I perform my count on both
servers, when I run the following on SV1:
select count(*) from tblConsignments
where [Date] >= '2003-08-01 00:00:00.000'
the count is approx 25,500,000 rows, however when I run the exact same
statement on SV2 the count is 19,500,000. Why would there be a
discrepancy of 6,000,000? The collation is the same for both servers
(SV2 is a higher spec server, SV1 is 2K enterprise editions, SV2 2k
standard, both SP3a).
When I do a count year by year (on both servers) it comes to aprrox.
35,000,000 however the total for the tblConsignments is 42million.
Possibly the answer would be to check the data more throughly, however
I would like to know if Datetime columns can be set to different
versions?
Thanks
qhHave you broken the data into smaller chunks? E.g. maybe a week or a month
is missing on one server...
--
This is my signature. It is a general reminder.
Please post DDL, sample data and desired results.
See http://www.aspfaq.com/5006 for info.
"Scott" <quackhandle1975@.yahoo.co.uk> wrote in message
news:1115224865.478078.231810@.o13g2000cwo.googlegroups.com...
> Hi,
> I have two servers, SV1 is a 24/7 OLTP server and SV2 is a backup
> server. A Full Backup of the db (prototype) was taken from SV1 was
> applied to SV2. One of the tables in the database has over 40 million
> rows in it. My task is to keep any rows no older than 18 months.
> SV1.Prototype.dbo.tblConsignment approx. 42,000,000 rows
> SV2.Prototype.dbo.tblConsignment approx. 41,800,000 rows
> this is correct as SV1 is constantly in use, so SV2 will always be a
> day or two behind. The issue arrises when I perform my count on both
> servers, when I run the following on SV1:
> select count(*) from tblConsignments
> where [Date] >= '2003-08-01 00:00:00.000'
> the count is approx 25,500,000 rows, however when I run the exact same
> statement on SV2 the count is 19,500,000. Why would there be a
> discrepancy of 6,000,000? The collation is the same for both servers
> (SV2 is a higher spec server, SV1 is 2K enterprise editions, SV2 2k
> standard, both SP3a).
> When I do a count year by year (on both servers) it comes to aprrox.
> 35,000,000 however the total for the tblConsignments is 42million.
> Possibly the answer would be to check the data more throughly, however
> I would like to know if Datetime columns can be set to different
> versions?
> Thanks
> qh
>|||Whenever you perform a COUNT(*), I'd recommend you use a MAXDOP (1) in
the SELECT to make sure the parallelism "issue" doesn't cause the
rowcount to be off.
--
David Gugick
Imceda Software
www.imceda.com|||Hi,
Can you update the statistics and see.
UPDATE STATISTICS <TABLE NAME>
Otherwise execute the below command in both databases replacing count(*)
with *, but you need more space in ur hard disk on both servers
select * into old_records from tblConsignments where [Date] >= '2003-08-01
00:00:00.000'
Once the statement is completed try
select count(*) from old_records
But first try updating the statistics and see the difference.
Thanks
Hari
SQL Server MVP
"Scott" <quackhandle1975@.yahoo.co.uk> wrote in message
news:1115224865.478078.231810@.o13g2000cwo.googlegroups.com...
> Hi,
> I have two servers, SV1 is a 24/7 OLTP server and SV2 is a backup
> server. A Full Backup of the db (prototype) was taken from SV1 was
> applied to SV2. One of the tables in the database has over 40 million
> rows in it. My task is to keep any rows no older than 18 months.
> SV1.Prototype.dbo.tblConsignment approx. 42,000,000 rows
> SV2.Prototype.dbo.tblConsignment approx. 41,800,000 rows
> this is correct as SV1 is constantly in use, so SV2 will always be a
> day or two behind. The issue arrises when I perform my count on both
> servers, when I run the following on SV1:
> select count(*) from tblConsignments
> where [Date] >= '2003-08-01 00:00:00.000'
> the count is approx 25,500,000 rows, however when I run the exact same
> statement on SV2 the count is 19,500,000. Why would there be a
> discrepancy of 6,000,000? The collation is the same for both servers
> (SV2 is a higher spec server, SV1 is 2K enterprise editions, SV2 2k
> standard, both SP3a).
> When I do a count year by year (on both servers) it comes to aprrox.
> 35,000,000 however the total for the tblConsignments is 42million.
> Possibly the answer would be to check the data more throughly, however
> I would like to know if Datetime columns can be set to different
> versions?
> Thanks
> qh
>|||Hari,
Just for the record, the date string you give will be
interpreted as August 1, 2003 in some cases (such as us_english language
settings),
and as January 8, 2003 in others (such as french language setting). The two
SQL Server safe formats to use are (for August 1) '20030801' and
'2003-08-01T00:00:00.000'
Steve Kass
Drew University
Hari Prasad wrote:
>Hi,
>Can you update the statistics and see.
>UPDATE STATISTICS <TABLE NAME>
>Otherwise execute the below command in both databases replacing count(*)
>with *, but you need more space in ur hard disk on both servers
>select * into old_records from tblConsignments where [Date] >= '2003-08-01
>00:00:00.000'
>Once the statement is completed try
>select count(*) from old_records
>
>But first try updating the statistics and see the difference.
>Thanks
>Hari
>SQL Server MVP
>
>"Scott" <quackhandle1975@.yahoo.co.uk> wrote in message
>news:1115224865.478078.231810@.o13g2000cwo.googlegroups.com...
>
>>Hi,
>>I have two servers, SV1 is a 24/7 OLTP server and SV2 is a backup
>>server. A Full Backup of the db (prototype) was taken from SV1 was
>>applied to SV2. One of the tables in the database has over 40 million
>>rows in it. My task is to keep any rows no older than 18 months.
>>SV1.Prototype.dbo.tblConsignment approx. 42,000,000 rows
>>SV2.Prototype.dbo.tblConsignment approx. 41,800,000 rows
>>this is correct as SV1 is constantly in use, so SV2 will always be a
>>day or two behind. The issue arrises when I perform my count on both
>>servers, when I run the following on SV1:
>>select count(*) from tblConsignments
>>where [Date] >= '2003-08-01 00:00:00.000'
>>the count is approx 25,500,000 rows, however when I run the exact same
>>statement on SV2 the count is 19,500,000. Why would there be a
>>discrepancy of 6,000,000? The collation is the same for both servers
>>(SV2 is a higher spec server, SV1 is 2K enterprise editions, SV2 2k
>>standard, both SP3a).
>>When I do a count year by year (on both servers) it comes to aprrox.
>>35,000,000 however the total for the tblConsignments is 42million.
>>Possibly the answer would be to check the data more throughly, however
>>I would like to know if Datetime columns can be set to different
>>versions?
>>Thanks
>>qh
>>
>
>|||Hi ,
Update Statistics is used for statistics generation which is used for
determining the cost of the query , not sure how this will affect count(*) ,
please let me know if update statistics has any thing to do with count(*).
Regards
Vishal
"Hari Prasad" wrote:
> Hi,
> Can you update the statistics and see.
> UPDATE STATISTICS <TABLE NAME>
> Otherwise execute the below command in both databases replacing count(*)
> with *, but you need more space in ur hard disk on both servers
> select * into old_records from tblConsignments where [Date] >= '2003-08-01
> 00:00:00.000'
> Once the statement is completed try
> select count(*) from old_records
>
> But first try updating the statistics and see the difference.
> Thanks
> Hari
> SQL Server MVP
>
> "Scott" <quackhandle1975@.yahoo.co.uk> wrote in message
> news:1115224865.478078.231810@.o13g2000cwo.googlegroups.com...
> > Hi,
> >
> > I have two servers, SV1 is a 24/7 OLTP server and SV2 is a backup
> > server. A Full Backup of the db (prototype) was taken from SV1 was
> > applied to SV2. One of the tables in the database has over 40 million
> > rows in it. My task is to keep any rows no older than 18 months.
> >
> > SV1.Prototype.dbo.tblConsignment approx. 42,000,000 rows
> > SV2.Prototype.dbo.tblConsignment approx. 41,800,000 rows
> >
> > this is correct as SV1 is constantly in use, so SV2 will always be a
> > day or two behind. The issue arrises when I perform my count on both
> > servers, when I run the following on SV1:
> >
> > select count(*) from tblConsignments
> > where [Date] >= '2003-08-01 00:00:00.000'
> >
> > the count is approx 25,500,000 rows, however when I run the exact same
> > statement on SV2 the count is 19,500,000. Why would there be a
> > discrepancy of 6,000,000? The collation is the same for both servers
> > (SV2 is a higher spec server, SV1 is 2K enterprise editions, SV2 2k
> > standard, both SP3a).
> >
> > When I do a count year by year (on both servers) it comes to aprrox.
> > 35,000,000 however the total for the tblConsignments is 42million.
> > Possibly the answer would be to check the data more throughly, however
> > I would like to know if Datetime columns can be set to different
> > versions?
> >
> > Thanks
> >
> > qh
> >
>
>|||Hi ,
Update Statistics is used for statistics generation which is used for
determining the cost of the query and selecting the appropriate index , not
sure how this will affect count(*) , please let me know if update statistics
has any thing to do with count(*).
Regards
Vishal|||Hi Guys,
many thanks for the replies, I have taken your suggestions and tried
both UPDATE STATISTICS and the OPTION (MAXDOP 1) however I am still
returning the same rowcount anomoly. The main thing that has me
stumped is that the db on SV2 is a backup from the db on SV1!! I have
even created a small table based on a rowcount between certain years.
SV1 SV2
older 6,486,356 6,486,356
2002 9,893,118 9,893,118
2003 10,402,588 10,402,588
2004 11,268,420 11,268,419
2005+ 4,014,450 3,862,644
Totals 35,578,576 35,426,769
However as I originally mentioned there is over 41,000,000 rows in both
the corresponding tables? My next plan is to perform a simple data
complare on each table.
Thanks again
qhsql
Count is different on same table? Datetime column error?
Hi,
I have two servers, SV1 is a 24/7 OLTP server and SV2 is a backup
server. A Full Backup of the db (prototype) was taken from SV1 was
applied to SV2. One of the tables in the database has over 40 million
rows in it. My task is to keep any rows no older than 18 months.
SV1.Prototype.dbo.tblConsignment approx. 42,000,000 rows
SV2.Prototype.dbo.tblConsignment approx. 41,800,000 rows
this is correct as SV1 is constantly in use, so SV2 will always be a
day or two behind. The issue arrises when I perform my count on both
servers, when I run the following on SV1:
select count(*) from tblConsignments
where [Date] >= '2003-08-01 00:00:00.000'
the count is approx 25,500,000 rows, however when I run the exact same
statement on SV2 the count is 19,500,000. Why would there be a
discrepancy of 6,000,000? The collation is the same for both servers
(SV2 is a higher spec server, SV1 is 2K enterprise editions, SV2 2k
standard, both SP3a).
When I do a count year by year (on both servers) it comes to aprrox.
35,000,000 however the total for the tblConsignments is 42million.
Possibly the answer would be to check the data more throughly, however
I would like to know if Datetime columns can be set to different
versions?
Thanks
qhHave you broken the data into smaller chunks? E.g. maybe a week or a month
is missing on one server...
This is my signature. It is a general reminder.
Please post DDL, sample data and desired results.
See http://www.aspfaq.com/5006 for info.
"Scott" <quackhandle1975@.yahoo.co.uk> wrote in message
news:1115224865.478078.231810@.o13g2000cwo.googlegroups.com...
> Hi,
> I have two servers, SV1 is a 24/7 OLTP server and SV2 is a backup
> server. A Full Backup of the db (prototype) was taken from SV1 was
> applied to SV2. One of the tables in the database has over 40 million
> rows in it. My task is to keep any rows no older than 18 months.
> SV1.Prototype.dbo.tblConsignment approx. 42,000,000 rows
> SV2.Prototype.dbo.tblConsignment approx. 41,800,000 rows
> this is correct as SV1 is constantly in use, so SV2 will always be a
> day or two behind. The issue arrises when I perform my count on both
> servers, when I run the following on SV1:
> select count(*) from tblConsignments
> where [Date] >= '2003-08-01 00:00:00.000'
> the count is approx 25,500,000 rows, however when I run the exact same
> statement on SV2 the count is 19,500,000. Why would there be a
> discrepancy of 6,000,000? The collation is the same for both servers
> (SV2 is a higher spec server, SV1 is 2K enterprise editions, SV2 2k
> standard, both SP3a).
> When I do a count year by year (on both servers) it comes to aprrox.
> 35,000,000 however the total for the tblConsignments is 42million.
> Possibly the answer would be to check the data more throughly, however
> I would like to know if Datetime columns can be set to different
> versions?
> Thanks
> qh
>|||Whenever you perform a COUNT(*), I'd recommend you use a MAXDOP (1) in
the SELECT to make sure the parallelism "issue" doesn't cause the
rowcount to be off.
David Gugick
Imceda Software
www.imceda.com|||Hi,
Can you update the statistics and see.
UPDATE STATISTICS <TABLE NAME>
Otherwise execute the below command in both databases replacing count(*)
with *, but you need more space in ur hard disk on both servers
select * into old_records from tblConsignments where [Date] >= '2003-08-
01
00:00:00.000'
Once the statement is completed try
select count(*) from old_records
But first try updating the statistics and see the difference.
Thanks
Hari
SQL Server MVP
"Scott" <quackhandle1975@.yahoo.co.uk> wrote in message
news:1115224865.478078.231810@.o13g2000cwo.googlegroups.com...
> [quoted text snipped]
>|||Hari,
Just for the record, the date string you give will be
interpreted as August 1, 2003 in some cases (such as us_english language
settings), and as January 8, 2003 in others (such as the French language
setting). The two SQL Server-safe formats to use are (for August 1)
'20030801' and '2003-08-01T00:00:00.000'.
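A quick way to see this for yourself in any SQL Server 2000 session:

SET LANGUAGE us_english
SELECT CAST('2003-08-01 00:00:00.000' AS datetime) -- August 1, 2003
SET LANGUAGE french
SELECT CAST('2003-08-01 00:00:00.000' AS datetime) -- January 8, 2003
SELECT CAST('20030801' AS datetime) -- August 1, 2003, regardless of language

So if the two servers (or the logins used) have different default
languages, the same WHERE clause can select different rows.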
Steve Kass
Drew University
Hari Prasad wrote:
>Hi,
>Can you update the statistics and see.
>UPDATE STATISTICS <TABLE NAME>
>Otherwise execute the below command in both databases replacing count(*)
>with *, but you need more space in ur hard disk on both servers
>select * into old_records from tblConsignments where [Date] >= '2003-08-01 00:00:00.000'
>Once the statement is completed try
>select count(*) from old_records
>
>But first try updating the statistics and see the difference.
>Thanks
>Hari
>SQL Server MVP
>|||Hi,
UPDATE STATISTICS generates the statistics the optimizer uses to estimate
the cost of a query; I'm not sure how this would affect count(*).
Please let me know if update statistics has anything to do with count(*).
Regards
Vishal
"Hari Prasad" wrote:
> [quoted text snipped]
>|||Hi,
UPDATE STATISTICS generates the statistics the optimizer uses to estimate
the cost of a query and to select an appropriate index; I'm not sure how
this would affect count(*). Please let me know if update statistics has
anything to do with count(*).
Regards
Vishal
Thursday, March 22, 2012
Couldn't establish trusted connections after SQL memory problems
Has anyone seen anything like this before? One of our production servers lost
the ability to establish any trusted connections last night from 5:47 until we
rebooted it this morning. From that point on, all attempts by SQL jobs
running under domain ids on the system and any BizTalk processes trying to
connect failed with the message "Login failed for user '(null)'. Reason: Not
associated with a trusted SQL Server connection.". The couple of jobs that we
have running under standard SQL logins (other than sa) had no problems.
None of our other servers had problems and the network looked ok, so I don't
think it was a domain or network connectivity problem. The reboot resolved
the problem, so it doesn't look like anything related to the domain userids
themselves had changed; SQL just suddenly started failing to be able to use
them.
Looking in the SQL Server Log I see the following messages logged (all from
the same spid) at the same time all this started. I'm interpreting these as
meaning the server ran out of memory, but cannot think of any reason why
this could have caused trusted connections to fail from that point on. BTW -
We have the latest service packs and patches (ver 8.00.818) installed. Any
ideas? Has anyone else ever seen anything like this before?
Query Memory Manager: Grants=0 Waiting=0 Maximum=150632 Available=150632
Global Memory Objects: Resource=2705 Locks=93
SQLCache=102 Replication=2
LockBytes=2 ServerGlobal=45
Xact=215
Dynamic Memory Manager: Stolen=9211 OS Reserved=16632
OS Committed=16593
OS In Use=12841
Query Plan=3349 Optimizer=0
General=14718
Utilities=13 Connection=3870
Procedure Cache: TotalProcs=922 TotalPages=3400 InUsePages=1759
Buffer Counts: Commited=208408 Target=208408 Hashed=198709
InternalReservation=201 ExternalReservation=0 Min Free=128
Buffer Distribution: Stolen=5811 Free=488 Procedures=3400
Inram=0 Dirty=46274 Kept=0
I/O=0, Latched=35, Other=152400
WARNING: Failed to reserve contiguous memory of Size= 65536.
The message
WARNING: Failed to reserve contiguous memory of Size= 65536.
shows that you have some memory pressure in the MemToLeave area of
SQL Server's address space.
Here are a couple of things regarding this error. Try troubleshooting this;
otherwise you might want to call into PSS and open up a support case -
There are two main areas of memory within SQL Server's address space, the
buffer pool (BPool) and a second memory pool sometimes called the
"MemToLeave" area. The contents of the SQL Server buffer pool include
cached table data, workspace memory used during query execution for
in-memory sorts or hashes, most cached stored procedure and query plans,
memory for locks and other internal structures, and the majority of other
miscellaneous memory needs of the SQL Server. SQL Server 7.0 introduced
dynamic memory management to the buffer pool, which means that the amount
of memory under SQL Server's direct control may grow and shrink in response
to internal SQL Server needs and external memory pressure from other
applications. It is normal for the size of the SQL Server buffer pool to
increase over time until most memory on the server is consumed. This
design can give the false appearance of a memory leak in the SQL Server
buffer pool when operating under normal circumstances. For more detailed
information please reference the Books Online articles "Server Memory
Options", "Memory Architecture", and (SQL Server 2000 only) "Effects of min
and max server memory".
The other significant memory area is sometimes called MemToLeave, and it is
primarily used by non-SQL Server code that happens to be executing within
the SQL Server process. The MemToLeave area is memory that is left
unallocated and unreserved, primarily for code that is not part of the core
SQL Server and therefore does not know how to access memory in the SQL
Server buffer pool. Some examples of components that may use this memory
include extended stored procedures, OLE Automation/COM objects, linked
server OLEDB providers and ODBC drivers, MAPI components used by SQLMail,
and thread stacks (one-half MB per thread). This does not just include the
.EXE and .DLL binary images for these components; any memory allocated at
runtime by the components listed above will also be allocated from the
MemToLeave area. Non-SQL Server code makes its memory allocation requests
directly from the OS, not from the SQL Server buffer pool. The entire SQL
Server buffer pool is reserved at server startup, so any requests for
memory made directly from the operating system must be satisfied from the
MemToLeave area, which is the only source of unreserved memory in the SQL
Server address space. SQL Server itself also uses the MemToLeave memory
area for certain allocations; for example, SQL Server 7.0 stores procedure
plans in the MemToLeave area if they are too large for a single 8KB buffer
pool page.
How to Determine Whether the Error Points to a Memory Shortage in Buffer
Pool or MemToLeave
If the 17803 error in the SQL Server errorlog is accompanied by one of the
following error messages, the memory pressure is most likely in the
MemToLeave area (see section "Troubleshooting MemToLeave Memory Pressure"
below). If the messages below do not appear with the 17803, start from
section "Troubleshooting Buffer Pool Memory Pressure".
Errors that imply insufficient contiguous free memory in the MemToLeave
area:
WARNING: Failed to reserve contiguous memory of Size= 65536.
WARNING: Clearing procedure cache to free contiguous memory.
Error: 17802, Severity: 18, State: 3 Could not create server event
thread.
SQL Server could not spawn process_loginread thread
Server: Msg 7399, Level 16, State 1, Line 1
OLE DB provider 'SQLOLEDB' reported an error. The provider ran out of
memory.
Troubleshooting MemToLeave Memory Pressure
If the 17803 and associated errors point to a shortage of memory in
MemToLeave, the items below list some of the most common causes of this
type of problem and suggest ways to alleviate the issue.
- If the server runs many concurrent linked server queries against SQL
Servers using SQLOLEDB or MSDASQL, upgrade to MDAC 2.5 or MDAC 2.6 on
the server. The MDAC 2.5 and 2.6 versions of SQLOLEDB are more
conservative with their initial memory allocations.
Perhaps the most common cause of out-of-memory conditions in the MemToLeave
area is a memory leak in a non-SQL Server component running inside the SQL
address space. Items to check:
-Check the errorlog for messages like "Using 'abc.dll' version '123' to
execute extended stored procedure 'xp_abc'." If the DLL referenced in this
message is not a Microsoft-provided DLL, you may want to consider moving it
out of the production server's address space.
-Determine whether COM objects are being executed inside SQL Server's
address space with the sp_OA stored procedures. If sp_OA is being used you
will see ODSOLE70.DLL which hosts sp_OACreate being loaded and the
following message in the errorlog:
Using 'odsole70.dll' version '2000.80.382' to execute extended stored
procedure 'sp_OACreate'.
If sp_OA stored procedures are being used, ensure that the COM objects
are being loaded out of process by passing 4 to the optional third
parameter for sp_OACreate (e.g. "EXEC sp_OACreate 'SQLDMO.SQLServer', @.obj
OUTPUT, 4").
-If linked servers using third-party OLEDB providers or ODBC drivers are
in use, these are also a possible cause of memory leaks. Evaluate whether
the linked servers can be disabled for a time as a troubleshooting step to
see whether this prevents the leak, or examine whether the provider is
still fully functional once it has been configured to run out of process by
setting the AllowInProcess registry value for the provider to 0 (the
AllowInProcess value can be found at HKLM\Software\Microsoft\MSSQLServer(or
MSSQL$instance key)\Providers\[ProviderName]).
- If the server supports heavy linked server activity or must run
memory-hungry non-SQL Server code inside the SQL Server process, you
may simply need to adjust the size of the MemToLeave area to make more
memory available to non-SQL Server memory consumers. "-g" is an
optional SQL Server startup parameter that can be used to increase the
size of the MemToLeave area. The default -g memory size is 128MB in SQL
Server 7.0 and 256MB in SQL Server 2000. You can increase the size of
the MemToLeave area by an additional 128MB by adding -g256 (SQL 7.0) or
-g384 (SQL 2000) as a server startup parameter. This setting will take
effect the next time the SQL Server service is started. Startup
parameters are added in the "General" tab of the Server Properties
dialog in Enterprise Manager.
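If you would rather script the change than use Enterprise Manager, the
startup parameters for a default instance live under the Parameters
registry key. A sketch (the SQLArg3 value name is an assumption - use the
next unused SQLArgN slot on your server, and back up the key first):

reg add "HKLM\SOFTWARE\Microsoft\MSSQLServer\MSSQLServer\Parameters" /v SQLArg3 /t REG_SZ /d "-g384"

Either way, the new value takes effect the next time the MSSQLServer
service is restarted.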
Troubleshooting Buffer Pool Memory Pressure
Because of SQL Server's dynamic memory management, it is not unusual for a
significant leak in the MemToLeave area to initially manifest itself as a
shortage of buffers in buffer pool because SQL Server will dynamically
scale down the size of the buffer pool in response to the growing number of
bytes committed within the MemToLeave area. Similarly, a memory hungry or
leaking application running on the same box can cause SQL Server to release
almost all of its BPool memory, leading to a 17803. To rule out these two
alternatives and confirm that the problem is confined to BPool, start by
looking at the Performance Monitor log you collected.
If the counter "Process(sqlservr):Working Set" was much lower than the
amount of physical RAM on the server while the insufficient memory errors
were occurring, look for another process that holds most of the memory and
pursue that process as the root of the memory pressure.
If "Process(sqlservr):Working Set" accounts for most or nearly all of the
physical RAM on the server but if "SQL Server:Memory Manager:Target Server
Memory(KB)" is only a fraction of this amount of memory, the root cause of
the problem is likely a leak in MemToLeave that had the side effect of
causing shrinkage of BPool. In this case follow the suggestions in the
previous section "Troubleshooting MemToLeave Memory Pressure".
If the root cause of the problem is a leak in or excessive demand for bpool
memory, the following should generally be true:
- Buffer pool should have already been grown to its maximum size. In
other words, "SQL Server:Memory Manager:Total Server Memory(KB)" should
be equal to "SQL Server:Memory Manager:Target Server Memory(KB)".
- Buffer pool should consume the majority of physical RAM allocated to
the SQL process ("SQL Server:Memory Manager:Target Server Memory(KB)"
should account for the majority of "Process(sqlservr):Working Set".)
- There should be high lazywriter activity ("SQLServer:Buffer Manager -
Lazy Writer Buffers/sec"). If the problem appears to be BPool memory
pressure and you see no lazywriter activity, something may be blocking
lazywriter. Consider getting one or more DBCC STACKDUMPs while in this
state.
If you have determined that the problem is a leak or excessive demand for
buffer pool memory, use perfmon to determine what is consuming the most
buffer pages. Counters to examine include:
- "SQLServer:Buffer Manager - Cache Size (pages)" (procedure cache)
- "SQLServer:Cache Manager - Cache Pages(Adhoc/Cursor/Stored Proc/etc)"
- "SQLServer:Memory Manager - Granted Workspace Memory (KB)" (query
memory)
- "SQLServer:Memory Manager - SQL Cache Memory (KB)" (cached data pages)
- "SQLServer:Memory Manager - Lock Memory (KB)"
- "SQLServer:Memory Manager - Optimizer Memory (KB)"
- "SQLServer:Buffer Manager - Stolen Pages"
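If Performance Monitor logging wasn't running at the time, a rough
equivalent can be sampled from inside SQL Server via the sysperfinfo
system table (SQL Server 2000; treat this as a sketch - the exact
counter_name strings may vary slightly between builds):

select object_name, counter_name, cntr_value
from master.dbo.sysperfinfo
where object_name like '%Memory Manager%'
   or object_name like '%Buffer Manager%'

DBCC MEMORYSTATUS gives a similar point-in-time breakdown by internal
memory consumer.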
Hope that helps!
Vikram Jayaram
Microsoft, SQL Server
This posting is provided "AS IS" with no warranties, and confers no rights.
Subscribe to MSDN & use http://msdn.microsoft.com/newsgroups.
|||SQL Server caches connection information in the MemToLeave region. If
that was the process that was attempting to make a memory reservation in
this region, then that would account for your errors.
What is happening to cause this is too many external memory processes
executing on the server. Although SQL Server has wonderful dynamic memory
managers for its internal processes, it has to appeal to the OS to manage
the external reservation calls. Over time, the MemToLeave region gets
fragmented, and the only fix is to clear those segments: at minimum a
recycle of the SQL Server services, if not a reboot of the system.
You can counterbalance this by reducing the number of external process
calls and/or adjusting the amount of memory left to the MemToLeave region
using the -g startup parameter. However, the sizing of this parameter
needs to be adjusted with the assistance of the PSS staff.
A good first step, however, would be to increase this parameter from its
default of 256 to 384, which represents MB removed from the Buffer Pool
allocation in addition to what is reserved for the UMS worker threads (128
MB with the default 256-thread configuration).
Sincerely,
Anthony Thomas
"Jeff Turlington" <Jeff Turlington@.discussions.microsoft.com> wrote in
message news:D9783602-FAD2-4730-BB96-FB7CD2438605@.microsoft.com...
Has anyone seen anything like this before? One of our production servers
lost
the ability establish any trusted connections list night from 5:47 until we
rebooted it this morning. From that point on all attempts for SQL jobs
running under domain ids on the system and any BizTalk processes trying to
connect failed with the message "Login failed for user '(null)'. Reason: Not
associated with a trusted SQL Server connection.". The couple jobs that we
have running under standard Standard users (other than sa) had no problems.
None of our other servers had problems and the network looked ok, so I don't
think it was a domain or network connectivity problem. The reboot resolved
the problem, so it doesn't look like anything related to the domain userids
themself had changed, just SQL suddenly started failing to be able to use
them.
Looking in the SQL Server Log I see the following messages logged (all from
the same spid) at the same time all this started. I'm interpreting these as
meaning the server ran out of memory, but can not think of any reason why
this could have caused trusted connections to fail from that point on. BTW -
We have the latest service packs and patches (ver 8.00.818) installed. Any
ideas? Has anyone else ever seen anything like this before?
Query Memory Manager: Grants=0 Waiting=0 Maximum=150632 Available=150632
Global Memory Objects: Resource=2705 Locks=93
SQLCache=102 Replication=2
LockBytes=2 ServerGlobal=45
Xact=215
Dynamic Memory Manager: Stolen=9211 OS Reserved=16632
OS Committed=16593
OS In Use=12841
Query Plan=3349 Optimizer=0
General=14718
Utilities=13 Connection=3870
Procedure Cache: TotalProcs=922 TotalPages=3400 InUsePages=1759
Buffer Counts: Commited=208408 Target=208408 Hashed=198709
InternalReservation=201 ExternalReservation=0 Min Free=128
Buffer Distribution: Stolen=5811 Free=488 Procedures=3400
Inram=0 Dirty=46274 Kept=0
I/O=0, Latched=35, Other=152400
WARNING: Failed to reserve contiguous memory of Size= 65536.
the ability establish any trusted connections list night from 5:47 until we
rebooted it this morning. From that point on all attempts for SQL jobs
running under domain ids on the system and any BizTalk processes trying to
connect failed with the message "Login failed for user '(null)'. Reason: Not
associated with a trusted SQL Server connection.". The couple jobs that we
have running under standard Standard users (other than sa) had no problems.
None of our other servers had problems and the network looked ok, so I don't
think it was a domain or network connectivity problem. The reboot resolved
the problem, so it doesn't look like anything related to the domain userids
themself had changed, just SQL suddenly started failing to be able to use
them.
Looking in the SQL Server Log I see the following messages logged (all from
the same spid) at the same time all this started. I'm interpreting these as
meaning the server ran out of memory, but can not think of any reason why
this could have caused trusted connections to fail from that point on. BTW -
We have the latest service packs and patches (ver 8.00.818) installed. Any
ideas? Has anyone else ever seen anything like this before?
Query Memory Manager: Grants=0 Waiting=0 Maximum=150632 Available=150632
Global Memory Objects: Resource=2705 Locks=93
SQLCache=102 Replication=2
LockBytes=2 ServerGlobal=45
Xact=215
Dynamic Memory Manager: Stolen=9211 OS Reserved=16632
OS Committed=16593
OS In Use=12841
Query Plan=3349 Optimizer=0
General=14718
Utilities=13 Connection=3870
Procedure Cache: TotalProcs=922 TotalPages=3400 InUsePages=1759
Buffer Counts: Commited=208408 Target=208408 Hashed=198709
InternalReservation=201 ExternalReservation=0 Min Free=128
Buffer Distribution: Stolen=5811 Free=488 Procedures=3400
Inram=0 Dirty=46274 Kept=0
I/O=0, Latched=35, Other=152400
WARNING: Failed to reserve contiguous memory of Size= 65536.
The message
WARNING: Failed to reserve contiguous memory of Size= 65536.
shows that you have some pressure it seems in your MemoryToleave area of
sql server's address pool.
Here are a couple of things regarding this error. Try tshooting this, else
you might want to call into PSS and open up a support case -
There are two main areas of memory within SQL Server's address space, the
buffer pool (BPool) and a second memory pool sometimes called the
"MemToLeave" area. The contents of the SQL Server buffer pool include
cached table data, workspace memory used during query execution for
in-memory sorts or hashes, most cached stored procedure and query plans,
memory for locks and other internal structures, and the majority of other
miscellaneous memory needs of the SQL Server. SQL Server 7.0 introduced
dynamic memory management to the buffer pool, which means that the amount
of memory under SQL Server's direct control may grow and shrink in response
to internal SQL Server needs and external memory pressure from other
applications. It is normal for the size of the SQL Server buffer pool to
increase over time until most memory on the server is consumed. This
design can give the false appearance of a memory leak in the SQL Server
buffer pool when operating under normal circumstances. For more detailed
information please reference the Books Online articles "Server Memory
Options", "Memory Architecture", and (SQL Server 2000 only) "Effects of min
and max server memory".
The other significant memory area is sometimes called MemToLeave, and it is
primarily used by non-SQL Server code that happens to be executing within
the SQL Server process. The MemToLeave area is memory that is left
unallocated and unreserved, primarily for code that is not part of the core
SQL Server and therefore does not know how to access memory in the SQL
Server buffer pool. Some examples of components that may use this memory
include extended stored procedures, OLE Automation/COM objects, linked
server OLEDB providers and ODBC drivers, MAPI components used by SQLMail,
and thread stacks (one-half MB per thread). This does not just include the
.EXE and .DLL binary images for these components; any memory allocated at
runtime by the components listed above will also be allocated from the
MemToLeave area. Non-SQL Server code makes its memory allocation requests
directly from the OS, not from the SQL Server buffer pool. The entire SQL
Server buffer pool is reserved at server startup, so any requests for
memory made directly from the operating system must be satisfied from the
MemToLeave area, which is the only source of unreserved memory in the SQL
Server address space. SQL Server itself also uses the MemToLeave memory
area for certain allocations; for example, SQL Server 7.0 stores procedure
plans in the MemToLeave area if they are too large for a single 8KB buffer
pool page.
How to Determine Whether the Error Points to a Memory Shortage in Buffer
Pool or MemToLeave
If the 17803 error in the SQL Server errorlog is accompanied by one of the
following error messages, the memory pressure is most likely in the
MemToLeave area (see section "Troubleshooting MemToLeave Memory Pressure"
below). If the messages below do not appear with the 17803, start from
section "Troubleshooting Buffer Pool Memory Pressure".
Errors that imply insufficient contiguous free memory in the MemToLeave
area:
WARNING: Failed to reserve contiguous memory of Size= 65536.
WARNING: Clearing procedure cache to free contiguous memory.
Error: 17802, Severity: 18, State: 3 Could not create server event
thread.
SQL Server could not spawn process_loginread thread
Server: Msg 7399, Level 16, State 1, Line 1
OLE DB provider 'SQLOLEDB' reported an error. The provider ran out of
memory.
Troubleshooting MemToLeave Memory Pressure
If the 17803 and associated errors point to a shortage of memory in
MemToLeave, the items below list some of the most common causes of this
type of problem and suggest ways to alleviate the issue.
- If the server runs many concurrent linked server queries against SQL
Servers using SQLOLEDB or MSDASQL, upgrade to MDAC 2.5 or MDAC 2.6 on
the server. The MDAC 2.5 and 2.6 versions of SQLOLEDB are more
conservative with their initial memory allocations.
Perhaps the most common cause of out-of-memory conditions in the MemToLeave
area is a memory leak in a non-SQL Server component running inside the SQL
address space. Items to check:
-Check the errorlog for messages like "Using 'abc.dll' version '123' to
execute extended stored procedure 'xp_abc'." If the dll referenced in this
message is not a Microsoft-provided DLL you may want to consider moving it
out of the production server's address space
- Determine whether COM objects are being executed inside SQL Server's
address space with the sp_OA stored procedures. If sp_OA is being used, you
will see ODSOLE70.DLL (which hosts sp_OACreate) being loaded, and the
following message in the errorlog:
Using 'odsole70.dll' version '2000.80.382' to execute extended stored
procedure 'sp_OACreate'.
If sp_OA stored procedures are being used, ensure that the COM objects
are being loaded out of process by passing 4 as the optional third
parameter to sp_OACreate (e.g. "EXEC sp_OACreate 'SQLDMO.SQLServer', @obj
OUTPUT, 4").
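For illustration, a minimal sketch of out-of-process creation; SQLDMO.SQLServer is just an example ProgID here, and any registered COM object is created the same way:

```sql
-- Sketch: create a COM object out of process by passing 4 (local,
-- out-of-process server) as the context parameter to sp_OACreate.
-- SQLDMO.SQLServer is only an example ProgID.
DECLARE @obj int, @hr int
EXEC @hr = sp_OACreate 'SQLDMO.SQLServer', @obj OUTPUT, 4
IF @hr <> 0
    EXEC sp_OAGetErrorInfo @obj        -- report the COM error
ELSE
    EXEC sp_OADestroy @obj             -- release the object when done
```

With context 4 the object's allocations are made in a separate surrogate process rather than in SQL Server's MemToLeave area, at the cost of cross-process call overhead.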
-If linked servers using third-party OLEDB providers or ODBC drivers are
in use, these are also a possible cause of memory leaks. Evaluate whether
the linked servers can be disabled for a time as a troubleshooting step to
see whether this prevents the leak, or examine whether the provider is
still fully functional once it has been configured to run out of process by
setting the AllowInProcess registry value for the provider to 0 (the
AllowInProcess value can be found under HKLM\Software\Microsoft\MSSQLServer
(or the MSSQL$instance key)\Providers\[ProviderName]).
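If you prefer to check the current setting from Query Analyzer rather than regedit, the undocumented xp_instance_regread extended procedure (which resolves the instance-specific portion of the registry path for you) can read it; SQLOLEDB below is just an example provider name:

```sql
-- Sketch, assuming the undocumented xp_instance_regread extended procedure:
-- read AllowInProcess for an example provider (SQLOLEDB).
DECLARE @allow int
EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\Providers\SQLOLEDB',
    N'AllowInProcess',
    @allow OUTPUT
SELECT AllowInProcess = @allow   -- 0 = provider runs out of process
```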
- If the server supports heavy linked server activity or must run
memory-hungry non-SQL Server code inside the SQL Server process, you
may simply need to adjust the size of the MemToLeave area to make more
memory available to non-SQL Server memory consumers. "-g" is an
optional SQL Server startup parameter that can be used to increase the
size of the MemToLeave area. The default -g memory size is 128MB in SQL
Server 7.0 and 256MB in SQL Server 2000. You can increase the size of
the MemToLeave area by an additional 128MB by adding -g256 (SQL 7.0) or
-g384 (SQL 2000) as a server startup parameter. This setting will take
effect the next time the SQL Server service is started. Startup
parameters are added in the "General" tab of the Server Properties
dialog in Enterprise Manager.
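One way to confirm the switch after the restart is to look at the registry startup arguments. The sketch below assumes a default instance and that -g was added as SQLArg3; the slot number varies with how many arguments are already configured, so verify it on your server:

```sql
-- Sketch: read one startup argument slot for a default instance using the
-- undocumented xp_regread. -d, -e and -l normally occupy SQLArg0..SQLArg2,
-- so an added -g often lands in SQLArg3 (an assumption; check your slots).
EXEC master.dbo.xp_regread
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer\Parameters',
    N'SQLArg3'
```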
Troubleshooting Buffer Pool Memory Pressure
Because of SQL Server's dynamic memory management, it is not unusual for a
significant leak in the MemToLeave area to initially manifest itself as a
shortage of buffers in buffer pool because SQL Server will dynamically
scale down the size of the buffer pool in response to the growing number of
bytes committed within the MemToLeave area. Similarly, a memory hungry or
leaking application running on the same box can cause SQL Server to release
almost all of its BPool memory, leading to a 17803. To rule out these two
alternatives and confirm that the problem is confined to BPool, start by
looking at the Performance Monitor log you collected.
If the counter "Process(sqlservr):Working Set" was much lower than the
amount of physical RAM on the server while the insufficient memory errors
were occurring, look for another process that holds most of the memory and
pursue that process as the root of the memory pressure.
If "Process(sqlservr):Working Set" accounts for most or nearly all of the
physical RAM on the server but if "SQL Server:Memory Manager:Target Server
Memory(KB)" is only a fraction of this amount of memory, the root cause of
the problem is likely a leak in MemToLeave that had the side effect of
causing shrinkage of the BPool. In this case, follow the suggestions in the
previous section "Troubleshooting MemToLeave Memory Pressure".
If the root cause of the problem is a leak in, or excessive demand for, BPool
memory, the following should generally be true:
- Buffer pool should have already been grown to its maximum size. In
other words, "SQL Server:Memory Manager:Total Server Memory(KB)" should
be equal to "SQL Server:Memory Manager:Target Server Memory(KB)".
- Buffer pool should consume the majority of physical RAM allocated to
the SQL process ("SQL Server:Memory Manager:Target Server Memory(KB)"
should account for the majority of "Process(sqlservr):Working Set".)
- There should be high lazywriter activity ("SQLServer:Buffer Manager -
Lazy Writer Buffers/sec"). If the problem appears to be BPool memory
pressure and you see no lazywriter activity, something may be blocking
lazywriter. Consider getting one or more DBCC STACKDUMPs while in this
state.
If you have determined that the problem is a leak or excessive demand for
buffer pool memory, use perfmon to determine what is consuming the most
buffer pages. Counters to examine include:
- "SQLServer:Buffer Manager - Cache Size (pages)" (procedure cache)
- "SQLServer:Cache Manager - Cache Pages(Adhoc/Cursor/Stored Proc/etc)"
- "SQLServer:Memory Manager - Granted Workspace Memory (KB)" (query
memory)
- "SQLServer:Memory Manager - SQL Cache Memory (KB)" (cached data pages)
- "SQLServer:Memory Manager - Lock Memory (KB)"
- "SQLServer:Memory Manager - Optimizer Memory (KB)"
- "SQLServer:Buffer Manager - Stolen Pages"
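The same counters are exposed inside SQL Server 2000 through the master.dbo.sysperfinfo system table, so a snapshot can be taken from Query Analyzer even when perfmon is not running (on a named instance the object names are prefixed MSSQL$instance: rather than SQLServer:):

```sql
-- Snapshot the memory-related perfmon counters from inside SQL Server 2000.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE object_name IN ('SQLServer:Memory Manager', 'SQLServer:Buffer Manager')
ORDER BY object_name, counter_name
```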
Hope that helps!
Vikram Jayaram
Microsoft, SQL Server
This posting is provided "AS IS" with no warranties, and confers no rights.
Subscribe to MSDN & use http://msdn.microsoft.com/newsgroups.
SQL Server caches connection information in the MEM TO LEAVE region. If
that was the process that was attempting to make a memory reservation in
this region, that would account for your errors.
What is happening to cause this is too many external memory processes
executing on the server. Although SQL Server has wonderful dynamic memory
managers for its internal processes, it has to appeal to the OS to manage
the external reservation calls. Over time, the MEM TO LEAVE region gets
fragmented, and the only fix is to clear those segments: at minimum a
recycling of the SQL Server service, if not a reboot of the system.
You can counterbalance this by reducing the number of external process
calls and/or adjusting the amount of memory left to the MEM TO LEAVE region
using the -g startup parameter. However, the sizing of this parameter
should be adjusted with the assistance of PSS staff.
A good first step, however, would be to increase this parameter from the
default of 256 to 384; the value represents MB removed from the buffer pool
allocation in addition to what is reserved for the UMS worker thread
stacks (about 128 MB with the default 255-worker-thread configuration).
Sincerely,
Anthony Thomas
"Jeff Turlington" <Jeff Turlington@.discussions.microsoft.com> wrote in
message news:D9783602-FAD2-4730-BB96-FB7CD2438605@.microsoft.com...
Has anyone seen anything like this before? One of our production servers
lost the ability to establish any trusted connections last night from 5:47
until we rebooted it this morning. From that point on, all attempts by SQL
jobs running under domain IDs on the system and any BizTalk processes trying
to connect failed with the message "Login failed for user '(null)'. Reason:
Not associated with a trusted SQL Server connection.". The couple of jobs
that we have running under standard SQL logins (other than sa) had no
problems. None of our other servers had problems and the network looked OK,
so I don't think it was a domain or network connectivity problem. The reboot
resolved the problem, so it doesn't look like anything related to the domain
user IDs themselves had changed; SQL just suddenly started failing to be
able to use them.
Looking in the SQL Server log, I see the following messages logged (all from
the same spid) at the time all this started. I'm interpreting these as
meaning the server ran out of memory, but I cannot think of any reason why
this would have caused trusted connections to fail from that point on. BTW,
we have the latest service packs and patches (ver 8.00.818) installed. Any
ideas? Has anyone else ever seen anything like this before?
Query Memory Manager: Grants=0 Waiting=0 Maximum=150632 Available=150632
Global Memory Objects: Resource=2705 Locks=93
SQLCache=102 Replication=2
LockBytes=2 ServerGlobal=45
Xact=215
Dynamic Memory Manager: Stolen=9211 OS Reserved=16632
OS Committed=16593
OS In Use=12841
Query Plan=3349 Optimizer=0
General=14718
Utilities=13 Connection=3870
Procedure Cache: TotalProcs=922 TotalPages=3400 InUsePages=1759
Buffer Counts: Commited=208408 Target=208408 Hashed=198709
InternalReservation=201 ExternalReservation=0 Min Free=128
Buffer Distribution: Stolen=5811 Free=488 Procedures=3400
Inram=0 Dirty=46274 Kept=0
I/O=0, Latched=35, Other=152400
WARNING: Failed to reserve contiguous memory of Size= 65536.
Couldn't establish trusted connections after SQL memory problems
The message
WARNING: Failed to reserve contiguous memory of Size= 65536.
shows that you have some memory pressure in the MemToLeave area of SQL
Server's address space.
Here are a couple of things regarding this error. Try troubleshooting with
these; otherwise you might want to call PSS and open a support case.
There are two main areas of memory within SQL Server's address space, the
buffer pool (BPool) and a second memory pool sometimes called the
"MemToLeave" area. The contents of the SQL Server buffer pool include
cached table data, workspace memory used during query execution for
in-memory sorts or hashes, most cached stored procedure and query plans,
memory for locks and other internal structures, and the majority of other
miscellaneous memory needs of the SQL Server. SQL Server 7.0 introduced
dynamic memory management to the buffer pool, which means that the amount
of memory under SQL Server's direct control may grow and shrink in response
to internal SQL Server needs and external memory pressure from other
applications. It is normal for the size of the SQL Server buffer pool to
increase over time until most memory on the server is consumed. This
design can give the false appearance of a memory leak in the SQL Server
buffer pool when operating under normal circumstances. For more detailed
information please reference the Books Online articles "Server Memory
Options", "Memory Architecture", and (SQL Server 2000 only) "Effects of min
and max server memory".
The other significant memory area is sometimes called MemToLeave, and it is
primarily used by non-SQL Server code that happens to be executing within
the SQL Server process. The MemToLeave area is memory that is left
unallocated and unreserved, primarily for code that is not part of the core
SQL Server and therefore does not know how to access memory in the SQL
Server buffer pool. Some examples of components that may use this memory
include extended stored procedures, OLE Automation/COM objects, linked
server OLEDB providers and ODBC drivers, MAPI components used by SQLMail,
and thread stacks (one-half MB per thread). This does not just include the
.EXE and .DLL binary images for these components; any memory allocated at
runtime by the components listed above will also be allocated from the
MemToLeave area. Non-SQL Server code makes its memory allocation requests
directly from the OS, not from the SQL Server buffer pool. The entire SQL
Server buffer pool is reserved at server startup, so any requests for
memory made directly from the operating system must be satisfied from the
MemToLeave area, which is the only source of unreserved memory in the SQL
Server address space. SQL Server itself also uses the MemToLeave memory
area for certain allocations; for example, SQL Server 7.0 stores procedure
plans in the MemToLeave area if they are too large for a single 8KB buffer
pool page.
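A point-in-time breakdown of both areas can be taken with the undocumented DBCC MEMORYSTATUS command; the "Dynamic Memory Manager" and "Buffer Distribution" blocks shown in the errorlog excerpts earlier in this thread are essentially the same accounting, dumped automatically when an allocation fails:

```sql
-- Undocumented command; output format varies by build and service pack.
DBCC MEMORYSTATUS
```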
- "SQLServer:Buffer Manager - Cache Size (pages)" (procedure cache)
- "SQLServer:Cache Manager - Cache Pages(Adhoc/Cursor/Stored Proc/etc)"
- "SQLServer:Memory Manager - Granted Workspace Memory (KB)" (query
memory)
- "SQLServer:Memory Manager - SQL Cache Memory (KB)" (cached data pages)
- "SQLServer:Memory Manager - Lock Memory (KB)"
- "SQLServer:Memory Manager - Optimizer Memory (KB)"
- "SQLServer:Buffer Manager - Stolen Pages"
Hope that helps!
Vikram Jayaram
Microsoft, SQL Server
This posting is provided "AS IS" with no warranties, and confers no rights.
Subscribe to MSDN & use http://msdn.microsoft.com/newsgroups.|||SQL Server cache's connection information, in the MEM TO LEAVE region. If
that was the process that was attempting to make a memory reservation in
this region, then that would account for your errors.
What is happening to cause this is too many external memory processes
executing on the server. Although SQL Server has wonderful Dynamic Memory
managers for the internal processes, it has to appeal to the OS to manage
the external reservation calls. Over time, the MEM TO LEAVE region get
fragmented and the only fix is to clear them segments. Thus, at least a
recycling of the SQL Server services if not a reboot of the system.
You can counter balance this by reducing the number of external process
calls and/or adjust the amount of memory left to the MEM TO LEAVE region
using the /g startup parameter. Howerver, the sizing of this parameter
needs to be adjusted with the assistance of the PSS staff.
A good first start, however, would be to increase from the default for this
parameter from 256 to 384, which represents MB removed from the Buffer Pool
allocation in addition to what is reserved for the UMS Worker threads, 128
MB with the default 256 threads configuration.
Sincerely,
Anthony Thomas
"Jeff Turlington" <Jeff Turlington@.discussions.microsoft.com> wrote in
message news:D9783602-FAD2-4730-BB96-FB7CD2438605@.microsoft.com...
Has anyone seen anything like this before? One of our production servers
lost
the ability establish any trusted connections list night from 5:47 until we
rebooted it this morning. From that point on all attempts for SQL jobs
running under domain ids on the system and any BizTalk processes trying to
connect failed with the message "Login failed for user '(null)'. Reason: Not
associated with a trusted SQL Server connection.". The couple jobs that we
have running under standard Standard users (other than sa) had no problems.
None of our other servers had problems and the network looked ok, so I don't
think it was a domain or network connectivity problem. The reboot resolved
the problem, so it doesn't look like anything related to the domain userids
themself had changed, just SQL suddenly started failing to be able to use
them.
Looking in the SQL Server Log I see the following messages logged (all from
the same spid) at the same time all this started. I'm interpreting these as
meaning the server ran out of memory, but can not think of any reason why
this could have caused trusted connections to fail from that point on. BTW -
We have the latest service packs and patches (ver 8.00.818) installed. Any
ideas? Has anyone else ever seen anything like this before?
Query Memory Manager: Grants=0 Waiting=0 Maximum=150632 Available=150632
Global Memory Objects: Resource=2705 Locks=93
SQLCache=102 Replication=2
LockBytes=2 ServerGlobal=45
Xact=215
Dynamic Memory Manager: Stolen=9211 OS Reserved=16632
OS Committed=16593
OS In Use=12841
Query Plan=3349 Optimizer=0
General=14718
Utilities=13 Connection=3870
Procedure Cache: TotalProcs=922 TotalPages=3400 InUsePages=1759
Buffer Counts: Commited=208408 Target=208408 Hashed=198709
InternalReservation=201 ExternalReservation=0 Min Free=128
Buffer Distribution: Stolen=5811 Free=488 Procedures=3400
Inram=0 Dirty=46274 Kept=0
I/O=0, Latched=35, Other=152400
WARNING: Failed to reserve contiguous memory of Size= 65536.
Couldn't establish trusted connections after SQL memory problems
Has anyone seen anything like this before? One of our production servers lost
the ability to establish any trusted connections last night from 5:47 until we
rebooted it this morning. From that point on all attempts for SQL jobs
running under domain ids on the system and any BizTalk processes trying to
connect failed with the message "Login failed for user '(null)'. Reason: Not
associated with a trusted SQL Server connection." The couple of jobs that we
have running under standard users (other than sa) had no problems.
None of our other servers had problems and the network looked ok, so I don't
think it was a domain or network connectivity problem. The reboot resolved
the problem, so it doesn't look like anything related to the domain userids
themselves had changed; SQL just suddenly started failing to be able to use
them.
Looking in the SQL Server Log I see the following messages logged (all from
the same spid) at the same time all this started. I'm interpreting these as
meaning the server ran out of memory, but cannot think of any reason why
this could have caused trusted connections to fail from that point on. BTW -
We have the latest service packs and patches (ver 8.00.818) installed. Any
ideas? Has anyone else ever seen anything like this before?
Query Memory Manager: Grants=0 Waiting=0 Maximum=150632 Available=150632
Global Memory Objects: Resource=2705 Locks=93
SQLCache=102 Replication=2
LockBytes=2 ServerGlobal=45
Xact=215
Dynamic Memory Manager: Stolen=9211 OS Reserved=16632
OS Committed=16593
OS In Use=12841
Query Plan=3349 Optimizer=0
General=14718
Utilities=13 Connection=3870
Procedure Cache: TotalProcs=922 TotalPages=3400 InUsePages=1759
Buffer Counts: Commited=208408 Target=208408 Hashed=198709
InternalReservation=201 ExternalReservation=0 Min Free=128
Buffer Distribution: Stolen=5811 Free=488 Procedures=3400
Inram=0 Dirty=46274 Kept=0
I/O=0, Latched=35, Other=152400
WARNING: Failed to reserve contiguous memory of Size= 65536.
The message
WARNING: Failed to reserve contiguous memory of Size= 65536.
suggests that there is pressure in the MemToLeave area of SQL Server's
address space.
Here are a couple of things regarding this error. Try troubleshooting with
these steps; otherwise you may want to call PSS and open a support case.
There are two main areas of memory within SQL Server's address space, the
buffer pool (BPool) and a second memory pool sometimes called the
"MemToLeave" area. The contents of the SQL Server buffer pool include
cached table data, workspace memory used during query execution for
in-memory sorts or hashes, most cached stored procedure and query plans,
memory for locks and other internal structures, and the majority of other
miscellaneous memory needs of the SQL Server. SQL Server 7.0 introduced
dynamic memory management to the buffer pool, which means that the amount
of memory under SQL Server's direct control may grow and shrink in response
to internal SQL Server needs and external memory pressure from other
applications. It is normal for the size of the SQL Server buffer pool to
increase over time until most memory on the server is consumed. This
design can give the false appearance of a memory leak in the SQL Server
buffer pool when operating under normal circumstances. For more detailed
information please reference the Books Online articles "Server Memory
Options", "Memory Architecture", and (SQL Server 2000 only) "Effects of min
and max server memory".
The other significant memory area is sometimes called MemToLeave, and it is
primarily used by non-SQL Server code that happens to be executing within
the SQL Server process. The MemToLeave area is memory that is left
unallocated and unreserved, primarily for code that is not part of the core
SQL Server and therefore does not know how to access memory in the SQL
Server buffer pool. Some examples of components that may use this memory
include extended stored procedures, OLE Automation/COM objects, linked
server OLEDB providers and ODBC drivers, MAPI components used by SQLMail,
and thread stacks (one-half MB per thread). This does not just include the
EXE and .DLL binary images for these components; any memory allocated at
runtime by the components listed above will also be allocated from the
MemToLeave area. Non-SQL Server code makes its memory allocation requests
directly from the OS, not from the SQL Server buffer pool. The entire SQL
Server buffer pool is reserved at server startup, so any requests for
memory made directly from the operating system must be satisfied from the
MemToLeave area, which is the only source of unreserved memory in the SQL
Server address space. SQL Server itself also uses the MemToLeave memory
area for certain allocations; for example, SQL Server 7.0 stores procedure
plans in the MemToLeave area if they are too large for a single 8KB buffer
pool page.
How to Determine Whether the Error Points to a Memory Shortage in Buffer
Pool or MemToLeave
If the 17803 error in the SQL Server errorlog is accompanied by one of the
following error messages, the memory pressure is most likely in the
MemToLeave area (see section "Troubleshooting MemToLeave Memory Pressure"
below). If the messages below do not appear with the 17803, start from
section "Troubleshooting Buffer Pool Memory Pressure".
Errors that imply insufficient contiguous free memory in the MemToLeave
area:
WARNING: Failed to reserve contiguous memory of Size= 65536.
WARNING: Clearing procedure cache to free contiguous memory.
Error: 17802, Severity: 18, State: 3 Could not create server event
thread.
SQL Server could not spawn process_loginread thread
Server: Msg 7399, Level 16, State 1, Line 1
OLE DB provider 'SQLOLEDB' reported an error. The provider ran out of
memory.
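For what it's worth, that distinction can be scripted as a first-pass check. The sketch below is mine, not part of the original guide; the signature list is taken from the messages above, and the matching is deliberately loose:

```python
# Hedged sketch: flag whether an errorlog excerpt shows the MemToLeave
# signature messages listed above, as a first-pass triage step.

MEMTOLEAVE_SIGNATURES = [
    "Failed to reserve contiguous memory",
    "Clearing procedure cache to free contiguous memory",
    "Error: 17802",
    "could not spawn process_loginread thread",
    "The provider ran out of memory",
]

def memtoleave_suspected(errorlog_lines):
    """Return True if any MemToLeave signature appears in the given lines."""
    return any(sig.lower() in line.lower()
               for line in errorlog_lines
               for sig in MEMTOLEAVE_SIGNATURES)
```

If this comes back False for the lines surrounding the 17803, start with the buffer pool section instead.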
Troubleshooting MemToLeave Memory Pressure
If the 17803 and associated errors point to a shortage of memory in
MemToLeave, the items below list some of the most common causes of this
type of problem and suggest ways to alleviate the issue.
- If the server runs many concurrent linked server queries against SQL
Servers using SQLOLEDB or MSDASQL, upgrade to MDAC 2.5 or MDAC 2.6 on
the server. The MDAC 2.5 and 2.6 versions of SQLOLEDB are more
conservative with their initial memory allocations.
Perhaps the most common cause of out-of-memory conditions in the MemToLeave
area is a memory leak in a non-SQL Server component running inside the SQL
address space. Items to check:
- Check the errorlog for messages like "Using 'abc.dll' version '123' to
execute extended stored procedure 'xp_abc'." If the dll referenced in this
message is not a Microsoft-provided DLL you may want to consider moving it
out of the production server's address space.
- Determine whether COM objects are being executed inside SQL Server's
address space with the sp_OA stored procedures. If sp_OA is being used you
will see ODSOLE70.DLL which hosts sp_OACreate being loaded and the
following message in the errorlog:
Using 'odsole70.dll' version '2000.80.382' to execute extended stored
procedure 'sp_OACreate'.
If sp_OA stored procedures are being used, ensure that the COM objects
are being loaded out of process by passing 4 to the optional third
parameter for sp_OACreate (e.g. "EXEC sp_OACreate 'SQLDMO.SQLServer', @obj
OUTPUT, 4").
- If linked servers using third-party OLEDB providers or ODBC drivers are
in use, these are also a possible cause of memory leaks. Evaluate whether
the linked servers can be disabled for a time as a troubleshooting step to
see whether this prevents the leak, or examine whether the provider is
still fully functional once it has been configured to run out of process by
setting the AllowInProcess registry value for the provider to 0 (the
AllowInProcess value can be found at HKLM\Software\Microsoft\MSSQLServer(or
MSSQL$instance key)\Providers\[ProviderName]).
- If the server supports heavy linked server activity or must run
memory-hungry non-SQL Server code inside the SQL Server process, you
may simply need to adjust the size of the MemToLeave area to make more
memory available to non-SQL Server memory consumers. "-g" is an
optional SQL Server startup parameter that can be used to increase the
size of the MemToLeave area. The default -g memory size is 128MB in SQL
Server 7.0 and 256MB in SQL Server 2000. You can increase the size of
the MemToLeave area by an additional 128MB by adding -g256 (SQL 7.0) or
-g384 (SQL 2000) as a server startup parameter. This setting will take
effect the next time the SQL Server service is started. Startup
parameters are added in the "General" tab of the Server Properties
dialog in Enterprise Manager.
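As a rough cross-check of those numbers: the address space kept out of the buffer pool is approximately the -g value plus one 0.5 MB thread stack per worker thread (255 by default in SQL Server 2000). A hedged back-of-the-envelope sketch, not an exact accounting:

```python
def memtoleave_reservation_mb(g_mb=256, max_worker_threads=255, stack_mb=0.5):
    """Approximate MB unavailable to the buffer pool: the -g area plus one
    stack per configured worker thread (SQL Server 2000 defaults assumed)."""
    return g_mb + max_worker_threads * stack_mb

# Defaults: 256 + 255 * 0.5 = 383.5 MB; with -g384 it grows to 511.5 MB.
```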
Troubleshooting Buffer Pool Memory Pressure
Because of SQL Server's dynamic memory management, it is not unusual for a
significant leak in the MemToLeave area to initially manifest itself as a
shortage of buffers in buffer pool because SQL Server will dynamically
scale down the size of the buffer pool in response to the growing number of
bytes committed within the MemToLeave area. Similarly, a memory hungry or
leaking application running on the same box can cause SQL Server to release
almost all of its BPool memory, leading to a 17803. To rule out these two
alternatives and confirm that the problem is confined to BPool, start by
looking at the Performance Monitor log you collected.
If the counter "Process(sqlservr):Working Set" was much lower than the
amount of physical RAM on the server while the insufficient memory errors
were occurring, look for another process that holds most of the memory and
pursue that process as the root of the memory pressure.
If "Process(sqlservr):Working Set" accounts for most or nearly all of the
physical RAM on the server but if "SQL Server:Memory Manager:Target Server
Memory(KB)" is only a fraction of this amount of memory, the root cause of
the problem is likely a leak in MemToLeave that had the side effect of
causing shrinkage of the BPool. In this case, follow the suggestions in the
previous section "Troubleshooting MemToLeave Memory Pressure".
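The triage described in the last few paragraphs amounts to a small decision tree. A sketch of it, with the counter names from above; the 50% thresholds are illustrative choices of mine, not from the original post:

```python
def diagnose(working_set_mb, physical_ram_mb, target_server_memory_mb):
    """Rough triage of 17803-style errors from perfmon counters:
    Process(sqlservr):Working Set, physical RAM, and
    SQL Server:Memory Manager:Target Server Memory."""
    if working_set_mb < 0.5 * physical_ram_mb:
        # Another process holds most of the RAM; pursue that process.
        return "external process pressure"
    if target_server_memory_mb < 0.5 * working_set_mb:
        # BPool shrank as a side effect of a MemToLeave leak.
        return "MemToLeave leak"
    return "buffer pool pressure"
```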
If the root cause of the problem is a leak in or excessive demand for BPool
memory, the following should generally be true:
- Buffer pool should have already been grown to its maximum size. In
other words, "SQL Server:Memory Manager:Total Server Memory(KB)" should
be equal to "SQL Server:Memory Manager:Target Server Memory(KB)".
- Buffer pool should consume the majority of physical RAM allocated to
the SQL process ("SQL Server:Memory Manager:Target Server Memory(KB)"
should account for the majority of "Process(sqlservr):Working Set".)
- There should be high lazywriter activity ("SQLServer:Buffer Manager -
Lazy Writer Buffers/sec"). If the problem appears to be BPool memory
pressure and you see no lazywriter activity, something may be blocking
lazywriter. Consider getting one or more DBCC STACKDUMPs while in this
state.
If you have determined that the problem is a leak or excessive demand for
buffer pool memory, use perfmon to determine what is consuming the most
buffer pages. Counters to examine include:
- "SQLServer:Buffer Manager - Cache Size (pages)" (procedure cache)
- "SQLServer:Cache Manager - Cache Pages(Adhoc/Cursor/Stored Proc/etc)"
- "SQLServer:Memory Manager - Granted Workspace Memory (KB)" (query
memory)
- "SQLServer:Memory Manager - SQL Cache Memory (KB)" (cached data pages)
- "SQLServer:Memory Manager - Lock Memory (KB)"
- "SQLServer:Memory Manager - Optimizer Memory (KB)"
- "SQLServer:Buffer Manager - Stolen Pages"
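With those counters collected (e.g. exported from a perfmon log), ranking them makes the dominant consumer obvious. A minimal sketch with made-up sample values:

```python
# Hypothetical snapshot of a few of the memory counters above, in KB.
counters_kb = {
    "Granted Workspace Memory (KB)": 150_000,
    "SQL Cache Memory (KB)": 1_200_000,
    "Lock Memory (KB)": 40_000,
    "Optimizer Memory (KB)": 2_000,
}

# Largest consumer first.
ranked = sorted(counters_kb.items(), key=lambda kv: kv[1], reverse=True)
```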
Hope that helps!
Vikram Jayaram
Microsoft, SQL Server
This posting is provided "AS IS" with no warranties, and confers no rights.
Subscribe to MSDN & use http://msdn.microsoft.com/newsgroups.|||SQL Server caches connection information in the MEM TO LEAVE region. If
that was the process that was attempting to make a memory reservation in
this region, then that would account for your errors.
What is happening to cause this is too many external memory processes
executing on the server. Although SQL Server has wonderful dynamic memory
managers for its internal processes, it has to appeal to the OS to manage
the external reservation calls. Over time, the MEM TO LEAVE region gets
fragmented, and the only fix is to clear those segments. That means at least a
recycle of the SQL Server services, if not a reboot of the system.
You can counterbalance this by reducing the number of external process
calls and/or adjusting the amount of memory left to the MEM TO LEAVE region
using the -g startup parameter. However, the sizing of this parameter
should be adjusted with the assistance of the PSS staff.
A good first step would be to increase this parameter from its default of
256 to 384. This value is the number of MB removed from the Buffer Pool
allocation, in addition to what is reserved for the UMS worker threads
(128 MB with the default 256-thread configuration).
Sincerely,
Anthony Thomas
"Jeff Turlington" <Jeff Turlington@discussions.microsoft.com> wrote in
message news:D9783602-FAD2-4730-BB96-FB7CD2438605@microsoft.com...
Has anyone seen anything like this before? One of our production servers
lost
the ability to establish any trusted connections last night from 5:47 until we
rebooted it this morning. From that point on all attempts for SQL jobs
running under domain ids on the system and any BizTalk processes trying to
connect failed with the message "Login failed for user '(null)'. Reason: Not
associated with a trusted SQL Server connection." The couple of jobs that we
have running under standard users (other than sa) had no problems.
None of our other servers had problems and the network looked ok, so I don't
think it was a domain or network connectivity problem. The reboot resolved
the problem, so it doesn't look like anything related to the domain userids
themselves had changed; SQL just suddenly started failing to be able to use
them.
Looking in the SQL Server Log I see the following messages logged (all from
the same spid) at the same time all this started. I'm interpreting these as
meaning the server ran out of memory, but cannot think of any reason why
this could have caused trusted connections to fail from that point on. BTW -
We have the latest service packs and patches (ver 8.00.818) installed. Any
ideas? Has anyone else ever seen anything like this before?
Query Memory Manager: Grants=0 Waiting=0 Maximum=150632 Available=150632
Global Memory Objects: Resource=2705 Locks=93
SQLCache=102 Replication=2
LockBytes=2 ServerGlobal=45
Xact=215
Dynamic Memory Manager: Stolen=9211 OS Reserved=16632
OS Committed=16593
OS In Use=12841
Query Plan=3349 Optimizer=0
General=14718
Utilities=13 Connection=3870
Procedure Cache: TotalProcs=922 TotalPages=3400 InUsePages=1759
Buffer Counts: Commited=208408 Target=208408 Hashed=198709
InternalReservation=201 ExternalReservation=0 Min Free=128
Buffer Distribution: Stolen=5811 Free=488 Procedures=3400
Inram=0 Dirty=46274 Kept=0
I/O=0, Latched=35, Other=152400
WARNING: Failed to reserve contiguous memory of Size= 65536.
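As an aside, the buffer counts in the status dump quoted above are in 8 KB pages; converting them shows the scale involved (figures from the dump, conversion mine):

```python
PAGE_KB = 8  # size of a SQL Server buffer page

def pages_to_mb(pages):
    """Convert a page count from the memory status dump to megabytes."""
    return pages * PAGE_KB / 1024

# Commited=208408 and Target=208408 pages both work out to roughly 1628 MB,
# i.e. the buffer pool had already grown to its target size.
```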
the ability establish any trusted connections list night from 5:47 until we
rebooted it this morning. From that point on all attempts for SQL jobs
running under domain ids on the system and any BizTalk processes trying to
connect failed with the message "Login failed for user '(null)'. Reason: Not
associated with a trusted SQL Server connection.". The couple jobs that we
have running under standard Standard users (other than sa) had no problems.
None of our other servers had problems and the network looked ok, so I don't
think it was a domain or network connectivity problem. The reboot resolved
the problem, so it doesn't look like anything related to the domain userids
themself had changed, just SQL suddenly started failing to be able to use
them.
Looking in the SQL Server Log I see the following messages logged (all from
the same spid) at the same time all this started. I'm interpreting these as
meaning the server ran out of memory, but can not think of any reason why
this could have caused trusted connections to fail from that point on. BTW -
We have the latest service packs and patches (ver 8.00.818) installed. Any
ideas? Has anyone else ever seen anything like this before?
Query Memory Manager: Grants=0 Waiting=0 Maximum=150632 Available=150632
Global Memory Objects: Resource=2705 Locks=93
SQLCache=102 Replication=2
LockBytes=2 ServerGlobal=45
Xact=215
Dynamic Memory Manager: Stolen=9211 OS Reserved=16632
OS Committed=16593
OS In Use=12841
Query Plan=3349 Optimizer=0
General=14718
Utilities=13 Connection=3870
Procedure Cache: TotalProcs=922 TotalPages=3400 InUsePages=1759
Buffer Counts: Commited=208408 Target=208408 Hashed=198709
InternalReservation=201 ExternalReservation=0 Min Free=128
Buffer Distribution: Stolen=5811 Free=488 Procedures=3400
Inram=0 Dirty=46274 Kept=0
I/O=0, Latched=35, Other=152400
WARNING: Failed to reserve contiguous memory of Size= 65536.The message
WARNING: Failed to reserve contiguous memory of Size= 65536.
shows that you have some pressure it seems in your MemoryToleave area of
sql server's address pool.
Here are a couple of things regarding this error. Try tshooting this, else
you might want to call into PSS and open up a support case -
There are two main areas of memory within SQL Server's address space, the
buffer pool (BPool) and a second memory pool sometimes called the
"MemToLeave" area. The contents of the SQL Server buffer pool include
cached table data, workspace memory used during query execution for
in-memory sorts or hashes, most cached stored procedure and query plans,
memory for locks and other internal structures, and the majority of other
miscellaneous memory needs of the SQL Server. SQL Server 7.0 introduced
dynamic memory management to the buffer pool, which means that the amount
of memory under SQL Server's direct control may grow and shrink in response
to internal SQL Server needs and external memory pressure from other
applications. It is normal for the size of the SQL Server buffer pool to
increase over time until most memory on the server is consumed. This
design can give the false appearance of a memory leak in the SQL Server
buffer pool when operating under normal circumstances. For more detailed
information please reference the Books Online articles "Server Memory
Options", "Memory Architecture", and (SQL Server 2000 only) "Effects of min
and max server memory".
The other significant memory area is sometimes called MemToLeave, and it is
primarily used by non-SQL Server code that happens to be executing within
the SQL Server process. The MemToLeave area is memory that is left
unallocated and unreserved, primarily for code that is not part of the core
SQL Server and therefore does not know how to access memory in the SQL
Server buffer pool. Some examples of components that may use this memory
include extended stored procedures, OLE Automation/COM objects, linked
server OLEDB providers and ODBC drivers, MAPI components used by SQLMail,
and thread stacks (one-half MB per thread). This does not just include the
EXE and .DLL binary images for these components; any memory allocated at
runtime by the components listed above will also be allocated from the
MemToLeave area. Non-SQL Server code makes its memory allocation requests
directly from the OS, not from the SQL Server buffer pool. The entire SQL
Server buffer pool is reserved at server startup, so any requests for
memory made directly from the operating system must be satisfied from the
MemToLeave area, which is the only source of unreserved memory in the SQL
Server address space. SQL Server itself also uses the MemToLeave memory
area for certain allocations; for example, SQL Server 7.0 stores procedure
plans in the MemToLeave area if they are too large for a single 8KB buffer
pool page.
How to Determine Whether the Error Points to a Memory Shortage in Buffer
Pool or MemToLeave
If the 17803 error in the SQL Server errorlog is accompanied by one of the
following error messages, the memory pressure is most likely in the
MemToLeave area (see section "Troubleshooting MemToLeave Memory Pressure"
below). If the messages below do not appear with the 17803, start from
section "Troubleshooting Buffer Pool Memory Pressure".
Errors that imply insufficient contiguous free memory in the MemToLeave
area:
WARNING: Failed to reserve contiguous memory of Size= 65536.
WARNING: Clearing procedure cache to free contiguous memory.
Error: 17802, Severity: 18, State: 3 Could not create server event
thread.
SQL Server could not spawn process_loginread thread
Server: Msg 7399, Level 16, State 1, Line 1
OLE DB provider 'SQLOLEDB' reported an error. The provider ran out of
memory.
Troubleshooting MemToLeave Memory Pressure
If the 17803 and associated errors point to a shortage of memory in
MemToLeave, the items below list some of the most common causes of this
type of problem and suggest ways to alleviate the issue.
- If the server runs many concurrent linked server queries against SQL
Servers using SQLOLEDB or MSDASQL, upgrade to MDAC 2.5 or MDAC 2.6 on
the server. The MDAC 2.5 and 2.6 versions of SQLOLEDB are more
conservative with their initial memory allocations.
Perhaps the most common cause of out-of-memory conditions in the MemToLeave
area is a memory leak in a non-SQL Server component running inside the SQL
address space. Items to check:
- Check the errorlog for messages like "Using 'abc.dll' version '123' to
execute extended stored procedure 'xp_abc'." If the dll referenced in this
message is not a Microsoft-provided DLL you may want to consider moving it
out of the production server's address space
- Determine whether COM objects are being executed inside SQL Server's
address space with the sp_OA stored procedures. If sp_OA is being used you
will see ODSOLE70.DLL which hosts sp_OACreate being loaded and the
following message in the errorlog:
Using 'odsole70.dll' version '2000.80.382' to execute extended stored
procedure 'sp_OACreate'.
If sp_OA stored procedures are being used, ensure that the COM objects
are being loaded out of process by passing 4 to the optional third
parameter for sp_OACreate (e.g. "EXEC sp_OACreate 'SQLDMO.SQLServer', @.obj
OUTPUT, 4").
- If linked servers using third-party OLEDB providers or ODBC drivers are
in use, these are also a possible cause of memory leaks. Evaluate whether
the linked servers can be disabled for a time as a troubleshooting step to
see whether this prevents the leak, or examine whether the provider is
still fully functional once it has been configured to run out of process by
setting the AllowInProcess registry value for the provider to 0 (the
AllowInProcess value can be found at HKLM\Software\Microsoft\MSSQLServer(or
MSSQL$instance key)\Providers\[ProviderName]).
- If the server supports heavy linked server activity or must run
memory-hungry non-SQL Server code inside the SQL Server process, you
may simply need to adjust the size of the MemToLeave area to make more
memory available to non-SQL Server memory consumers. "-g" is an
optional SQL Server startup parameter that can be used to increase the
size of the MemToLeave area. The default -g memory size is 128MB in SQL
Server 7.0 and 256MB in SQL Server 2000. You can increase the size of
the MemToLeave area by an additioal 128MB by adding -g256 (SQL 7.0) or
-g384 (SQL 2000) as a server startup parameter. This setting will take
effect the next time the SQL Server service is started. Startup
parameters are added in the "General" tab of the Server Properties
dialog in Enterprise Manager.
Troubleshooting Buffer Pool Memory Pressure
Because of SQL Server's dynamic memory managment, it is not unusual for a
significant leak in the MemToLeave area to initially manifest itself as a
shortage of buffers in buffer pool because SQL Server will dynamically
scale down the size of the buffer pool in response to the growing number of
bytes committed within the MemToLeave area. Similarly, a memory hungry or
leaking application running on the same box can cause SQL Server to release
almost all of its BPool memory, leading to a 17803. To rule out these two
alternatives and confirm that the problem is confined to BPool, start by
looking at the Performance Monitor log you collected.
If the counter "Process(sqlservr):Working Set" was much lower than the
amount of physical RAM on the server while the insufficient memory errors
were occurring, look for another process that holds most of the memory and
pursue that process as the root of the memory pressure.
If "Process(sqlservr):Working Set" accounts for most or nearly all of the
physical RAM on the server but if "SQL Server:Memory Manager:Target Server
Memory(KB)" is only a fraction of this amount of memory, the root cause of
the problem is likely a leak in MemToLeave that had the side effect of
causing skrinkage of BPool. In this case follow the suggestions in the
previous section "Troubleshooting MemToLeave Memory Pressure".
If the root cause of the problem is a leak in or excessive demand for bpool
memory, the following should generally be true:
- Buffer pool should have already been grown to its maximum size. In
other words, "SQL Server:Memory Manager:Total Server Memory(KB)" should
be equal to "SQL Server:Memory Manager:Target Server Memory(KB)".
- Buffer pool should consume the majority of physical RAM allocated to
the SQL process ("SQL Server:Memory Manager:Target Server Memory(KB)"
should account for the majority of "Process(sqlservr):Working Set".)
- There should be high lazywriter activity ("SQLServer:Buffer Manager -
Lazy Writer Buffers/sec"). If the problem appears to be BPool memory
pressure and you see no lazywriter activity, something may be blocking
lazywriter. Consider getting one or more DBCC STACKDUMPs while in this
state.
If you have determined that the problem is a leak or excessive demand for
buffer pool memory, use perfmon to determine what is consuming the most
buffer pages. Counters to examine include:
- "SQLServer:Buffer Manager - Cache Size (pages)" (procedure cache)
- "SQLServer:Cache Manager - Cache Pages(Adhoc/Cursor/Stored Proc/etc)"
- "SQLServer:Memory Manager - Granted Workspace Memory (KB)" (query
memory)
- "SQLServer:Memory Manager - SQL Cache Memory (KB)" (cached data pages)
- "SQLServer:Memory Manager - Lock Memory (KB)"
- "SQLServer:Memory Manager - Optimizer Memory (KB)"
- "SQLServer:Buffer Manager - Stolen Pages"
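In addition to the Performance Monitor counters above, a snapshot of the same information can be taken from inside the server. A minimal sketch for SQL Server 2000 (the exact output layout of DBCC MEMORYSTATUS varies between builds):

```sql
-- Dump the buffer distribution and memory clerks to the current
-- session; the "Buffer Distribution" section shows Stolen, Free,
-- Procedures, and Dirty page counts.
DBCC MEMORYSTATUS;

-- Rough breakdown of procedure-cache consumers by object type.
SELECT cacheobjtype, objtype,
       COUNT(*)       AS entries,
       SUM(usecounts) AS total_use
FROM   master.dbo.syscacheobjects
GROUP  BY cacheobjtype, objtype
ORDER  BY entries DESC;
```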
Hope that helps!
Vikram Jayaram
Microsoft, SQL Server
This posting is provided "AS IS" with no warranties, and confers no rights.
Subscribe to MSDN & use http://msdn.microsoft.com/newsgroups.
|||
SQL Server caches connection information in the MemToLeave region. If
that was the process that was attempting to make a memory reservation in
this region, then that would account for your errors.
What is happening to cause this is too many external memory processes
executing on the server. Although SQL Server has good dynamic memory
managers for its internal processes, it has to appeal to the OS to manage
the external reservation calls. Over time, the MemToLeave region gets
fragmented, and the only fix is to clear those segments: at least a
recycle of the SQL Server service, if not a reboot of the system.
You can counterbalance this by reducing the number of external process
calls and/or by adjusting the amount of memory left to the MemToLeave region
using the -g startup parameter. However, the sizing of this parameter
should be adjusted with the assistance of PSS staff.
A good first step would be to increase this parameter from its default of
256 to 384. The value represents MB removed from the buffer pool
allocation, in addition to what is reserved for the UMS worker threads (128
MB with the default 256-thread configuration).
Sincerely,
Anthony Thomas
"Jeff Turlington" <Jeff Turlington@.discussions.microsoft.com> wrote in
message news:D9783602-FAD2-4730-BB96-FB7CD2438605@.microsoft.com...
Has anyone seen anything like this before? One of our production servers lost
the ability to establish any trusted connections last night from 5:47 until we
rebooted it this morning. From that point on, all attempts by SQL jobs
running under domain ids on the system, and by any BizTalk processes trying to
connect, failed with the message "Login failed for user '(null)'. Reason: Not
associated with a trusted SQL Server connection.". The couple of jobs that we
have running under standard SQL users (other than sa) had no problems.
None of our other servers had problems and the network looked OK, so I don't
think it was a domain or network connectivity problem. The reboot resolved
the problem, so it doesn't look like anything related to the domain userids
themselves had changed; SQL just suddenly started failing to be able to use
them.
Looking in the SQL Server log, I see the following messages logged (all from
the same spid) at the time all this started. I'm interpreting these as
meaning the server ran out of memory, but I cannot think of any reason why
this would have caused trusted connections to fail from that point on. BTW -
we have the latest service packs and patches (ver 8.00.818) installed. Any
ideas? Has anyone else ever seen anything like this before?
Query Memory Manager:  Grants=0 Waiting=0 Maximum=150632 Available=150632
Global Memory Objects:  Resource=2705 Locks=93 SQLCache=102 Replication=2
                        LockBytes=2 ServerGlobal=45 Xact=215
Dynamic Memory Manager:  Stolen=9211 OS Reserved=16632 OS Committed=16593
                         OS In Use=12841 Query Plan=3349 Optimizer=0
                         General=14718 Utilities=13 Connection=3870
Procedure Cache:  TotalProcs=922 TotalPages=3400 InUsePages=1759
Buffer Counts:  Commited=208408 Target=208408 Hashed=198709
                InternalReservation=201 ExternalReservation=0 Min Free=128
Buffer Distribution:  Stolen=5811 Free=488 Procedures=3400
                      Inram=0 Dirty=46274 Kept=0
                      I/O=0, Latched=35, Other=152400
WARNING: Failed to reserve contiguous memory of Size= 65536.
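The startup parameter Anthony mentions (written -g on the command line) is set at service startup. As an illustrative sketch only, with the value as an example:

```
REM Add -g384 to the SQL Server 2000 startup parameters
REM (Enterprise Manager -> Server Properties -> Startup Parameters),
REM or when starting from the command line for testing:
sqlservr.exe -c -g384
```

The -c flag simply runs the engine as a console application rather than a service, which is convenient when testing startup options.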
Tuesday, March 20, 2012
Could Publisher Server Pull Subscriber Server
Hello,
I have two SQL Servers and I would like to establish a simple architecture
of one Publisher and one Subscriber. The catch is that I would like to use
SQL 2005 Express as my source DB - the production server (even though, as far as
I understand, it can only be a Subscriber), and SQL Server 2005 Standard will
be the replica DB (even though it will probably have to be the Publisher).
So, is it possible for SQL Server Standard, which acts as a Publisher, to
pull (as the replica DB) from SQL Server Express, which acts as a
Subscriber (and is the source DB)?
It is important to note that at the moment the SQL Server Express is the
only server we have, and it is operational.
I would like to add SQL Server 2005 Standard in order to let it poll the SQL
Server 2005 Express every 5 minutes (or so) and copy all the changes made on
the existing SQL 2005 Express operational server.
I do not have the option to shut down the SQL 2005 Express server in order to
upgrade it to a Standard version.
Thanks in advance
Yossi
You can use merge replication for this. However your SQL Server Standard
will have to be the publisher which can push to the Express version.
Why do you need a pull on the SQL Server standard machine?
RelevantNoise.com - dedicated to mining blogs for business intelligence.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"yoffir" <yoffir@.discussions.microsoft.com> wrote in message
news:5A0216BC-5D31-4AF2-B6BE-3A1B266C9A75@.microsoft.com...
> Hello,
> I have two SQL SERVERs and I would like to establish a simple architecture
> of one Publisher and one SUbscriber. the catch is that I would like to use
> SQL 2005 EXPRESS as my source DB - Production Server (even though, as far
> as
> I understand, it can only be a Subscriber) and SQL SERVER 2005 STANDARD
> will
> be the replica DB (even though it will probebly have to be Publisher).
> So, is it pusible that SQL SERVER STANDARD, which acts as a Publisher, be
> able to make a Pull operation (and it will be replica DB) from SQL SERVER
> EXPRESS, which acts as a Subscriber (and it will be the source DB)?
> It is important to note that at the moment the SQL SERVER EXPRESS is used
> as
> the only SERVER we have and it is operational.
> I would like to add SQL SERVER 2005 STANDARD in order to let it pull every
> 5
> minutes (or so) the SQL SERVER 2005 EXPRESS and copy all the changes made
> in
> the existing SQL 2005 EXPRESS operational server.
> I do not have the option to shutdown the SQL 2005 EXPRESS SERVER in order
> to
> upgrade it to a STANDARD vertion.
> Thanks in advanced
> Yossi
>
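Hilary's suggestion, with Standard as the publisher pushing to the Express subscriber, could be sketched roughly as follows. This is only an illustrative outline under assumed placeholder names (MyDB, MyPub, MyTable, SV-EXPRESS), not a complete setup:

```sql
-- On the Standard instance (publisher): enable the database for
-- merge publishing and create a publication with one article.
EXEC sp_replicationdboption
     @dbname  = N'MyDB',
     @optname = N'merge publish',
     @value   = N'true';

EXEC sp_addmergepublication
     @publication = N'MyPub',
     @retention   = 14;            -- days

EXEC sp_addmergearticle
     @publication   = N'MyPub',
     @article       = N'MyTable',
     @source_object = N'MyTable';

-- Then create a push subscription targeting the Express instance.
EXEC sp_addmergesubscription
     @publication       = N'MyPub',
     @subscriber        = N'SV-EXPRESS',
     @subscriber_db     = N'MyDB',
     @subscription_type = N'push';
```

A snapshot agent job and agent schedules would still be needed before the subscription can synchronize.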
|||
I want to save the huge fee for a license for the Standard version.
The database is used by many users, and a license would give Bill Gates $7000
extra in his bank account.
Currently we are using the Express version free of charge.
I would like to have a replica of that database in a remote site.
The thing is that it is possible that our programmers will use the replica
database and make changes to it. So, the original Express database must not
be updated by the replica database.
This is why I thought to use a publisher Standard database as a replica and
poll the subscriber Express database every 5 minutes for changes.
If I use a publisher Standard server, I will have to pay only $800 for a
single user license (sa).
Hope I didn't confuse you
"Hilary Cotter" wrote:
> You can use merge replication for this. However your SQL Server Standard
> will have to be the publisher which can push to the Express version.
> Why do you need a pull on the SQL Server standard machine?
> --
> RelevantNoise.com - dedicated to mining blogs for business intelligence.
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
> "yoffir" <yoffir@.discussions.microsoft.com> wrote in message
> news:5A0216BC-5D31-4AF2-B6BE-3A1B266C9A75@.microsoft.com...
>
>
Monday, March 19, 2012
Could not Re-index !
My company has 5 SQL Servers; their configuration is SQL Server 7.0 +
Service Pack 3.
But I could not run DBCC DBREINDEX on one of them and got the following
error:
DBCC DBREINDEX('dbo.SA_TRN_DTL', ' ', 0)
The error was:
Server: Msg 1105, Level 17, State 2, Line 1
Could not allocate space for object 'SA_TRN_DTL' in database 'rmsdb_prd'
because the 'PRIMARY' filegroup is full.
Index (ID = 1) is being rebuilt.
Index (ID = 3) is being rebuilt.
Index (ID = 4) is being rebuilt.
But I don't think the error came from insufficient hard disk space, because I
could execute the same command successfully on another machine that has even
less free hard disk space.
Can anybody help me?
Sometimes autogrow doesn't grow fast enough. The only way I know to handle this is to pre-allocate storage.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"cpchan" <cpchaney@.netvigator.com> wrote in message news:c97flr$rtl1@.imsp212.netvigator.com...
> My company has 5 SQL Servers, their configurations are : SQL Server 7.0 +
> Service Pack 3.
> But I could not do DBCC DBREINDEX on one of them and got the following error
> :
> DBCC DBREINDEX('dbo.SA_TRN_DTL', ' ', 0)
> The error was :
> Server: Msg 1105, Level 17, State 2, Line 1
> Could not allocate space for object 'SA_TRN_DTL' in database 'rmsdb_prd'
> because the
> 'PRIMARY' filegroup is full.
> Index (ID = 1) is being rebuilt.
> Index (ID = 3) is being rebuilt.
> Index (ID = 4) is being rebuilt.
> But I don't think the error came from insufficient hard disk space because I
> could execute the same command successfully on another machine that has even
> less free hard disk space.
> Can anybodies help me ?
>
>
>
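Tibor's advice to pre-allocate storage can be sketched in T-SQL. The logical file name and size below are placeholders; DBCC DBREINDEX needs free space in the same filegroup roughly on the order of the index being rebuilt:

```sql
-- Check current allocation and free space in the database.
EXEC sp_spaceused;

-- Pre-grow the data file in the PRIMARY filegroup before the rebuild,
-- instead of relying on autogrow during DBCC DBREINDEX.
ALTER DATABASE rmsdb_prd
MODIFY FILE (NAME = rmsdb_prd_data, SIZE = 12000MB);
```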
Have you checked the settings of your data / log files to see if they have
reached their max size?
J
"cpchan" <cpchaney@.netvigator.com> wrote in message
news:c9c7sj$j5r1618@.imsp212.netvigator.com...
> My company has 5 SQL Servers, their configurations are : SQL Server 7.0 +
> Service Pack 3.
> But I could not do DBCC DBREINDEX on one of them and got the following
error
> :
> DBCC DBREINDEX('dbo.SA_TRN_DTL', ' ', 0)
> The error was :
> Server: Msg 1105, Level 17, State 2, Line 1
> Could not allocate space for object 'SA_TRN_DTL' in database 'rmsdb_prd'
> because the
> 'PRIMARY' filegroup is full.
> Index (ID = 1) is being rebuilt.
> Index (ID = 3) is being rebuilt.
> Index (ID = 4) is being rebuilt.
> But I don't think the error came from insufficient hard disk space because
I
> could execute the same command successfully on another machine that has
even
> less free hard disk space.
> Can anybodies help me ?
>
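J's check can be run directly in the database that fails; something like the following shows each file's current size and growth limit (sizes are in 8 KB pages, and maxsize = -1 means unrestricted growth):

```sql
USE rmsdb_prd;
SELECT name,       -- logical file name
       filename,   -- physical path
       size,       -- current size, in 8 KB pages
       maxsize,    -- max size in pages; -1 = unlimited, 0 = no growth
       growth      -- growth increment (pages or percent)
FROM   sysfiles;
```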
Sunday, March 11, 2012
Could not get the data of the row from the OLE DB provider 'SQLOLE
We are running SQL 2000 SP3 on two servers. One is Windows Server 2003, the
other is Windows Server 2000. The servers are linked to one another. The 2003
server is running DTC. We are running several lightweight jobs that are,
occasionally and more often than we would like, failing with the following error:
Could not get the data of the row from the OLE DB provider 'SQLOLEDB'.
[SQLSTATE 42000] (Error 7346) [SQLSTATE 01000] (Error 7312) OLE DB error
trace [OLE/DB Provider 'SQLOLEDB' IRowset::GetData returned 0x80040e23].
[SQLSTATE 01000] (Error 7300). The step failed.
The sp basically sets variables with data from both the primary server A and
the linked server B, using views. This part is successful. The next step is
updating information on server B via a view. This appears to be where the
job is hitting the above error. The last part of the sp is another update
entirely on the local server A.
I appreciate your help...
Thanks...
Hi
Error 7312 is "Could not set up parameter for remote server '%.*ls'."
I have sometimes seen this error when a remote server has a deadlock or when
a lock cannot be acquired to satisfy the request. Have a look at what is
happening on the linked server when this occurs.
Regards
--
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"ChrisD" <ChrisD@.discussions.microsoft.com> wrote in message
news:D508BC5E-178C-4518-BF00-D17BA13143DF@.microsoft.com...
> We are running SQL2000 SP3 on two servers. One is Server 2003 the other
is
> Server 2000. Both servers are linked to one another. The 2003 server is
> running DTC. We are running several lightweight jobs that are
occasionally,
> and more often than we like, having the following error:
> Could not get the data of the row from the OLE DB provider 'SQLOLEDB'.
> [SQLSTATE 42000] (Error 7346) [SQLSTATE 01000] (Error 7312) OLE DB
error
> trace [OLE/DB Provider 'SQLOLEDB' IRowset::GetData returned 0x80040e23].
> [SQLSTATE 01000] (Error 7300). The step failed.
> The sp basically set variables with data from both the primary server A
and
> the linked server B using views. This part is successful. The next step
is
> updating information in server B via a view. This appears to be where the
> job is creating the above error. The last part of the sp is another
update
> entirely on the local server A.
> I appreciate your help...
> Thanks...
>
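Following Mike's suggestion, a quick way to see what is happening on the linked server at the time of the failure is to check for blocking and open locks there. Run these on the remote server itself, or via four-part naming through the linked server (e.g. EXEC SV2.master.dbo.sp_who2, where SV2 is a placeholder name):

```sql
-- Who is connected, and who is blocked by whom
-- (a non-zero BlkBy column indicates blocking).
EXEC sp_who2;

-- Current locks, by spid and object.
EXEC sp_lock;
```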
Could not get the data of the row from the OLE DB provider 'SQLOLE
I am running SQL 2000 on two different machines. One is 2003 server, the
other is 2000 server. DTC is set up on the 2003 server. Both servers are
linked to one another. I am running several small jobs that simply execute a
sp. I am consistently getting the following error message:
Could not get the data of the row from the OLE DB provider 'SQLOLEDB'.
[SQLSTATE 42000] (Error 7346) [SQLSTATE 01000] (Error 7312) OLE DB error
trace [OLE/DB Provider 'SQLOLEDB' IRowset::GetData returned 0x80040e23].
[SQLSTATE 01000] (Error 7300). The step failed.
The sp that is running when this error occurs is one of two that consist of
several variables that get their values from select statements through
views back into server B, while some of them are set from the local
server A.
The error occurs after these are set, when I try to do one of the two
following updates.
One is an update from server A via a view into server B. There are no date
values being passed or updated, but the view does include date fields.
The second update is all on the local server, updating a source table flag.
Any help would be greatly appreciated.
Thanks...
You might want to post your actual T-SQL statements and some configuration
values.
Sincerely,
Anthony Thomas
"ChrisD" <ChrisD@.discussions.microsoft.com> wrote in message
news:1E63096F-95DF-480E-9BFB-6B36BEE44901@.microsoft.com...
Thursday, March 8, 2012
Could not find stored procedure 'sp_MSins_tablename'.
I am using SQL Server 2000 Workgroup Edition.
When trying to replicate data between two servers I am getting this
error:
"Could not find stored procedure 'sp_MSins_tablename'."
Here's some info on this, if you are going with a 'nosync' subscription:
http://vyaskn.tripod.com/repl_ans2.htm#nosync
If you are on SQL Server 2000, all you need to generate the scripts for
these procedures is sp_scriptpublicationcustomprocs.
HTH,
Vyas, MVP (SQL Server)
SQL Server Articles and Code Samples @ http://vyaskn.tripod.com/
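To illustrate the suggestion above: sp_scriptpublicationcustomprocs is run in the publication database on the publisher, and it returns the CREATE PROCEDURE scripts for the custom sp_MSins_/sp_MSupd_/sp_MSdel_ procedures, which you then run in the subscription database. The database and publication names below are hypothetical:

```
-- Run in the publication database on the publisher.
USE MyPublishedDb;  -- hypothetical database name
EXEC sp_scriptpublicationcustomprocs @publication = N'MyPublication';
-- Copy the generated script and execute it in the subscription database
-- to create the missing sp_MSins_tablename (and related) procedures.
```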
"marios" <amarios@terravision.com-dot-lb.no-spam.invalid> wrote in message
news:ktadnVLDj9lI1WXfRVn_vQ@giganews.com...