Showing posts with label time. Show all posts

Thursday, March 22, 2012

(almost) duplicates

I have a script to remove duplicate records from a table. I have since
found out that I have dups with a date/time stamp that are a few seconds off
(thus, I guess technically making them not duplicates). How can I properly
get rid of the later records. Again the only difference is the seconds in a
date/time field.
Jeff|||Please post your table structures and sample data along with the script
which you use to distinguish the duplicates. Without clear specifications and
a usable repro, it is hard for others to understand what would
"technically" make some rows non-duplicates when in reality, they are.
--
Anith|||Any other column to identify the row?
delete t
where exists (select * from t as t1 where t1.col1 = t.col1 and
t1.col_datetime < t.col_datetime)
AMB
"J. Clarke" wrote:
> I have a script to remove duplicate records from a table. I have since
> found out that I have dups with a date/time stamp that are a few seconds off
> (thus, I guess technically making them not duplicates). How can I properly
> get rid of the later records. Again the only difference is the seconds in a
> date/time field.
> Jeff
>
>|||Hmmm...wouldn't this delete everything except the most recent record (or am
I misreading this)?
"Alejandro Mesa" <AlejandroMesa@.discussions.microsoft.com> wrote in message
news:E562DCB5-4F62-4DD2-B859-2F357856DBF4@.microsoft.com...
> Any other column to identitfy the row?
> delete t
> where exists (select * from t as t1 where t1.col1 = t.col1 and
> t1.col_datetime < t.col_datetime)
>
> AMB
> "J. Clarke" wrote:
>> I have a script to remove duplicate records from a table. I have since
>> found out that I have dups with a date/time stamp that are a few seconds
>> off
>> (thus, I guess technically making them not duplicates). How can I
>> properly
>> get rid of the later records. Again the only difference is the seconds
>> in a
>> date/time field.
>> Jeff
>>|||I was afraid of this. Basically all the fields in multiple rows contain
the same values EXCEPT the datetime field. So technically they're not
duplicates (the datetimes are a couple of seconds off from each other).
However, I know the front-end application had an error that was sticking
them in. I need to keep the 1st record it stuck in and get rid of the rest
that are a few seconds off from the 1st record.
"Anith Sen" <anith@.bizdatasolutions.com> wrote in message
news:%239EpuQ7MFHA.2656@.TK2MSFTNGP10.phx.gbl...
> Please post your table structures and sample data along with the script
> which you use to distingush the duplicates. Without clear specifications
> and an usable repro, it is hard for others to understand what would
> "technically" make some rows non-duplicates when in reality, they are.
> --
> Anith
>|||No, it wouldn't. This is a correlated subquery... Take a look at the table
aliases t and t1... the same table but treated as 2 different tables...
This takes a row from table t and is trying to decide whether or not to
delete it... It looks to find a row in the same table (but aliased to t1)
that has the same col1 (supposedly the id column) but which has a date
that is < the date of the row in t you are considering for
deletion... If that expression is true, that means there is another row
with the same key but which has an earlier date, so this row must be the
additional row that was added later, and therefore should be deleted.
Hope this helps..
--
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
"J. Clarke" <jaclarke01@.hotmail.comNOSPAM> wrote in message
news:u8lQjW$MFHA.2604@.TK2MSFTNGP10.phx.gbl...
> Hmmm...wouldn't this delete everything except the most recent record (or
> am I missreading this)?
> "Alejandro Mesa" <AlejandroMesa@.discussions.microsoft.com> wrote in
> message news:E562DCB5-4F62-4DD2-B859-2F357856DBF4@.microsoft.com...
>> Any other column to identitfy the row?
>> delete t
>> where exists (select * from t as t1 where t1.col1 = t.col1 and
>> t1.col_datetime < t.col_datetime)
>>
>> AMB
>> "J. Clarke" wrote:
>> I have a script to remove duplicate records from a table. I have since
>> found out that I have dups with a date/time stamp that are a few seconds
>> off
>> (thus, I guess technically making them not duplicates). How can I
>> properly
>> get rid of the later records. Again the only difference is the seconds
>> in a
>> date/time field.
>> Jeff
>>
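For readers following along, here is a small self-contained sketch of the correlated-subquery delete described above (the temp table, column names and sample values are illustrative, not Jeff's actual schema):

create table #dup (col1 int, col_datetime datetime)
insert #dup select 1, '2005-03-28 10:00:00'   -- first row for key 1: kept
insert #dup select 1, '2005-03-28 10:00:05'   -- later near-duplicate of key 1: deleted
insert #dup select 2, '2005-03-28 11:00:00'   -- only row for key 2: kept

delete t
from #dup as t
where exists (select *
              from #dup as t1
              where t1.col1 = t.col1
                and t1.col_datetime < t.col_datetime)

select * from #dup   -- one row per col1 remains, the earliest one
drop table #dup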
>|||On Mon, 28 Mar 2005 10:35:43 -0500, J. Clarke wrote:
>I have a script to remove duplicate records from a table. I have since
>found out that I have dups with a date/time stamp that are a few seconds off
>(thus, I guess technically making them not duplicates). How can I properly
>get rid of the later records. Again the only difference is the seconds in a
>date/time field.
>Jeff
>
Hi Jeff,
Assuming Col1, Col2 and Col3 are exactly the same and Col4 is the
datetime column with a few seconds difference, and that you want to
delete the duplicates if the time difference is no more than 20 seconds,
use:
DELETE FROM MyTable
WHERE EXISTS
(SELECT *
FROM MyTable AS b
WHERE b.Col1 = MyTable.Col1
AND b.Col2 = MyTable.Col2
AND b.Col3 = MyTable.Col3
AND b.Col4 < MyTable.Col4
AND b.Col4 >= DATEADD(second, -20, MyTable.Col4))
(untested)
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||similar to Hugo's idea you could also try the following:
DELETE MyTable
from MyTable my
LEFT JOIN (
select Col1, Col2, Col3, MIN(Col4)as MinCol4 from MyTable
group by Col1, Col2, Col3) as gb
ON my.Col1 = gb.Col1,
and my.Col2 = gb.Col2,
and my.Col3 = gb.Col3,
and my.Col4 = gb.MinCol4
where gb.MinCol4 IS NULL
This does not take into account any specific time lag. This would find
duplicates that are days apart not just 20 seconds.
This uses a subquery "group by" to identify distinct records and the
"earliest" (min) date stamp and then matches it back and drops all the
matching records that are not the earliest datetime.
The magic comes from the LEFT JOIN and the fact that MinCol4 IS NULL
Good luck.
--
Message posted via http://www.sqlmonster.com|||Just to make you feel better, you can try this test to prove the theory:
(also note: I inadvertently had commas in the "and" section of the left
join)
if exists(select name from sysobjects where name = 'MyTable' and type = 'U')
drop table MyTable
go
create table MyTable(Col1 int, Col2 varchar(10), Col3 int, Col4 datetime)
insert MyTable select 1, 'Lucy', 101, '1/1/2005'
insert MyTable select 1, 'Lucy', 101, '1/2/2005'
insert MyTable select 2, 'Ricky', 102, '1/1/2005'
insert MyTable select 2, 'Ricky', 102, '1/2/2005'
insert MyTable select 2, 'Ricky', 102, '1/3/2005'
insert MyTable select 3, 'Fred', 103, '1/1/2005'
insert MyTable select 4, 'Ethel', 104, '1/1/2005'
insert MyTable select 4, 'Ethel', 104, '1/2/2005'
insert MyTable select 4, 'Ethel', 104, '1/3/2005'
insert MyTable select 4, 'Ethel', 104, '1/4/2005'
DELETE MyTable
from MyTable my
LEFT JOIN (
select Col1, Col2, Col3, MIN(Col4)as MinCol4 from MyTable
group by Col1, Col2, Col3) as gb
ON my.Col1 = gb.Col1
and my.Col2 = gb.Col2
and my.Col3 = gb.Col3
and my.Col4 = gb.MinCol4
where gb.MinCol4 IS NULL
select * from MyTable
--
Message posted via http://www.sqlmonster.com|||Ah...Thanks for the explanation Wayne. That helps a lot.
"Wayne Snyder" <wayne.nospam.snyder@.mariner-usa.com> wrote in message
news:%232IH7KGNFHA.1476@.TK2MSFTNGP09.phx.gbl...
> No, it wouldn't . this is a correlated subquery... Take a look at the
> table alias t and t1... the same table but treated as 2 different
> tables...
> This takes a row from table t and is trying to decide whether or not to
> delete it... It looks to find a row in the same table ( but aliased to t1)
> that has the same col1 ( supposedly the id column) but which has a date
> which is < than the date of the row in t you are considering for
> deletion... If that expression is true, that means there is another row
> with the same key but which has an earlier date, so this row must be the
> additionaly row that was added later , and there fore should be deleted.
> Hope this helps..
> --
> Wayne Snyder, MCDBA, SQL Server MVP
> Mariner, Charlotte, NC
> www.mariner-usa.com
> (Please respond only to the newsgroups.)
> I support the Professional Association of SQL Server (PASS) and it's
> community of SQL Server professionals.
> www.sqlpass.org
> "J. Clarke" <jaclarke01@.hotmail.comNOSPAM> wrote in message
> news:u8lQjW$MFHA.2604@.TK2MSFTNGP10.phx.gbl...
>> Hmmm...wouldn't this delete everything except the most recent record (or
>> am I missreading this)?
>> "Alejandro Mesa" <AlejandroMesa@.discussions.microsoft.com> wrote in
>> message news:E562DCB5-4F62-4DD2-B859-2F357856DBF4@.microsoft.com...
>> Any other column to identitfy the row?
>> delete t
>> where exists (select * from t as t1 where t1.col1 = t.col1 and
>> t1.col_datetime < t.col_datetime)
>>
>> AMB
>> "J. Clarke" wrote:
>> I have a script to remove duplicate records from a table. I have since
>> found out that I have dups with a date/time stamp that are a few
>> seconds off
>> (thus, I guess technically making them not duplicates). How can I
>> properly
>> get rid of the later records. Again the only difference is the seconds
>> in a
>> date/time field.
>> Jeff
>>
>>
>|||Thanks Hugo - I'm not sure if this will work for me as the time may expand
beyond a set period of seconds (maybe to a few minutes?). I get the gist
though - I appreciate your help and explanation!
Jeff
"Hugo Kornelis" <hugo@.pe_NO_rFact.in_SPAM_fo> wrote in message
news:92nj411t3o257ghsm8jfm3oicddhrc6p3k@.4ax.com...
> On Mon, 28 Mar 2005 10:35:43 -0500, J. Clarke wrote:
>>I have a script to remove duplicate records from a table. I have since
>>found out that I have dups with a date/time stamp that are a few seconds
>>off
>>(thus, I guess technically making them not duplicates). How can I
>>properly
>>get rid of the later records. Again the only difference is the seconds in
>>a
>>date/time field.
>>Jeff
> Hi Jeff,
> Assuming Col1, Col2 and Col3 are exactly the same and Col4 is the
> datetime column with a few seconds difference, and that you want to
> delete the duplicates if the time difference is no more than 20 seconds,
> use:
> DELETE FROM MyTable
> WHERE EXISTS
> (SELECT *
> FROM MyTable AS b
> WHERE b.Col1 = MyTable.Col1
> AND b.Col2 = MyTable.Col2
> AND b.Col3 = MyTable.Col3
> AND b.Col4 < MyTable.Col4
> AND b.Col4 >= DATEADD(second, -20, MyTable.Col4))
> (untested)
> Best, Hugo
> --
> (Remove _NO_ and _SPAM_ to get my e-mail address)|||Wow! Thanks for the explanation and the example. Your example I think may
work the best for me. From one example I was peeking at, I may have up to 4000
'dups' off the 1st record (yikes!). I'm concerned, though, that I may have an
actual record on another day and I need to ensure I'm not including those
puppies.
Jeff
"Geoffrey Kahan via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:b532bdf904ce437fb849575d7ccfd8fb@.SQLMonster.com...
> Just to make you feel better, you can try this test to prove the theory:
> (also note: I inadvertantly had commas in the "and" section of the left
> join)
> if exists(select name from sysobjects where name = 'MyTable' and type = 'U')
> drop table MyTable
> go
> create table MyTable(Col1 int, Col2 varchar(10), Col3 int, Col4 datetime)
> insert MyTable select 1, 'Lucy', 101, '1/1/2005'
> insert MyTable select 1, 'Lucy', 101, '1/2/2005'
> insert MyTable select 2, 'Ricky', 102, '1/1/2005'
> insert MyTable select 2, 'Ricky', 102, '1/2/2005'
> insert MyTable select 2, 'Ricky', 102, '1/3/2005'
> insert MyTable select 3, 'Fred', 103, '1/1/2005'
> insert MyTable select 4, 'Ethel', 104, '1/1/2005'
> insert MyTable select 4, 'Ethel', 104, '1/2/2005'
> insert MyTable select 4, 'Ethel', 104, '1/3/2005'
> insert MyTable select 4, 'Ethel', 104, '1/4/2005'
> DELETE MyTable
> from MyTable my
> LEFT JOIN (
> select Col1, Col2, Col3, MIN(Col4)as MinCol4 from MyTable
> group by Col1, Col2, Col3) as gb
> ON my.Col1 = gb.Col1
> and my.Col2 = gb.Col2
> and my.Col3 = gb.Col3
> and my.Col4 = gb.MinCol4
> where gb.MinCol4 IS NULL
> select * from MyTable
> --
> Message posted via http://www.sqlmonster.com|||I don't think this is gonna work Geoffrey - I have 928,457 records in the DB
and the script's been running now for 16+ hours - I don't think my sites are
gonna want to be shut down this long. It's on my local box; nothing other
than SQL QA is connected to it.
Here is my complete script (maybe I did it wrong):
DELETE dbo.TRM_VISN_REPORT
from dbo.TRM_VISN_REPORT my
LEFT JOIN (
select VistaUserName, MIN(DateTimeofCall)as MinDateTimeofCall,
UserLocStation, CallDuration, CPTCode, CPTDescription, ClinicalCall,
RegisteredPatient, ChiefComplaint, FollowupIntRec,
FollowupIntAct, FollowupLoc, CallerResponse, CallerArea, VEJDIFN,
TypeOfCall, CallFiledAtStation, PatientName, SSN
from dbo.TRM_VISN_REPORT
group by VistaUserName, UserLocStation, CallDuration, CPTCode,
CPTDescription, ClinicalCall, RegisteredPatient, ChiefComplaint,
FollowupIntRec,
FollowupIntAct, FollowupLoc, CallerResponse, CallerArea, VEJDIFN,
TypeOfCall, CallFiledAtStation, PatientName, SSN) as gb
ON my.VistaUserName = gb.VistaUserName
and my.DateTimeofCall = gb.MinDateTimeofCall
and my.UserLocStation = gb.UserLocStation
and my.CallDuration = gb.CallDuration
and my.CPTCode = gb.CPTCode
and my.CPTDescription = gb.CPTDescription
and my.ClinicalCall = gb.ClinicalCall
and my.RegisteredPatient = gb.RegisteredPatient
and my.ChiefComplaint = gb.ChiefComplaint
and my.FollowupIntRec = gb.FollowupIntRec
and my.FollowupIntAct = gb.FollowupIntAct
and my.FollowupLoc = gb.FollowupLoc
and my.CallerResponse = gb.CallerResponse
and my.CallerArea = gb.CallerArea
and my.VEJDIFN = gb.VEJDIFN
and my.TypeOfCall = gb.TypeOfCall
and my.CallFiledAtStation = gb.CallFiledAtStation
and my.PatientName = gb.PatientName
and my.SSN = gb.SSN
where gb.MinDateTimeofCall IS NULL
Jeff
"Geoffrey Kahan via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:b532bdf904ce437fb849575d7ccfd8fb@.SQLMonster.com...
> Just to make you feel better, you can try this test to prove the theory:
> (also note: I inadvertantly had commas in the "and" section of the left
> join)
> if exists(select name from sysobjects where name = 'MyTable' and type ='U')
> drop table MyTable
> go
> create table MyTable(Col1 int, Col2 varchar(10), Col3 int, Col4 datetime)
> insert MyTable select 1, 'Lucy', 101, '1/1/2005'
> insert MyTable select 1, 'Lucy', 101, '1/2/2005'
> insert MyTable select 2, 'Ricky', 102, '1/1/2005'
> insert MyTable select 2, 'Ricky', 102, '1/2/2005'
> insert MyTable select 2, 'Ricky', 102, '1/3/2005'
> insert MyTable select 3, 'Fred', 103, '1/1/2005'
> insert MyTable select 4, 'Ethel', 104, '1/1/2005'
> insert MyTable select 4, 'Ethel', 104, '1/2/2005'
> insert MyTable select 4, 'Ethel', 104, '1/3/2005'
> insert MyTable select 4, 'Ethel', 104, '1/4/2005'
> DELETE MyTable
> from MyTable my
> LEFT JOIN (
> select Col1, Col2, Col3, MIN(Col4)as MinCol4 from MyTable
> group by Col1, Col2, Col3) as gb
> ON my.Col1 = gb.Col1
> and my.Col2 = gb.Col2
> and my.Col3 = gb.Col3
> and my.Col4 = gb.MinCol4
> where gb.MinCol4 IS NULL
> select * from MyTable
> --
> Message posted via http://www.sqlmonster.com
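Jeff's 19-column GROUP BY above has to aggregate and rejoin the full 928,457-row table, which may explain the long runtime. One thing worth trying (a sketch, not from the thread) is to keep Hugo's EXISTS pattern, limit the comparison window to a few minutes so that a real record on another day is never touched, and support it with an index. The assumption that VistaUserName, VEJDIFN and SSN identify a duplicate, the five-minute window, and the index name are all illustrative and would need checking against the real data first.

-- Illustrative only: assumes VistaUserName + VEJDIFN + SSN identify a duplicate,
-- and that the spurious copies land within 5 minutes of the original record.
create index IX_TRM_VISN_DUPCHECK
    on dbo.TRM_VISN_REPORT (VistaUserName, VEJDIFN, SSN, DateTimeofCall)

delete r
from dbo.TRM_VISN_REPORT as r
where exists (select *
              from dbo.TRM_VISN_REPORT as r2
              where r2.VistaUserName = r.VistaUserName
                and r2.VEJDIFN = r.VEJDIFN
                and r2.SSN = r.SSN
                and r2.DateTimeofCall < r.DateTimeofCall
                and r2.DateTimeofCall > dateadd(minute, -5, r.DateTimeofCall))

-- drop the helper index afterwards if it is not wanted permanently
drop index TRM_VISN_REPORT.IX_TRM_VISN_DUPCHECK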

Monday, March 19, 2012

"Slipt" rows based on datetime

Hello!

I have a table that, among other columns, has two datetime columns which indicate the initial and the final time. This would be an example of data in this table:

row1:

initial_time: 2006-05-24 8:00:00

final_time: 2006-05-24 8:30:00

row2:

initial_time: 2006-05-24 8:35:00

final_time: 2006-05-24 9:15:00

I would like to split a row into two new rows if the final time's hour is different from the initial time's hour, so I would like to split row2 into:

row2_a:

initial_time: 2006-05-24 8:35:00

final_time: 2006-05-24 8:59:59

row2_b:

initial_time: 2006-05-24 9:00:00

final_time: 2006-05-24 9:15:00

Is it possible to do it in a query, I mean, without using procedures?

Thank you!

? It gets a bit complex, but here's one way:

create table #table ( initial_time datetime, final_time datetime)
insert #table values( '2006-05-24 8:00:00', '2006-05-24 8:30:00')
insert #table values( '2006-05-24 8:35:00', '2006-05-24 9:15:00')

SELECT
    CASE x.N
        WHEN 0 THEN T.initial_time
        WHEN 1 THEN DATEADD(ms, ((DATEPART(minute, T.final_time) * 60000) + (DATEPART(second, T.final_time) * 1000) + DATEPART(millisecond, T.final_time)) * -1, T.final_time)
    END AS initial_time,
    CASE x.N
        WHEN 0 THEN
            CASE DATEDIFF(hour, T.initial_time, T.final_time)
                WHEN 0 THEN T.final_time
                WHEN 1 THEN DATEADD(millisecond, -3, DATEADD(millisecond, ((DATEPART(minute, T.final_time) * 60000) + (DATEPART(second, T.final_time) * 1000) + DATEPART(millisecond, T.final_time)) * -1, T.final_time))
            END
        WHEN 1 THEN T.final_time
    END AS final_time
FROM #table T
JOIN ( SELECT 0 UNION ALL SELECT 1) x (N) ON x.N <= DATEDIFF(hour, T.initial_time, T.final_time)

drop table #table
go

--
Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457
--
<AnaC@.discussions.microsoft.com> wrote in message news:bdb330d8-5a34-409c-aaf3-57044628fc13@.discussions.microsoft.com...|||Thank you!
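Note that the derived table in Adam's query only supplies the offsets 0 and 1, so it splits a row across at most one hour boundary, which matches the sample data in the question. If a span could cross several hour boundaries, a sketch along the same lines (the column names come from the question; the #t table, the sample row and the size of the offset list are illustrative assumptions):

create table #t (initial_time datetime, final_time datetime)
insert #t values ('2006-05-24 8:35:00', '2006-05-24 10:15:00')

SELECT
    -- the first segment starts at the row's own start; later segments start on the hour
    CASE WHEN n.N = 0 THEN t.initial_time
         ELSE DATEADD(hour, n.N, DATEADD(hour, DATEDIFF(hour, 0, t.initial_time), 0))
    END AS initial_time,
    -- the last segment ends at the row's own end; earlier segments end at hh:59:59
    CASE WHEN n.N = DATEDIFF(hour, t.initial_time, t.final_time) THEN t.final_time
         ELSE DATEADD(second, 3599, DATEADD(hour, n.N, DATEADD(hour, DATEDIFF(hour, 0, t.initial_time), 0)))
    END AS final_time
FROM #t AS t
JOIN (SELECT 0 UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3) AS n (N)
    ON n.N <= DATEDIFF(hour, t.initial_time, t.final_time)

drop table #t

Extend the SELECT 0 UNION ALL ... list (or join to a numbers table) to cover the longest span you expect.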

Friday, March 16, 2012

"sa" account and domain level rights

Hey everyone,
I apologize for the newbie question but I'm looking for the correct answer. We have 4 production SQL servers at this time. When we had originally set them up the "sa" account belonged to the domain administrators group. Since we have a SQL admin team and a domain admin team we would like to remove this privilege. Is this something we can and should do? Our SQL servers use mixed mode authentication and some databases are configured for Windows authentication. I would appreciate any input from the community.

You are mixing some things up:

"When we had originally set them up the "sa" account belonged to the domain administrators group."

As sa is a sql server login it cannot belong to any domain group.


"Our SQL servers use mixed mode authentication and some databases are configured for Windows authentication. "

Authentication takes place at the server level, not the database level, so although you switched on Mixed Auth at the server level you can still have authorized users in the database who are authenticated via SQL Server authentication.

Server --Authentication
Database --Authorization


HTH, Jens Suessmeyer.

|||Sorry for the confusion. Thanks for the response Jens. We have a domain account called "sa" which was used to create databases on our SQL server, so it is the current db owner. We use Windows authentication so that credentials are passed via Active Directory from an end user's network logon to our SQL server. Also, we use the Active Directory "sa" account to log in to SQL and it allows our SQL administrators to create Active Directory / SQL accounts to facilitate Windows authentication. My concern is that if we remove domain administration rights from this account we risk breaking applications that have the "sa" account as the database owner.
|||

OK, let's clarify something.

For the domain user account sa, the following chain applies:

Domain Administrator --> (inherits) local administrative rights on the SQL Server machine --> (inherits) the sysadmin server role, because the local admin group (and therefore also the domain admins) is in the sysadmin role by default --> (inherits) the db_owner role in the database, because sysadmins map to the database owner (dbo) by default.

So, breaking the chain:

- Revoking domain administrative rights from the user sa won't break the chain if you grant him local admin rights on the server.

- Revoking domain administrative rights and not giving him local administrative rights on the server won't break the chain if you put him into the sysadmin server role.

- Revoking domain administrative rights, not giving him local administrative rights on the server, and not putting him in the sysadmin role does break the chain; he then keeps only the db_owner access he already has in the database.


I hope that clarified it a bit. If you didn't understand my explanation or you have a further question, don't hesitate to ask :-)

HTH, Jens Suessmeyer.

|||That's a great explanation Jens. Thank you very much. We are going to schedule an off hours test. First step will be to add the domain "sa" account to the local administrator group on our SQL servers. The second step will be removing the account from the domain administrators group. Then we will test and hope for the best. From what I've read, I did not see anything about the "sa" account requiring domain level rights (only local). So, you re-affirmed my thinking. I really appreciate your input.
|||You don't even need to put him in the local admin group. Local admins are in the sysadmin role by default, but you can also put specific logins in that role directly. That way you can restrict the user further, so he does not need local admin rights at all.

HTH, Jens Suessmeyer.
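For reference, a hedged T-SQL sketch of that last suggestion: putting the Windows account straight into the sysadmin fixed server role, so it needs neither domain admin nor local admin membership. The MYDOMAIN\sa login name is illustrative.

-- Create the Windows login if it does not exist yet, then add it to sysadmin.
EXEC sp_grantlogin 'MYDOMAIN\sa'
EXEC sp_addsrvrolemember 'MYDOMAIN\sa', 'sysadmin'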

Sunday, March 11, 2012

"Now" for time dimesion selection

A common requirement we see is to report against a sliding time window.
However, as far as I can tell, SSAS has no native ability to give me the "Now" time dimension member (that is, the year, month, day, ..., dimension member for the current moment in time). Is that correct?

Are there any techniques people have used to derive that info? It seems like something other people must have run in to.

Kevin,

You can use the VB Now() function and then format the result to build a member matching the format you use for the date in your time dimension. Here is an example against "Adventure Works". I had to subtract 900 days from the current time to render a member that exists in the "Date" dimension included with "Adventure Works", but I think you will get the idea.

WITH
MEMBER [Now Formatted] AS
    Format(Now() - 900, "MMMM d, yyyy")
SET [Current Day] AS
    StrToSet("[Date].[Calendar].[Date].[" + [Now Formatted] + "]")
SELECT
    {[Current Day]} ON COLUMNS,
    {[Measures].[Order Count], [Now Formatted]} ON ROWS
FROM
    [Adventure Works]

|||

It is not good style to concatenate the unique or fully qualified name of members.

Another, much more elegant way in AS2005 is to use MemberValue.
MemberValue is a strongly typed value. You shouldn't use awkward formatting.

|||

Vladimir is correct in making the observation that the previous solution I offered is not optimal. I was posting a "hammer and nails" approach to the question. As Vladimir points out, a better method would be to set the "Member Value" property of your date attribute to be the date from your dimension table. You could then use the following MDX:

WITH
SET [Current Day] AS
    Filter([Date].[Calendar].[Date].Members,
           [Date].[Calendar].CurrentMember.MemberValue = (Round(Now()) - 900))
SELECT
    {[Current Day]} ON COLUMNS,
    {[Measures].[Order Count]} ON ROWS
FROM
    [Adventure Works]

HTH,

Steve

|||

There are many ways to make a sliding time member in MDX. It all depends on your business requirements and how you have constructed your cube.

One approach is already presented.

The second way will work if you have a time dimension with members only up to the current date. In this case you start at the top of your time dimension and use (MDX) LastChild (from the top member) until you hit the right level and member.

The third approach will work if you have a time dimension that extends across the current date to the full year or an additional year. This is the approach when you have budgets or forecasts involved. In this case you will have to find the last period where the measure is not empty. MDX Tail() and Filter() on a measure that shows the current date (like actual sales) will help you here. Make a named set of this Time member.

Mosha and his co-writers have an interesting solution to the third approach in "Fast Track to MDX, Second Edition". They discuss recursion in a chapter of this book.

HTH

Thomas Ivarsson

Thursday, March 8, 2012

"NAN" and "INFINITY".....

HELP! I have a report that is calculating properly 99% of the time, but a small amount of the data is coming up "NAN" and "INFINITY". Any suggestions? This is baffling me!!

You may want to read the following blog article: http://blogs.msdn.com/bwelcker/archive/2006/09/26/End-of-Amnesia-_2800_Avoiding-Divide-By-Zero-Errors_2900_.aspx

Besides the solutions described in the article, you can also write custom code functions using IF-ELSE-END IF VB statements to avoid division by zero.

-- Robert
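If changing the report is inconvenient, the divide-by-zero can also be guarded in the dataset query itself; a hedged T-SQL sketch (the table and column names are illustrative):

-- NULLIF turns a zero denominator into NULL, so the division yields NULL instead
-- of NaN/Infinity; ISNULL then maps that back to 0 for display.
SELECT ISNULL(SalesAmount / NULLIF(OrderQty, 0), 0) AS AvgUnitPrice
FROM dbo.SalesFigures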

|||Thanks! :)

Saturday, February 25, 2012

"context connection" and MultipleActiveResultSets ...Can have both at the same time?

Hi,

When I enable MultipleActiveResultSets in the "context connection" (SqlConnection), I get an error:

System.InvalidOperationException: The only additional connection string keyword that may be used when requesting the context connection is the Type System Version keyword.

Can we have MARS in the "context connection"?

note: I'm doing this to support multiple open datareaders in a CLR stored procedure.

Thanks!

Andy

While I could not find explicit documentation on WHY, from my tests you CANNOT leverage MARS for in-process SQL Server connections. Since you cannot append MARS= to the connection string, I simply tried to use the feature, hoping context connections would allow it...

I ran this code:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void usp_MARS()
    {
        string strSQLGetOrder = "Select * from Sales.SalesOrderDetail WHERE SalesOrderID = 43659";
        string strSQLUpdateInv = "UPDATE Production.ProductInventory SET Quantity=Quantity-@amt WHERE (ProductID=@pid)";
        SqlConnection marsConnection = new SqlConnection("context connection=true");
        marsConnection.Open();
        SqlCommand readCommand = new SqlCommand(strSQLGetOrder, marsConnection);
        SqlCommand writeCommand = new SqlCommand(strSQLUpdateInv, marsConnection);
        writeCommand.Parameters.Add("@amt", SqlDbType.Int);
        writeCommand.Parameters.Add("@pid", SqlDbType.Int);
        using (SqlDataReader rdr = readCommand.ExecuteReader())
        {
            while (rdr.Read())
            {
                writeCommand.Parameters["@amt"].Value = rdr["OrderQty"];
                writeCommand.Parameters["@pid"].Value = rdr["ProductID"];
                writeCommand.ExecuteNonQuery();
            }
        }
        marsConnection.Close();
    }
};

And I receive the following runtime error on execute of the proc:

Msg 6522, Level 16, State 1, Procedure usp_MARS, Line 0
A .NET Framework error occurred during execution of user defined routine or aggregate 'usp_MARS':
System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first.
System.InvalidOperationException:
   at System.Data.SqlClient.SqlInternalConnectionSmi.ValidateConnectionForExecute(SqlCommand command)
   at System.Data.SqlClient.SqlConnection.ValidateConnectionForExecute(String method, SqlCommand command)
   at System.Data.SqlClient.SqlCommand.ValidateCommand(String method, Boolean async)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at StoredProcedures.usp_MARS()

|||

MARS is not supported for server-side CLR code. However, you can use cursors to simulate similar kinds of behavior in some circumstances. See the ResultSet sample for a fairly painless way to do that. The latest samples MSI is located at http://msdn.microsoft.com/sql/downloads/samples/default.aspx. After you install the Samples MSI, by default you'll find the ResultSet sample at drive:\Program Files\Microsoft SQL Server\90\Samples\Engine\Programmability\CLR\ResultSet. There is a readme file in that folder which explains how to compile the CLR code, create the database objects, and run the sample.

|||

Hi Andy!

MARS is not available with context connections in SQL Server 2005.

Of course, you can always use an out-of-process connection (where MARS can be enabled) back to your own server from the CLR procedure.

|||Thanks!

"Can't find PInvoke DLL 'dbnetlib.dll'."

Every time I open the connection from my Windows CE 4.2 device to SQL Server I get the error "Can't find PInvoke DLL 'dbnetlib.dll'."

My OS is MS CE 4.2, and I've installed Compact Framework 2.0 and the SQL client. It worked fine on CE 5.0 but not on CE 4.2. My application is working correctly, but when I start the connection I receive this error.

Any idea?

thanks, Jan

If you indeed have SQL Client installed properly, I would guess some dependencies required for it to function are missing from the OS image. Contact device OEM for help.

Monday, February 13, 2012

%Disk Time value over 100

Hello!
I have collected counters using perfmon from our production SQL Server. I
have noticed that the %Disk Time counter is averaging 111, with a maximum value of
2249, for a particular drive. I was wondering how I can detect whether this value
indicates a bottleneck. The drive in question is RAID 10. Is there a conversion
formula I should use?
I know that a value over 60 indicates a potential problem. Average Disk Queue
Length is 1, which tells me the disk is OK. I am confused by the high %Disk
Time value.
Any advice is appreciated,
Igor
Use 100 - % Idle Time to get your utilization.
"imarchenko" wrote:

> Hello!
> I have collected counters using perfmon from our Production SQL Server. I
> have noticed that %Disk Time counter averaging 111 with maximum value being
> 2249 for particular drive. I was wondering how I detect whether this value
> indicates a bottleneck. Drive in question is RAID 10. Is there conversion
> formula I should use?
> I know that value over 60 indicates a potential problem. Average Disk Queue
> lenght is 1 which means to me that disk is OK. I am confused by high %Disk
> Time value.
>
> Any advice is appreciated,
> Igor
>
>
|||%Disk time is a useless counter on high-end IO systems. It was designed for
sequential command IO systems, not current SCSI systems that support Command
Tag Queuing or the new SATA Native Command Queuing options. These are the
device-level commands that support scatter-gather IO. Short description is
that they full-duplex command and response for IO. The device can queue a
large number of IO requests, sort them optimally, and respond to them
asynchronously and out of sequence. RAID subsystems further reduced the
effectiveness of this counter by abstracting a large number of physical
devices into one logical device presented to the OS.
I use Physical disk | Disk Read Bytes/sec, Disk Write Bytes/sec, Disk
Reads/sec, Disk Writes/sec, and Disk Queue Length to determine if I am IO
bound. Of course, you need to know the maximum capabilities of your IO
subsystem to use these numbers effectively. But since you benchmarked the
IO system with IOMeter when you built the server that is an easy comparison.

Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:eKaIHNaqFHA.4044@.TK2MSFTNGP09.phx.gbl...
> Hello!
> I have collected counters using perfmon from our Production SQL Server.
> I have noticed that %Disk Time counter averaging 111 with maximum value
> being 2249 for particular drive. I was wondering how I detect whether this
> value indicates a bottleneck. Drive in question is RAID 10. Is there
> conversion formula I should use?
> I know that value over 60 indicates a potential problem. Average Disk
> Queue lenght is 1 which means to me that disk is OK. I am confused by high
> %Disk Time value.
>
> Any advice is appreciated,
> Igor
>
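As a complement to the PhysicalDisk counters Geoff lists, SQL Server itself tracks I/O per database file through the fn_virtualfilestats function; a hedged sketch (SQL Server 2005 syntax shown, and the database name is illustrative):

-- Reads, writes, bytes and I/O stall time for every file of one database.
SELECT DB_NAME(DbId) AS database_name,
       FileId,
       NumberReads,
       NumberWrites,
       BytesRead,
       BytesWritten,
       IoStallMS          -- total milliseconds spent waiting on I/O for this file
FROM ::fn_virtualfilestats(DB_ID('MyDatabase'), -1)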
|||Jeffrey,
Thanks a lot!
Igor
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:22B02C35-4D78-47A1-B273-88ED2F5EDACA@.microsoft.com...[vbcol=seagreen]
> Used 100 - %disk idle time to get your utilization.
> "imarchenko" wrote:
|||Thanks, Geoff. I really appreciate your elaborate reply.
Igor
"Geoff N. Hiten" <sqlcraftsman@.gmail.com> wrote in message
news:eSP65saqFHA.272@.TK2MSFTNGP15.phx.gbl...
> %Disk time is a useless counter on high-end IO systems. It was designed
> for sequential command IO systems, not current SCSI systems that support
> Command Tag Queuing or the new SATA Native Command Queuing options. These
> are the device-level commands that support scatter-gather IO. Short
> description is that they full-duplex command and response for IO. The
> device can queue a large number of IO requests, sort them optimally, and
> respond to them asychronously and asequentially. RAID subsystems further
> reduced the effectiveness of this counter buy abstracting a large number
> of physical devices into one logical device presented to the OS..
> I use Physical disk | Disk Read Bytes/sec, Disk Write Bytes/sec, Disk
> Reads/sec, Disk Writes/sec, and Disk Queue Length to determine if I am IO
> bound. Of course, you need to know the maximum capabilities of your IO
> subsystem to use these numbers effectively. But since you benchmarked the
> IO system with IOMeter when you built the server that is an easy
> comparison.
> --
> Geoff N. Hiten
> Senior Database Administrator
> Microsoft SQL Server MVP
>
> "imarchenko" <igormarchenko@.hotmail.com> wrote in message
> news:eKaIHNaqFHA.4044@.TK2MSFTNGP09.phx.gbl...
>

%Disk Time value over 100

Hello!
I have collected counters using perfmon from our Production SQL Server. I
have noticed that %Disk Time counter averaging 111 with maximum value being
2249 for particular drive. I was wondering how I detect whether this value
indicates a bottleneck. Drive in question is RAID 10. Is there conversion
formula I should use?
I know that value over 60 indicates a potential problem. Average Disk Queue
lenght is 1 which means to me that disk is OK. I am confused by high %Disk
Time value.
Any advice is appreciated,
IgorUsed 100 - %disk idle time to get your utilization.
"imarchenko" wrote:

> Hello!
> I have collected counters using perfmon from our Production SQL Server.
I
> have noticed that %Disk Time counter averaging 111 with maximum value bein
g
> 2249 for particular drive. I was wondering how I detect whether this value
> indicates a bottleneck. Drive in question is RAID 10. Is there conversion
> formula I should use?
> I know that value over 60 indicates a potential problem. Average Disk Queu
e
> lenght is 1 which means to me that disk is OK. I am confused by high %Disk
> Time value.
>
> Any advice is appreciated,
> Igor
>
>|||%Disk time is a useless counter on high-end IO systems. It was designed for
sequential command IO systems, not current SCSI systems that support Command
Tag Queuing or the new SATA Native Command Queuing options. These are the
device-level commands that support scatter-gather IO. Short description is
that they full-duplex command and response for IO. The device can queue a
large number of IO requests, sort them optimally, and respond to them
asychronously and asequentially. RAID subsystems further reduced the
effectiveness of this counter buy abstracting a large number of physical
devices into one logical device presented to the OS..
I use Physical disk | Disk Read Bytes/sec, Disk Write Bytes/sec, Disk
Reads/sec, Disk Writes/sec, and Disk Queue Length to determine if I am IO
bound. Of course, you need to know the maximum capabilities of your IO
subsystem to use these numbers effectively. But since you benchmarked the
IO system with IOMeter when you built the server that is an easy comparison.

Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:eKaIHNaqFHA.4044@.TK2MSFTNGP09.phx.gbl...
> Hello!
> I have collected counters using perfmon from our Production SQL Server.
> I have noticed that %Disk Time counter averaging 111 with maximum value
> being 2249 for particular drive. I was wondering how I detect whether this
> value indicates a bottleneck. Drive in question is RAID 10. Is there
> conversion formula I should use?
> I know that value over 60 indicates a potential problem. Average Disk
> Queue lenght is 1 which means to me that disk is OK. I am confused by high
> %Disk Time value.
>
> Any advice is appreciated,
> Igor
>

%Disk Time value over 100

Hello!
I have collected counters using perfmon from our Production SQL Server. I
have noticed that %Disk Time counter averaging 111 with maximum value being
2249 for particular drive. I was wondering how I detect whether this value
indicates a bottleneck. Drive in question is RAID 10. Is there conversion
formula I should use?
I know that value over 60 indicates a potential problem. Average Disk Queue
lenght is 1 which means to me that disk is OK. I am confused by high %Disk
Time value.
Any advice is appreciated,
IgorUsed 100 - %disk idle time to get your utilization.
"imarchenko" wrote:
> Hello!
> I have collected counters using perfmon from our Production SQL Server. I
> have noticed that %Disk Time counter averaging 111 with maximum value being
> 2249 for particular drive. I was wondering how I detect whether this value
> indicates a bottleneck. Drive in question is RAID 10. Is there conversion
> formula I should use?
> I know that value over 60 indicates a potential problem. Average Disk Queue
> lenght is 1 which means to me that disk is OK. I am confused by high %Disk
> Time value.
>
> Any advice is appreciated,
> Igor
>
>|||%Disk time is a useless counter on high-end IO systems. It was designed for
sequential command IO systems, not current SCSI systems that support Command
Tag Queuing or the new SATA Native Command Queuing options. These are the
device-level commands that support scatter-gather IO. Short description is
that they full-duplex command and response for IO. The device can queue a
large number of IO requests, sort them optimally, and respond to them
asychronously and asequentially. RAID subsystems further reduced the
effectiveness of this counter buy abstracting a large number of physical
devices into one logical device presented to the OS..
I use Physical disk | Disk Read Bytes/sec, Disk Write Bytes/sec, Disk
Reads/sec, Disk Writes/sec, and Disk Queue Length to determine if I am IO
bound. Of course, you need to know the maximum capabilities of your IO
subsystem to use these numbers effectively. But since you benchmarked the
IO system with IOMeter when you built the server that is an easy comparison.
:)
--
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
"imarchenko" <igormarchenko@.hotmail.com> wrote in message
news:eKaIHNaqFHA.4044@.TK2MSFTNGP09.phx.gbl...
> Hello!
> I have collected counters using perfmon from our Production SQL Server.
> I have noticed that %Disk Time counter averaging 111 with maximum value
> being 2249 for particular drive. I was wondering how I detect whether this
> value indicates a bottleneck. Drive in question is RAID 10. Is there
> conversion formula I should use?
> I know that value over 60 indicates a potential problem. Average Disk
> Queue lenght is 1 which means to me that disk is OK. I am confused by high
> %Disk Time value.
>
> Any advice is appreciated,
> Igor
>|||Jeffrey,
Thanks a lot!
Igor
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:22B02C35-4D78-47A1-B273-88ED2F5EDACA@.microsoft.com...
> Used 100 - %disk idle time to get your utilization.
> "imarchenko" wrote:
>> Hello!
>> I have collected counters using perfmon from our Production SQL
>> Server. I
>> have noticed that %Disk Time counter averaging 111 with maximum value
>> being
>> 2249 for particular drive. I was wondering how I detect whether this
>> value
>> indicates a bottleneck. Drive in question is RAID 10. Is there conversion
>> formula I should use?
>> I know that value over 60 indicates a potential problem. Average Disk
>> Queue
>> lenght is 1 which means to me that disk is OK. I am confused by high
>> %Disk
>> Time value.
>>
>> Any advice is appreciated,
>> Igor
>>|||Thanks, Geoff. I really appreciate your elaborate reply.
Igor
"Geoff N. Hiten" <sqlcraftsman@.gmail.com> wrote in message
news:eSP65saqFHA.272@.TK2MSFTNGP15.phx.gbl...
> %Disk time is a useless counter on high-end IO systems. It was designed
> for sequential command IO systems, not current SCSI systems that support
> Command Tag Queuing or the new SATA Native Command Queuing options. These
> are the device-level commands that support scatter-gather IO. Short
> description is that they full-duplex command and response for IO. The
> device can queue a large number of IO requests, sort them optimally, and
> respond to them asynchronously and out of order. RAID subsystems further
> reduced the effectiveness of this counter by abstracting a large number
> of physical devices into one logical device presented to the OS.
> I use Physical disk | Disk Read Bytes/sec, Disk Write Bytes/sec, Disk
> Reads/sec, Disk Writes/sec, and Disk Queue Length to determine if I am IO
> bound. Of course, you need to know the maximum capabilities of your IO
> subsystem to use these numbers effectively. But since you benchmarked the
> IO system with IOMeter when you built the server that is an easy
> comparison. :)
> --
> Geoff N. Hiten
> Senior Database Administrator
> Microsoft SQL Server MVP
>
> "imarchenko" <igormarchenko@.hotmail.com> wrote in message
> news:eKaIHNaqFHA.4044@.TK2MSFTNGP09.phx.gbl...
>> Hello!
>> I have collected counters using perfmon from our Production SQL Server.
>> I have noticed that %Disk Time counter averaging 111 with maximum value
>> being 2249 for particular drive. I was wondering how I detect whether
>> this value indicates a bottleneck. Drive in question is RAID 10. Is there
>> conversion formula I should use?
>> I know that value over 60 indicates a potential problem. Average Disk
>> Queue length is 1 which means to me that disk is OK. I am confused by
>> high %Disk Time value.
>>
>> Any advice is appreciated,
>> Igor
>>
>
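To put a number on the "100 - %disk idle time" tip quoted above, the calculation is a one-liner; the idle figure below is invented:
DECLARE @PctDiskIdleTime float
SET @PctDiskIdleTime = 37.5    -- average % Idle Time from the perfmon log (hypothetical)
SELECT 100.0 - @PctDiskIdleTime AS ApproxPctDiskUtilization
-- unlike % Disk Time, this figure cannot exceed 100, which makes it easier to interpret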

%disk time > 2500

Is a %disk time > 2500 bad? What does that mean? My pages/sec seems to be
low - around 0 with a few spikes here and there - so why is the %disk time
high?
Thanks|||Hi,
%disk time > 2500 - are you saying that your disk is running at 2500%? If
you are, I want one of those disks :)
Can you tell me if you are using a software RAID 5 config? If so, that might
explain the numbers. If you are using software RAID, try dividing this
number by the number of disks. Although, thinking about it a bit more, I doubt
software RAID would explain the figure.
Basically, PhysicalDisk: % Disk Time shows how busy each disk is. A long period
of busy time, say over 50% for 10 minutes, indicates a problem (too many
writes/reads, or a slow disk), so a figure of 2500 is really saying something.
You should look at other counters like % Disk Read Time, % Disk Write Time and
Avg. Disk Queue Length to see where the issues are. If this activity is not
caused by paging then you have a very, very busy machine.
I hope this helps
regards
Greg O MCSD
http://www.ag-software.com/ags_scribe_index.asp. SQL Scribe Documentation
Builder, the quickest way to document your database
http://www.ag-software.com/ags_SSEPE_index.asp. AGS SQL Server Extended
Property Extended properties manager for SQL 2000
http://www.ag-software.com/IconExtractionProgram.asp. Free icon extraction
program
http://www.ag-software.com. Free programming tools
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:O30Ol8LWDHA.392@.TK2MSFTNGP11.phx.gbl...
> Is the %disk time > 2500 bad ? What does that mean ? My pages/sec seem to
be
> low ..around 0 with a few spikes here and there..so why is the %disk time
> high ?
> Thanks
>
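If the %Disk Time samples have been logged to a table, a rough check for Greg's "over 50% for 10 minutes" condition might look like the sketch below; the table, its columns and the 15-second sample interval are all assumptions made for the example:
CREATE TABLE dbo.DiskTimeSample    -- invented table for this sketch
(
    SampleTime  datetime NOT NULL,
    PctDiskTime float    NOT NULL
)
SELECT COUNT(*) AS SamplesOver50
FROM dbo.DiskTimeSample
WHERE SampleTime >= DATEADD(minute, -10, GETDATE())
  AND PctDiskTime > 50.0
-- at a 15-second sample interval, 40 qualifying rows mean the disk stayed over 50% busy for the whole window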

% Disk Time counter

I'm monitoring our database server and the "% Disk Time" counter in the
"Physical Disk" object is confusing me. Over a 24 hour period I'm getting
Min = 0.15, Max = 20269 and Average = 136. I am monitoring the database
drive itself not transaction logs and the database is on a RAID 10 SAN. Can
anybody shed any light on how I should read these figures?
thanks
Gav|||% Disk Time is more or less meaningless. It's just (average disk queue
length) * 100. Better to monitor the queue length itself, and correlate
that number with counters like Avg Seconds/Read and Avg Seconds/Write. A
queue of 202 may indicate a problem, but you really need those average
counters to find out how bad things are really getting.
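Working that backwards with Gav's own figures, plus a pair of invented latency readings (the 20 ms threshold is a common rule of thumb, not a hard limit):
DECLARE @AvgPctDiskTime float, @MaxPctDiskTime float
SET @AvgPctDiskTime = 136      -- 24-hour average from the post above
SET @MaxPctDiskTime = 20269    -- maximum from the post above
SELECT @AvgPctDiskTime / 100.0 AS AvgDiskQueueLength,   -- about 1.4
       @MaxPctDiskTime / 100.0 AS PeakDiskQueueLength   -- just over 202, the spike mentioned above

DECLARE @AvgSecPerRead float, @AvgSecPerWrite float
SET @AvgSecPerRead  = 0.012    -- Avg. Disk sec/Read (hypothetical)
SET @AvgSecPerWrite = 0.004    -- Avg. Disk sec/Write (hypothetical)
SELECT CASE WHEN @AvgSecPerRead > 0.020 OR @AvgSecPerWrite > 0.020
            THEN 'Latency suggests the queue spikes are hurting'
            ELSE 'Latency looks tolerable despite the spikes'
       END AS RoughVerdict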
Adam Machanic
SQL Server MVP - http://sqlblog.com
Author, "Expert SQL Server 2005 Development"
http://www.apress.com/book/bookDisplay.html?bID=10220
"Gav" <gav@.nospam.com> wrote in message
news:uTHfuPy8HHA.1164@.TK2MSFTNGP02.phx.gbl...
> I'm monitoring our database server and the "% Disk Time" counter in the
> "Physical Disk" object is confusing me. Over a 24 hour period I'm getting
> Min = 0.15, Max = 20269 and Average = 136. I am monitoring the database
> drive itself not transaction logs and the database is on a RAID 10 SAN.
> Can anybody shed any light on how I should read these figures?
> thanks
> Gav
>|||Also:
For the calculation, you need to divide the result by the number of physical
disks behind the volume (if you selected the "_Total" instance instead of a
specific disk). For example, if there are 8 disks on your SAN then you need to
divide the result by 8 to get a meaningful per-disk figure.
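With Gav's 24-hour average of 136 and an assumed 8-spindle set behind the drive, the adjustment is simply:
DECLARE @PctDiskTimeTotal float, @PhysicalDiskCount int
SET @PctDiskTimeTotal = 136    -- 24-hour average % Disk Time from the post above
SET @PhysicalDiskCount = 8     -- number of spindles behind the volume (assumed)
SELECT @PctDiskTimeTotal / @PhysicalDiskCount AS ApproxPctDiskTimePerDisk   -- 17 per disk here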
--
Ekrem Önsoy
"Gav" <gav@.nospam.com> wrote in message
news:uTHfuPy8HHA.1164@.TK2MSFTNGP02.phx.gbl...
> I'm monitoring our database server and the "% Disk Time" counter in the
> "Physical Disk" object is confusing me. Over a 24 hour period I'm getting
> Min = 0.15, Max = 20269 and Average = 136. I am monitoring the database
> drive itself not transaction logs and the database is on a RAID 10 SAN.
> Can anybody shed any light on how I should read these figures?
> thanks
> Gav
>

% Disk Time (Perfmon)

I have a Windows 2000 Advanced Server with SQL Server 2000
Enterprise Edition on a SAN. What units are the %Disk
Time measured in from Perfmon?
Is the (% Disk Time E) = (Disk Time E) / (Total % Disk
Time) ?
Please help me with these questions.
Thanks,
Mike|||Mike
This counter measures how busy a physical array is. In general it should be
less than 55%; otherwise you probably have an IO bottleneck.
"Mike" <anonymous@.discussions.microsoft.com> wrote in message
news:c13201c47a1b$4db045c0$a401280a@.phx.gbl...
> I have a Windows 2000 Advanced Server with SQL Server 2000
> Enterprise Edition on a SAN. What units are the %Disk
> Time measured in from Perfmon?
> Is the (% Disk Time E) = (Disk Time E) / (Total % Disk
> Time) ?
> Please help me with these questions.
> Thanks,
> Mike|||I don't tend to agree with that advice.
On its own,
<<
%Disk Time
>>
is not a very reliable indicator of an IO bottleneck. You could easily have
a bottleneck if this counter is much lower than 55%. You might NOT have a
bottleneck if this counter is 55% or higher.
There are many, many more counters at which you should take a look. You need to
take a look at queue length, wait times for disk transfers, and the time it takes
per read and write. In addition, there are a host of other counters
specific to your SAN that you should take a look at.
Unfortunately, I'm running a bit late right now and I don't have time to
write a long message. Tom Davidson from MS has a nice article in SQL Server
Magazine that discusses some of these counters. I believe it might also be on
MSDN. It shouldn't be too hard to track down if you search by his name on
each site.
--
Brian Moran
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:u1rJ0yheEHA.3520@.TK2MSFTNGP10.phx.gbl...
> Mike
> This counter measured how busy a physical array is. In general it should
be
> less than 55% otherwise you probably have an IO bottleneck.
>
>
> "Mike" <anonymous@.discussions.microsoft.com> wrote in message
> news:c13201c47a1b$4db045c0$a401280a@.phx.gbl...
> >
> > I have a Windows 2000 Advanced Server with SQL Server 2000
> > Enterprise Edition on a SAN. What units are the %Disk
> > Time measured in from Perfmon?
> >
> > Is the (% Disk Time E) = (Disk Time E) / (Total % Disk
> > Time) ?
> >
> > Please help me with these questions.
> >
> > Thanks,
> >
> > Mike
>|||Brian
First of all, I said 'probably'.
Secondly, if you have this counter higher than 55% for continuous periods
(let me say 15 minutes) then your SQL Server
may be experiencing an I/O bottleneck.
"Brian Moran" <brian@.solidqualitylearning.com> wrote in message
news:%23r7TWIieEHA.2848@.TK2MSFTNGP10.phx.gbl...
> I don't tend to agree with that advice.
> On its own
> <<
> '%Disk Time
> >>
> Is not a very reliable indicator of an IO bottleneck. You could easily
have
> a bottleneck if this counter is much lower than 55%. You might NOT have a
> bottleneck if this counter is 55% or higher.
> There are many, many more counters which you should take a look. You need
to
> take a look at queue length, wait times for disk transfers, time it takes
> per read and write. In addition, there are a host of other counters
> specific to your SAN that you should take a look at.
>
> Unfortunately, I'm running a bit late right now and I don't have time to
> write a long message. Tom Davidson from MS has a nice article in SQL
Server
> Magazine that discusses some of these counters. I believe it might also be
on
> MSDN. It shouldn't be too hard to track down if you search by his name on
> each site.
> --
> Brian Moran
> "Uri Dimant" <urid@.iscar.co.il> wrote in message
> news:u1rJ0yheEHA.3520@.TK2MSFTNGP10.phx.gbl...
> > Mike
> > This counter measured how busy a physical array is. In general it should
> be
> > less than 55% otherwise you probably have an IO bottleneck.
> >
> >
> >
> >
> >
> > "Mike" <anonymous@.discussions.microsoft.com> wrote in message
> > news:c13201c47a1b$4db045c0$a401280a@.phx.gbl...
> > >
> > > I have a Windows 2000 Advanced Server with SQL Server 2000
> > > Enterprise Edition on a SAN. What units are the %Disk
> > > Time measured in from Perfmon?
> > >
> > > Is the (% Disk Time E) = (Disk Time E) / (Total % Disk
> > > Time) ?
> > >
> > > Please help me with these questions.
> > >
> > > Thanks,
> > >
> > > Mike
> >
> >
>

% Disk Time

I have separate drives for each SQL Server database component:
database file (E:)
database transaction log (K:)
tempdb (M:)
database system files (F:)
My database drive's % Disk Time: (Average = 2541, STDEV = 4316,
Maximum = 86770)
How is % Disk Time measured?
Is the % Disk Time high at 55%?
Thanks,
Mike
|||%Disk time is a bogus number and is useless for performance measurement. It
does not take into account asynchronous I/O requests and cannot tell what the
real performance capability of an underlying RAID set may be. I prefer
using Transfers/sec, Read Bytes/sec and Write Bytes/sec, and Disk Queue
length to measure the performance of my disk subsystems. It is important
to take a few hours and benchmark your system with an I/O stress tool before
going live so you will know what your maximum capacities really are.
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
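If those counters are logged (for example with perfmon's own counter logs) and loaded into a table, summarizing sustained load per drive is straightforward; the table and column names below are invented purely for the sketch:
CREATE TABLE dbo.DiskCounterSample    -- invented table for this sketch
(
    SampleTime       datetime NOT NULL,
    DriveLetter      char(1)  NOT NULL,
    ReadBytesPerSec  float    NOT NULL,
    WriteBytesPerSec float    NOT NULL,
    DiskQueueLength  float    NOT NULL
)
SELECT DriveLetter,
       AVG(ReadBytesPerSec)  AS AvgReadBytesPerSec,
       AVG(WriteBytesPerSec) AS AvgWriteBytesPerSec,
       AVG(DiskQueueLength)  AS AvgQueueLength,
       MAX(DiskQueueLength)  AS PeakQueueLength
FROM dbo.DiskCounterSample
WHERE SampleTime >= DATEADD(hour, -24, GETDATE())
GROUP BY DriveLetter
ORDER BY AvgQueueLength DESC
-- compare the averages against the ceilings you measured with your I/O stress tool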
"Mike" <anonymous@.discussions.microsoft.com> wrote in message
news:192c01c47bea$40735340$a601280a@.phx.gbl...
> I have separate drives each SQL Server database:
> database file (E
> database transaction log (K
> tempdb (M
> database system files (F
> My database % Disk Time: (Average = 2541, STDEV = 4316,
> Maximum = 86770)
> How is the % Disk Time measured?
> The % Disk Time high at 55%?
> Thanks,
>
|||That counter is pretty much useless in my opinion. I have seen this vary
greatly with the different types of hardware used. It's better to use the
Avg and current Disk queues instead. They give a much better view of how
your drives are able to handle the load. By the way, are these drives
Logical or Physical?
Andrew J. Kelly SQL MVP
"Mike" <anonymous@.discussions.microsoft.com> wrote in message
news:192c01c47bea$40735340$a601280a@.phx.gbl...
> I have separate drives each SQL Server database:
> database file (E
> database transaction log (K
> tempdb (M
> database system files (F
> My database % Disk Time: (Average = 2541, STDEV = 4316,
> Maximum = 86770)
> How is the % Disk Time measured?
> The % Disk Time high at 55%?
> Thanks,
>
|||I mostly use Average disk queue length... Any average disk queue length > 2
on a single spindle = bad.
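A common way to apply that single-spindle rule to an array is to divide the measured queue by the number of spindles; both figures below are invented:
DECLARE @AvgDiskQueueLength float, @SpindleCount int
SET @AvgDiskQueueLength = 6.0   -- Avg. Disk Queue Length for the array (hypothetical)
SET @SpindleCount = 4           -- physical disks behind the logical drive (hypothetical)
SELECT CASE WHEN @AvgDiskQueueLength / @SpindleCount > 2.0
            THEN 'Queue per spindle is over 2 - worth investigating'
            ELSE 'Queue per spindle is within the rule of thumb'
       END AS RoughVerdict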
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"Mike" <anonymous@.discussions.microsoft.com> wrote in message
news:192c01c47bea$40735340$a601280a@.phx.gbl...
> I have separate drives each SQL Server database:
> database file (E
> database transaction log (K
> tempdb (M
> database system files (F
> My database % Disk Time: (Average = 2541, STDEV = 4316,
> Maximum = 86770)
> How is the % Disk Time measured?
> The % Disk Time high at 55%?
> Thanks,
>
