Microsoft SQL Server tables should never contain duplicate rows or non-unique primary keys. For brevity, this article sometimes refers to a primary key as "key" or "PK," but this always denotes "primary key." Duplicate PKs violate entity integrity and should be disallowed in a relational system. SQL Server has various mechanisms for enforcing entity integrity, including indexes, UNIQUE constraints, PRIMARY KEY constraints, and triggers.
Despite this, under unusual circumstances duplicate primary keys may occur, and if so they must be eliminated. One way they can occur is if duplicate PKs exist in non-relational data outside SQL Server, and the data is imported while PK uniqueness is not being enforced. Another way they can occur is through a database design error, such as not enforcing entity integrity on each table.
Duplicate PKs are often noticed when you attempt to create a unique index, which aborts if duplicate keys are found. The message is:
Msg 1505, Level 16, State 1 Create unique index aborted on duplicate key.
If you are using SQL Server 2000 or SQL Server 2005, you may receive the following error message:
Msg 1505, Level 16, State 1 CREATE UNIQUE INDEX terminated because a duplicate key was found for object name '%.*ls' and index name '%.*ls'. The duplicate key value is %ls.
This article discusses how to locate and remove duplicate primary keys from a table. However, you should closely examine the process which allowed the duplicates to happen in order to prevent a recurrence.
MORE INFORMATION
For this example, we will use the following table with duplicate PK values. In this table the primary key is the two columns (col1, col2). We cannot create a unique index or PRIMARY KEY constraint since two rows have duplicate PKs. This procedure illustrates how to identify and remove the duplicates.
create table t1(col1 int, col2 int, col3 char(50))
insert into t1 values (1, 1, 'data value one')
insert into t1 values (1, 1, 'data value one')
insert into t1 values (1, 2, 'data value two')
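To see the error in context, you can attempt to create the unique index on this table. The following statement is a sketch, and the index name uq_t1_pk is illustrative:

create unique index uq_t1_pk on t1 (col1, col2)

This attempt fails with Msg 1505 because two rows share the key value (1, 1).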
The first step is to identify which rows have duplicate primary key values:
SELECT col1, col2, count(*)
FROM t1
GROUP BY col1, col2
HAVING count(*) > 1
This will return one row for each set of duplicate PK values in the table. The last column in this result is the number of duplicates for the particular PK value.
col1        col2
----------- ----------- -----------
1           1           2
If there are only a few sets of duplicate PK values, the best procedure is to delete these manually on an individual basis. For example:
set rowcount 1                 -- limit the next delete to a single row
delete from t1
where col1=1 and col2=1
set rowcount 0                 -- reset so later statements are unaffected
The rowcount value should be set to n-1, where n is the number of duplicates for the given key value. In this example there are 2 duplicates, so rowcount is set to 1. The col1/col2 values are taken from the GROUP BY query result above. If the GROUP BY query returns multiple rows, the delete will have to be run once for each of those rows, each time setting rowcount to n-1 for the particular PK value.
Before deleting the rows, you should verify that the entire row is duplicate. While unlikely, it is possible that the PK values are duplicate, yet the row as a whole is not. An example would be a table with Social Security Number as the primary key, where two different people (or rows) have the same number, each with unique attributes. In such a case, whatever malfunction caused the duplicate key may have also caused valid unique data to be placed in the row. This data should be copied out and preserved for study and possible reconciliation before the rows are deleted.
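If you have verified that the duplicates are complete row duplicates and you are running SQL Server 2005, the per-key deletes can also be collapsed into a single statement. The following sketch, which is not part of the original procedure, uses ROW_NUMBER() to keep one arbitrary row per key:

with numbered as
(
    -- number the rows within each (col1, col2) key value
    select row_number() over (partition by col1, col2 order by col1) as rn
    from t1
)
delete from numbered where rn > 1

Deleting through the common table expression removes the extra rows from the underlying t1 table.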
If there are many distinct sets of duplicate PK values in the table, it may be too time-consuming to remove them individually. In this case the following procedure can be used:
1. First, run the above GROUP BY query to determine how many sets of duplicate PK values exist, and the count of duplicates for each set.
2. Select the duplicate key values into a holding table. For example:
SELECT col1, col2, dupcount=count(*)
INTO holdkey
FROM t1
GROUP BY col1, col2
HAVING count(*) > 1
3. Select the duplicate rows into a holding table, eliminating duplicates in the process. For example:
SELECT DISTINCT t1.*
INTO holddups
FROM t1, holdkey
WHERE t1.col1 = holdkey.col1
AND t1.col2 = holdkey.col2
4. At this point, the holddups table should have unique PKs. However, this will not be the case if t1 had duplicate PKs yet unique rows (as in the SSN example above). Verify that each key in holddups is unique and that you do not have duplicate keys with unique rows. If you do, you must stop here and reconcile which of the rows you wish to keep for a given duplicate key value. For example, the query:
SELECT col1, col2, count(*)
FROM holddups
GROUP BY col1, col2
should return a count of 1 for each row. If it does, proceed to step 5 below. If it does not, you have duplicate keys yet unique rows, and you need to decide which rows to save. This will usually entail either discarding a row or creating a new unique key value for it. Take one of these two steps for each such duplicate PK in the holddups table.
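Equivalently, the following query (an addition to the original procedure) returns only the problem keys, so an empty result means it is safe to proceed:

SELECT col1, col2, count(*)
FROM holddups
GROUP BY col1, col2
HAVING count(*) > 1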
5. Delete the duplicate rows from the original table. For example:
DELETE t1
FROM t1, holdkey
WHERE t1.col1 = holdkey.col1
AND t1.col2 = holdkey.col2
6. Put the unique rows back in the original table. For example:
INSERT t1 SELECT * FROM holddups
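As a final step, which is not part of the original procedure, you may want to drop the holding tables and enforce entity integrity so the duplicates cannot recur. The index name uq_t1_pk is illustrative:

DROP TABLE holdkey
DROP TABLE holddups
CREATE UNIQUE INDEX uq_t1_pk ON t1 (col1, col2)

A PRIMARY KEY constraint could be used instead, but only after col1 and col2 are made NOT NULL; the example table above declares them as nullable.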
APPLIES TO
* Microsoft SQL Server 6.0 Standard Edition
* Microsoft SQL Server 6.5 Standard Edition
* Microsoft SQL Server 7.0 Standard Edition
* Microsoft SQL Server 2000 Standard Edition
* Microsoft SQL Server 2005 Standard Edition
* Microsoft SQL Server 2005 Express Edition
* Microsoft SQL Server 2005 Developer Edition
* Microsoft SQL Server 2005 Enterprise Edition
* Microsoft SQL Server 2005 Workgroup Edition