Dave Peterson
 

I think you'll find that deleting a bunch of rows from an autofiltered range goes
much more smoothly if your data is sorted so the rows to be deleted are grouped
together.

If you need to keep your data in its current order, add another helper column
and put the row number in each cell of that column.

I use
=ROW()
then copy down, followed by
Edit|Copy
Edit|Paste Special|Values
to convert the formulas to static numbers.

Then I add the duplicate-marking formula in another column. Then I convert that
column to values, too (just to make things quicker).
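
For the duplicate-marking formula itself, here's a minimal sketch, assuming the
values you're checking are in column A starting at row 2 (adjust the references
to match your own layout):

=IF(COUNTIF($A$2:A2,A2)>1,"Dup","")

Copy it down the column. Because the start of the range is anchored with $
signs, each cell counts only from the top of the list down to its own row, so
the first occurrence of a value stays blank and only the repeats get flagged.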

Then I sort by that duplicate-indicator column and delete the duplicates. Then
I remove the filter, sort by the helper (row number) column to restore the
original order, and finally delete that helper column.
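
And for the 0/1-or-greater marking you describe below, a running count does it.
A minimal sketch, again assuming the addresses are in column A starting at row
2:

=COUNTIF($A$2:A2,A2)-1

(Adjust $A$2 and A2 to wherever your address column actually starts.) The first
occurrence of each address evaluates to 0 and each later occurrence to 1 or
more, so you can convert to values, autofilter for anything greater than 0, and
delete those rows.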

bob z wrote:

While this is helpful info, with a file of 23,000 names and addresses, there
were well over a thousand dups that showed up, some with 3 or more dups.
Since all of the duplicate data, including the first occurrence, had
"Duplicate" in the next column, I used autofilter to list all the dups, but
I still had to delete the actual dups manually. Also, in Excel 2000 the
method mentioned at cpearson (adding "duplicate" to an adjacent column so you
could autofilter the results) caused my computer to crash repeatedly until I
did smaller sections of data at a time.

What I would like to know is: how do you leave the first occurrence of the
data and mark only the dups (trips, etc.)? The suggestions in this forum mark
all occurrences of the duplicate data. I would like the first occurrence of the
same address to be a 0 and each succeeding occurrence to be a 1 or greater, so
that I can autofilter and then delete everything over 0, rather than having to
delete the rows myself.

" wrote:

See:

http://www.cpearson.com/excel/duplicat.htm



--

Dave Peterson