Posted to microsoft.public.excel.programming
Given some very large .csv files (200,000+ rows x 200 columns), how would one optimize the deletion of a series of disjoint columns? For example, what would be the best way to delete columns 1, 3, 8, 9, 10, 23, 24, 67, 89, and 95 from a table with 200,000 rows?

Currently, I read the .csv file into Excel, create a table, and set

    Application.ScreenUpdating = False
    Application.DisplayAlerts = False

Then I call table.ListColumns(columnName).Delete for each columnName I want to delete, and finally restore

    Application.ScreenUpdating = True
    Application.DisplayAlerts = True

However, .Delete is an expensive operation and takes about 15-20 seconds to complete for each column. Is there a better way to delete a series of disjoint columns?

Any help is appreciated.

Tom
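The loop described above, and one commonly suggested alternative (deleting all the columns in a single call so the sheet reflows once instead of once per column), might look like the following VBA sketch. The table reference and column names here are hypothetical placeholders, and the one-shot variant assumes nothing else occupies those sheet columns:

```vba
' Sketch of the approach described in the post: one Delete per column.
' Each call reflows the entire 200,000-row table, hence the cost per column.
Sub DeleteColumnsOneByOne()
    Dim tbl As ListObject
    Set tbl = ActiveSheet.ListObjects(1)       ' assumes the imported table is first on the sheet
    Dim colNames As Variant
    colNames = Array("Column3", "Column8", "Column9")  ' hypothetical column names

    Application.ScreenUpdating = False
    Application.DisplayAlerts = False
    Dim n As Variant
    For Each n In colNames
        tbl.ListColumns(n).Delete
    Next n
    Application.ScreenUpdating = True
    Application.DisplayAlerts = True
End Sub

' One-shot variant: Union the columns' ranges and delete them together,
' so Excel recalculates and reflows only once. Uses EntireColumn, which
' assumes the table's columns share the sheet columns with nothing else.
Sub DeleteColumnsAtOnce()
    Dim tbl As ListObject
    Set tbl = ActiveSheet.ListObjects(1)
    Dim colNames As Variant
    colNames = Array("Column3", "Column8", "Column9")

    Dim toDelete As Range, n As Variant
    For Each n In colNames
        If toDelete Is Nothing Then
            Set toDelete = tbl.ListColumns(n).Range.EntireColumn
        Else
            Set toDelete = Union(toDelete, tbl.ListColumns(n).Range.EntireColumn)
        End If
    Next n

    Application.ScreenUpdating = False
    toDelete.Delete
    Application.ScreenUpdating = True
End Sub
```

This is a sketch, not a benchmarked answer; whether the one-shot Union delete is actually faster on a 200,000-row table would need to be measured on the workbook in question.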