Thanks for your reply.
Actually it is a 2209-column file, which I am vertically partitioning into two tables!
Your suggestion is precisely what I have done to create a dummy file of all the character types, maxing out the fields. However, it is difficult to do for the varying types, as I would have to populate those 100 lines manually or with a data generator, and even then, as you say, the discovery would not necessarily be accurate, so I would still have to check them all!
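For the fixed-width character columns, the dummy-file approach can at least be scripted. Here is a minimal sketch of what I mean, assuming a hypothetical spec of (column name, max width) pairs; the column names, widths, and filename are invented examples, not from the real spec:

```python
import csv

# Hypothetical (name, max_width) pairs as they might appear in the spec.
spec = [("col1", 10), ("col2", 255), ("col3", 4)]

def max_width_row(spec):
    # Fill every field with 'X' up to its declared maximum width,
    # so a type-discovery scan sees the worst-case length.
    return ["X" * width for _, width in spec]

with open("dummy.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([name for name, _ in spec])
    # 100 identical max-width lines, mirroring the ~100-row sample scan.
    for _ in range(100):
        writer.writerow(max_width_row(spec))
```

Something like this only covers the character columns, of course; the varying/numeric types would still need hand-crafted or generated values.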
It's a case of weighing that effort against the performance gain of the switch operation... and I'm sure it's worth it, but I really am not looking forward to doing it!
I wonder if some sort of "use target data types" option could be added in future versions. I imagine I would use it for nearly every text file import I do, since I define the table from the provided spec before I do any transformations.