Maybe, but it's much simpler to save a copy of your file before importing, then just replace your current file with the older backup.
If you don't want to do that, here's what you need to consider to "roll back" to the state you had before importing:
Immediately after the import, the newly imported records form the current found set on the layout where you imported the data, so Delete Found Set can remove every record the import just created. However, if you enabled auto-enter and have an auto-entered serial number field, that field's "next serial value" setting will still be one greater than the value assigned to the last imported record. In most database designs this is harmless and you can leave it be. If you do want to reset the next serial value, you'll need to figure out what it was before the import and reset it.
You can do both of these things manually or in a script.
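In script form, the two steps above might look something like this. This is only a sketch: Orders::ID is a placeholder for your own auto-entered serial field, and it assumes the imported records are still the current found set.

```
# Run immediately after Import Records, while the imported
# records are still the found set.
If [ Get ( FoundCount ) > 0 ]
    # Capture the first imported serial so the sequence can be reset
    Sort Records [ Restore ; With dialog: Off ]   # ascending by Orders::ID
    Go to Record/Request/Page [ First ]
    Set Variable [ $firstSerial ; Value: Orders::ID ]
    # Delete All Records acts on the found set, so only the
    # imported records are removed
    Delete All Records [ With dialog: Off ]
    # Put the auto-enter sequence back where it was before the import
    Set Next Serial Value [ Orders::ID ; $firstSerial ]
End If
```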
And if by some evil chance you have already done something that changed the found set, so that it no longer contains the most recently imported records, you may still be able to identify them, but it will depend on the import you performed and on the fields in your table. If you show all records with them unsorted, the most recently imported records will be listed at the end of the found set. If you can then identify the first record of this set of imported data, you can omit all records from it to the end of the found set, choose Show Omitted Only, and you have them back in a found set where you can delete them.
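That recovery can also be scripted. A rough sketch, assuming the records are unsorted, the imported records sit at the end of the record order, and the script runs after you have navigated to the first imported record:

```
Show All Records
# ... navigate to the first imported record (by eye or by a find) ...
# Count from the current record to the end of the found set
Set Variable [ $count ; Value: Get ( FoundCount ) - Get ( RecordNumber ) + 1 ]
Omit Multiple Records [ With dialog: Off ; $count ]
# The omitted records are exactly the imported ones
Show Omitted Only
Delete All Records [ With dialog: Off ]
```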
If this question was asked in desperation and you want to avoid the issue in the future, there are a number of ways to do so.
In general I import my records into temporary tables for validation prior to updating my data tables.
This allows me to tag the import set with some audit flags and even update multiple tables from a single import by using a script process.
The use of audit flags (even creating a serialized import ID) lets me back out an import set at any time.
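Backing out a batch with such a flag could be scripted along these lines. This is a sketch only: Orders and Orders::ImportID are placeholder names, and $badImportID holds the serialized import ID of the batch you want to remove.

```
Go to Layout [ "Orders" ]
Enter Find Mode [ Pause: Off ]
Set Field [ Orders::ImportID ; $badImportID ]
Perform Find []
If [ Get ( FoundCount ) > 0 ]
    # The found set is now exactly the tagged import batch
    Delete All Records [ With dialog: Off ]
End If
```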
There are other methods, but this is one of the safest ways to prevent a bad data import. Once I've completed my update I can delete the records in the temp table (or do it at the start of a new import script).
It does add some processing time, but if you are performing lots of imports it's a really good way to go.
A simple method, if you don't want to go to the effort of staging tables, is to add an auto-enter timestamp to the data table; you may then be able to select a found set to delete by performing finds on the timestamp entries (if you know when the import ran). This only really works for new records. If you were performing an update on match fields, then you are more than likely out of luck (unless you have set some script triggers to collect audit information on changed fields).
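A timestamp-based cleanup might be sketched like this. All names are placeholders: Orders::CreatedTS is assumed to be an auto-enter creation timestamp, and the range should cover the window when the import ran.

```
Enter Find Mode [ Pause: Off ]
# FileMaker's range operator (...) matches records created in the window
Set Field [ Orders::CreatedTS ; "3/14/2024 10:15...3/14/2024 10:20" ]
Perform Find []
If [ Get ( FoundCount ) > 0 ]
    Delete All Records [ With dialog: Off ]
End If
```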
Thanks for the suggestions. I've added my data file to SVN so I can keep track of changes. Similar to making a backup copy, but more robust.
Just to mention, in addition you could use a temp table to import the records into, and then use a script to find and delete the records if the found-set logic Phil mentions isn't an option.