You can use the Month, Day, and Year functions to retrieve the parts of the date as variables, then combine them into a file-name variable and use that as your import file name if it's a consistent format. You might want to show your variables in a custom dialog box while testing to be sure the name is formatted correctly.
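As a rough sketch (the variable name here is made up), those functions could be combined in a single Set Variable step:

```
Set Variable [ $fileName ; Value:
    Month ( Get ( CurrentDate ) ) & "-" &
    Day ( Get ( CurrentDate ) ) & "-" &
    Year ( Get ( CurrentDate ) ) & ".xls" ]
```

Keep in mind that Month and Day return numbers without leading zeros, so check that this matches however your scale names the file.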
Every 10 seconds?
I'm wondering if it might be simpler and give you better performance to set up an ODBC connection to the file so that the file functions as another table in the database instead of importing from it so frequently.
Thought that was a bit odd too, because some imports may take longer than 10 seconds.
It sounds like the file is updated daily and the 10 seconds is simply a timed test.
That is a good idea but I don't think it will work for what we are doing. Here is what I am trying to do.
We have a farm: an animal is processed and an item is weighed (e.g., a NY Strip Steak).
1. Our scale spits out the SKU, Lot Number, Weight and Time to a spreadsheet. This can be hundreds of entries per day.
2. Our POS system (Filemaker) needs to import the sheet throughout the day and add each line as new serial number/lot number.
3. When we sell an item it can pull a lot number for the line item.
This all works when I point the script to the test.xls file. I am stuck at getting FileMaker to use xx/xx/xxxx.xls in the script.
Slashes aren't allowed in a file name, so you'd need a file-name format such as xx-xx-xxxx.xls.
If your scale is updating the same spreadsheet file all day, you should be able to establish an ODBC connection to it, and each time your scale updates the spreadsheet the new data will be accessible in FileMaker. I haven't tried setting up an ODBC connection to an Excel spreadsheet, but other posts here indicate that this can be done...
You might also investigate whether you can connect your scale directly to your database. I haven't researched this in a while, but there used to be a plug-in that could input data from hardware connected to an RS232 port. An updated version that uses USB might be just the ticket for skipping the spreadsheet altogether.
Am I doing this correctly? I am using Set Variable
Get ( CurrentDate )
When the script tries to run, it tells me it can't find a file named 2011. It seems to be pulling in the year but not the date in xx-xx-xxxx format.
It's returning the date in xx/xx/yyyy format, which is not a valid file name (the slashes make it a path to a file named yyyy).
Substitute ( Get ( CurrentDate ) ; "/" ; "-" )
We would have to see the entire import script.
That got me a little further. Now it says the file 7-19-2011 cannot be found.
Script is very basic
1. Go to Layout ["Serial Number" (serialnums)]
2. Show All Records
3. Set Variable [$$Date; Value:Substitute ( Get ( CurrentDate ) ; "/" ; "-" )]
4. Import Records [No Dialog; "$$Date"; Update matching; Mac Roman]
You need more than the file name, you also need the file path.
This will look something like this on a windows system:
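A hypothetical example, assuming FileMaker's filewin: prefix for Windows paths (the drive letter and folder are made up; yours will differ):

```
filewin:/C:/Processing/7-19-2011.xls
```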
Part of the Import Records step is choosing the file path.
I have it set as
Perform Without Dialog, Specify Data Source = File
The path in the next window is
The date value is pulled from the previous script step
Set Variable [$$Date;Value:Substitute (Get (CurrentDate); "/";"-")]
I still get the error, The file 7-20-2011 cannot be found. It then opens the find window to the folder where I can see the file.
I got it working. Here is how. Let me know if there is a better way.
Created a field "ExcelImportPath" with a calculated value
"file:../../Users/afadmin/Desktop/Processing/"&Substitute (Get (CurrentDate); "/";"-")&".xlsx"
Script is now
1. Go to Layout
2. Show All Records
3. Set Variable
4. Import Records File Path
What you have works, but it can be simplified if you want:
Instead of a field with a calculated value, you can just use that calculation in your Set Variable step. Good to see you found the error (the missing file extension, ".xlsx").
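For example, the field could be dropped entirely and the calculation moved into the existing Set Variable step (same path as in your working version):

```
Set Variable [ $$Date ; Value:
    "file:../../Users/afadmin/Desktop/Processing/" &
    Substitute ( Get ( CurrentDate ) ; "/" ; "-" ) & ".xlsx" ]
```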
Just be aware that if you make a change to the structure of the table where the import occurs (even if it's just to add a global or calculation field), the stored import settings will need to be reset. This is one reason why I generally set recurring imports to populate temp/staging tables, then use scripts to process the temp records into my main data tables. This gives you a bit more control over preventing duplicate records and so forth.
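A rough outline of that staging pattern (the layout, table, and script names here are hypothetical):

```
# Import the day's spreadsheet into a staging table first
Go to Layout [ "TempImport" (TempImport) ]
Import Records [ No Dialog ; "$$Date" ; Add ; Mac Roman ]
# Then validate the staged rows and move only the new ones into the main table
Perform Script [ "Process Temp Records into serialnums" ]
```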