How WERE you doing this before it stopped working?
This is a pretty straightforward scripted process:
set variable $id = parentTable::id
duplicate record (parent record)
set variable $newid = parentTable::id
enter find mode
Go to layout - child layout
set field childTable::parentID = $id
perform find
then you loop through, duplicate the child records, and set the parentID field with $newid, returning to the parent layout when complete.
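Spelled out as script steps, the whole routine looks roughly like this (layout names are illustrative, and note that where the dupe lands in the found set depends on the sort order, which is what the rest of this thread gets into):

    Set Variable [ $id ; parentTable::id ]
    Duplicate Record/Request
    Set Variable [ $newid ; parentTable::id ]
    Go to Layout [ "Children" (childTable) ]
    Enter Find Mode [ ]
    Set Field [ childTable::parentID ; $id ]
    Perform Find [ ]
    Go to Record/Request/Page [ First ]
    Loop
        Duplicate Record/Request
        Set Field [ childTable::parentID ; $newid ]
        Go to Record/Request/Page [ Next ; Exit after last ]
    End Loop
    Go to Layout [ original layout ]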
also, you can have a drop-down for, say, the user to select "add 3 days", and then loop through three times, auto-increasing the date each time based on the first parent record.
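A sketch of that drop-down variant; the field and variable names here are made up for illustration:

    Set Variable [ $copies ; parentTable::gDaysToAdd ]      # e.g. "add 3 days" -> 3
    Set Variable [ $date ; parentTable::startDate ]
    Set Variable [ $i ; 1 ]
    Loop
        Exit Loop If [ $i > $copies ]
        # ... duplicate the parent and its children as above ...
        Set Field [ parentTable::startDate ; $date + $i ]   # auto-increase the date each pass
        Set Variable [ $i ; $i + 1 ]
    End Loop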
We've just been discussing this in a thread, 'Quote into Reservation' [https://fmdev.filemaker.com/message/121363], and,
At 6:36 AM -0700 7/30/13, Bruce Herbach wrote:
Another effective method was Ray Cologon's Duplicate Hierarchic set: http://www.nightwing.com.au/FileMaker/demosX/demoX06.html This was developed for FM10 but will work in 12 and could be adapted to your requirements.
Shows a rather elegant way to do this with minimal overhead. You might want to check it out.
I use a slightly different approach: a generalized script I call 'New Record' in every file, which takes a complex parameter that includes the destination table/table occurrence for the new record as well as the field data, and does just what you'd think. I then create another script that repeats the process as many times as needed for a specific 'duplication' event and calls the 'New Record' script. I can do this with 'separated' files or local ones.
Ray used relationships which auto-populate the parent keys and stays on a single layout, which is very slick. His process doesn't commit the records until all of the children (and grandchildren, and great-grandchildren, and ...) are created. That can certainly save time in a looping process.
-- Drew Tenenholz
That's basically what I was doing. The script was getting "lost" in the child records. At some point it started duplicating the dupes rather than the next records in the found set.
For example, the found set in the child record table are records A, B, and C.
The script duplicates the first record. Now we have A,A-dupe, B, and C.
That should make "A-dupe" the current record.
The script step to go to the next record should then go to B, where the routine repeats.
This works fine if there are only three or four child records in the original. When there are more than that, it works correctly for the first one or two child records, then all the new child records become duplicates of record B or C, but each parent record has the correct NUMBER of child records.
I tried to get around this issue by using a portal on the first record and replacing the steps in the loop that go to the next record with steps that go to the next portal row and then go to the related child record. Unfortunately, I got the same result.
I was hoping there would be a different approach I could try.
Can you post your script? Are you omitting the records that have been duplicated?
You have to know exactly where a duplicate lands and where your record pointer is, which can get a bit awkward. The whole process is easier if you work by omitting the duplicated and old records, and easier still if you start from the end, omit 2 records per duplicate/adjust step and loop until the found set is empty:
You have A B C D, go to last and duplicate, which gives you A B C D D, and your record pointer is on the duplicate; adjust, omit 2, which leaves you with A B C; etc.
What if you start at the last record in the found set and then work backwards? In other words, instead of Go to Record First, you use a Go to Record Last and then work your way back to the top of the list using Previous instead of Next. Another technique is to omit the records you've duplicated from the found set so that there's little risk of duplicating the same one twice.
You can even combine the techniques by starting at the last record, duplicating it (you're now on the new last record) and then call Omit Record twice. It's not particularly pretty code, but it removes the duplicate and the first original from the found set, and you're now standing on the next record you need to duplicate. You don't even need a Go to Record Next step. Just exit the loop when the found count is zero.
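As script steps, that combined technique might look like this (the Set Field is a placeholder for whatever adjustments each dupe needs):

    Go to Record/Request/Page [ Last ]
    Loop
        Duplicate Record/Request                        # the dupe lands at the end; you're standing on it
        Set Field [ childTable::parentID ; $newid ]     # adjust the new record
        Omit Record                                     # drops the dupe from the found set
        Omit Record                                     # drops the original you just copied
        Exit Loop If [ Get ( FoundCount ) = 0 ]
    End Loop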
Hope this helps,
Co-author of O'Reilly's FileMaker Pro 12: The Missing Manual
I find I first have to sort the list of records I want to duplicate if I want it to work reliably.
Depending on the sort key, the duplicated records can either land just below the original or at the bottom of the list.
They will land at the bottom if you sort by Primary Key - they will land just below the original if you sort on something like Make and Model.
Try sorting and I think you will get reliable results.
Best wishes - Alan Stirling - London UK.
Actually, an easier technique would be to export the found set of child records to a temp file, re-import the temp file, and do a replace with the $newid
So in addition to this:
Go to layout - child layout
set field childTable::parentID = $id
add this on the end:
set variable $path = Get ( TemporaryPath ) & "temp.fmp12"
export records to $path
import records from $path
replace field contents childTable::parentID with $newid
return to parent layout
This would take care of the danger of duplicating or getting lost in the child records.
And a better, more reliable technique is Ray Cologon's approach with portals; Bruce gave a URL for it above: http://www.nightwing.com.au/FileMaker/demosX/demoX06.html It works on FileMaker Server as a scheduled script or on a client machine, doesn't depend on the external file system (with its potential delays) or on the sort order (with its issues around persistent sorting), and won't run into problems with multiple users trying to do the same steps at virtually the same time and stepping on each other's record IDs.
I dislike 'Duplicate Record' in so many ways. Lookups don't re-lookup, and auto-enter data you would want refreshed isn't. Special one-time pricing IS duplicated. Item ordered dates and promised delivery dates are all wrong.
Export/Import will fix some of this (so long as you export the correct fields), but then the records are temporarily orphans, or matched to the wrong parent, until you 'Replace All'. If the replace fails, you don't know which specific records weren't updated, only that there was a problem you can't fix without user intervention. The same goes for any other 'replacements' you need to do to fix the bad data you started with.
Using a script that creates new records based on the original data, either through a portal where the parent ID is inserted by the relationship or through a script that goes to the right layout context and sets the parent ID, will not run into any of these problems. Further, the script can be custom fit to take only the field data necessary to complete the child record. If you want to leave the auto-enter and lookup fields alone, you do that. If you want a warning that item #3's very special price has expired and the standard price is going to be used, you can do that too.
I don't think this is much easier; I think it is much better than the methods discussed so far. It is also not that hard to do. With a bit of 'Set Field by Name', you can write a quite elegant script that doesn't need lots of specific 'Set Field' steps, and it becomes very flexible if you change your fields or TO names later on.
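A minimal sketch of that idea, assuming the caller hands the script two return-delimited lists, one of fully qualified field names and one of matching values (all names here are hypothetical):

    # context: a layout based on the destination table occurrence
    # $fieldList = "childTable::parentID¶childTable::item"
    # $valueList = $newid & "¶" & $item
    New Record/Request
    Set Variable [ $i ; 1 ]
    Loop
        Exit Loop If [ $i > ValueCount ( $fieldList ) ]
        Set Field By Name [ GetValue ( $fieldList ; $i ) ; GetValue ( $valueList ; $i ) ]
        Set Variable [ $i ; $i + 1 ]
    End Loop
    Commit Records/Requests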
My 2 cents,
Thanks for all the feedback. I did say that my description was a simplified description of the issue. I did, in fact, start with the last record and move backward through the set. I did not omit the last copied and created record, which I think is a great idea. (Thanks to those who suggested it.) I did use a portal, but the script seems to be failing when the user needs more than just a couple of dupes. I downloaded the Cologon file and will take a look at it in the morning, in the hope that I can learn something from that approach, too, and then try rewriting my code.
Okay, I think I have a solution that works. (I'm getting the correct results in my preliminary tests and the client will let me know whether it holds up when they do their in-house testing).
Here's the basics of what I did:
The script is run from a layout based on the parent record, which contains a portal to the child record table. I start from the context of the last row of the portal and use Go to Related Record to display that child in a layout based on the child table. I duplicate that record, then loop through the remaining portal rows, using a global field as a counter that decrements until we reach the top row and the loop ends. Another global increments the parent record key by 1 and enters it into each new child record as it is created. The new parent record with that value is created last. Then we go to that record and copy it in the same way ... until the third global counter field (holding the number of copies to be made) reaches zero.
This is pretty much what I had at the start: one loop for child record creation inside another for the parent records. What I changed was not using the same relationship for both loops. I created a second relationship for the duplication of the child records, a self-join on the child record table (primary key = primary key). This ensures that, when the Go to Related Record step runs, only one record is displayed. When it is duplicated, the active record is the dupe and values can be entered or changed. The next step takes us back to the portal on the original layout and moves up one row. This keeps FMP from displaying all the other child records when they are not needed and getting "lost" when there are too many of them. It also eliminates the need for repeated Finds (or Omit and Show Omitted Only) while accomplishing the same thing (potentially a bit faster? I'm not sure at this point).
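Roughly, the inner (child) loop as described, with stand-in names for the globals and the self-join table occurrence:

    # context: parent layout with a child portal; globals::gCounter holds the portal row count,
    # globals::gNewParentID holds the incremented parent key
    Go to Portal Row [ Last ]
    Loop
        Go to Related Record [ From: childSelfJoin ; Layout: "ChildSelf" (childTable) ]  # self-join shows only this one record
        Duplicate Record/Request                                # the active record is now the dupe
        Set Field [ childTable::parentID ; globals::gNewParentID ]
        Go to Layout [ original layout ]                        # back to the portal
        Set Field [ globals::gCounter ; globals::gCounter - 1 ]
        Exit Loop If [ globals::gCounter = 0 ]
        Go to Portal Row [ Previous ]
    End Loop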
Okay, I know I am going to get comments on this: why use global fields instead of variables? Before becoming a FMP developer, I worked with several other DBMS apps, all of which used variables, so I am by no means variable-phobic. I've just heard too many complaints from other developers about variables not working, or not working as well as globals, to want to use them except in cases where ONLY a variable would work. It's just one less thing to have to worry about when troubleshooting.