Last time I checked, Perform Script on Server will make global variables "stick" the way they do in single-user mode.
Alternately, a common practice is to create a single-record "prefs" table, with non-global fields, and then use a triggered script to write those values into their corresponding global fields at startup.
I'd strongly recommend never using Globals for data like that. At all.
Like Tom said, having a single-record table makes more sense, and allows editing like any other field, without the gymnastics of making globals do something they're not intended for. (I call my table "system" because there are many levels of "preferences" in different tables, whereas this one defines system-wide settings.)
Also, Tom mentioned copying system values into globals, which I've done up until recently, when I realized that ExecuteSQL made getting a value from a single-record table just as easy as grabbing it from a global. I now have a very simple custom function that takes only a field as a parameter, and runs ExecuteSQL against my system table for just that field.
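For illustration, here's a minimal sketch of such a custom function, assuming the one-record table is named "system" (the function and field names are hypothetical, not from the original post):

// Custom function: GetSystemValue ( field )
// Call it by passing the field itself, e.g. GetSystemValue ( system::CompanyName )
Let ( [
  fullName = GetFieldName ( field ) ;                        // returns "system::CompanyName"
  fieldName = GetValue ( Substitute ( fullName ; "::" ; ¶ ) ; 2 )
] ;
  ExecuteSQL ( "SELECT \"" & fieldName & "\" FROM \"system\"" ; "" ; "" )
)

Because the table has exactly one record, the query needs no WHERE clause; quoting the identifiers protects field names that contain spaces or reserved words.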
Just as a personal preference, my inclination is to explore writing the user's global to the host's global. There's just something about having to build YAT (Yet Another Table), a structure that can hold many, many records, just to write data to a field that should have one and only one value. It goes against my instinct.
I know, I know... I'm not trying to get all Edgar Codd about it and push for sixth normal form. It's just my preference to avoid designing tables that should never have more than one record in them.
So I have a script that does this...
- Set Variable [ $$Version_Number; Value:Prefs::Version_Number ]
- Perform Script on Server [ “Save_Globals” ] [ Wait for completion ]
And in that Save_Globals script I just have...
- Set Field [ Prefs::Version_Number; $$Version_Number ]
Of course it doesn't do what I thought it would so I'm missing some secret in getting that to work.
Any process that is run on the server (PSOS or a server-side scheduled script) runs in a session of its own. Setting $$global in PSOS or a scheduled script sets it for the scope of that session only. Only the server session has access to it. It will NOT affect any client-side instances of $$global. The only way to get the value back from the $$global on the server is to set it into a non-global field OR pass the value as an Exit Script ($$global) parameter.
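So, roughly, the fix for the script above is to send the value down as a script parameter and (optionally) read the server's value back as a script result. A sketch, assuming the same script and field names as above:

Client script:
- Perform Script on Server [ “Save_Globals”; Parameter: Prefs::Version_Number ] [ Wait for completion ]
- Set Variable [ $$Version_Number; Value: Get ( ScriptResult ) ]

Server script ("Save_Globals"):
- Set Field [ Prefs::Version_Number; Get ( ScriptParameter ) ]
- Exit Script [ Result: Prefs::Version_Number ]

Note that the $$variable set on the client before calling PSOS never reaches the server session; only the script parameter does.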
Just to clarify what I'm doing above (or trying to do): I'm taking the contents of the field that is my 'global' field and stuffing it into a variable (not a field) just to hold on to that number.
I'm invoking a "Perform Script on Server" so that I can take that value from the variable and put it in the host server's global field.
I'm thinking that the variable $$Version_Number should still be 'intact' as far as both my session and the server's session goes. Then when I do the "Perform script on server" that field (Prefs::Version_Number) should be the global field belonging to the server, shouldn't it?
And it's easy in this conversation to think about variables and their scope, but this is a global field and not just a variable. Understood... I have my global field and the server has its global field.
I guess the one part of that I'm not getting is what you mean by passing the value as an Exit Script ($$global) parameter. But it's a lead and I'll do some research on that.
Thanks much for the help!
Perhaps this technique will help you. I use it all of the time when I need a global field on the server that the user can edit. I just used it a few minutes ago so I could give the user a way to enter terms and conditions into a field that needs to be the same on all invoices.
Step 1: Create a new table and add a new field as a standard field (not as a global).
Step 2: Create a new record and enter the data that you want to be used as your global data.
Step 3: Open the relationship graph and join the new table to the table where you want the global data to appear. Make sure that you use the "X" join (a cartesian join) option to define the relationship. It does not really make a difference which field you use to join them so long as you use the "X" join.
Step 4: You're done... no scripts required.
Here is a screen capture of the relationship example. The Terms and Conditions is the field that I wanted to behave like a global field.
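In outline, the setup looks like this (the table and field names are from my example; yours will differ):

System table: one record; Terms_and_Conditions defined as a regular (non-global) text field
Relationship graph: Invoices X System (the "X" cartesian join, on any pair of fields)
Invoice layout: place System::Terms_and_Conditions on the layout

Because the cartesian join matches every record to every record, each invoice sees the same single System record, so the field behaves like a global but remains editable, stored data.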
In FileMaker help look at the Exit Script control script step.
That script step, coupled with Set Variable [ $variable; Value: Get ( ScriptResult ) ], is very useful.
Look in Help for details on the two script steps and the one function I mentioned.
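A minimal sketch of the pattern: the script running on the server ends with

- Exit Script [ Result: Prefs::Version_Number ]

and the client script, immediately after its Perform Script on Server step, picks the value up with

- Set Variable [ $result; Value: Get ( ScriptResult ) ]

(The field name here is just an example; any expression can be returned as the result.)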
Well, then, create an OnFirstWindowOpen trigger script that sets all these globals... or just set them into global variables, for that matter.
If it's persistent data, though (which it is) or data that could change (which it is) or data that the user should be able to update on their own (which it is) then I'd make it data in a table. If it makes you feel better, create two records and let an administrator decide which set to use at any given time.
I really don't believe that even Mr. Codd would object.
Global fields are roughly equivalent to temporary variables. (And in fact, that's what we had to use before $variables were introduced.)
I don't mean to be snarky (although I just reread my post and it does sound a bit like that, sorry), but I've seen this employed before, and it's horrible to run across when you're trying to do something like a simple address update for the company. The initial value in a global field is whatever was in that field the last time the file was closed on the host. So I've had the experience of having to take a whole solution down just to change an address and then re-host the file. I can't see anything but downside here.
Global variables and storing data do not belong together.
You should even have routines emptying all your globals, to be run before uploading a DB to a server.
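Something as simple as this, run right before closing the file for upload, does the job (the field names are just examples):

- Set Field [ Prefs::gCurrentUser; "" ]
- Set Field [ Prefs::gLastSearch; "" ]
- Set Field [ Prefs::gTempID; "" ]

That way the hosted file opens with empty globals for every session, instead of whatever stale values happened to be in the file when you last closed it.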
Indeed. One good reason is anyone with a copy of FileMaker Advanced can reset the values "stored" in global variables. So don't depend on them for process control, and for goodness' sake, don't store anything sensitive (like security information) in them.
Also, Tom mentioned copying system values into globals, which I've done up until recently, when I realized that ExecuteSQL made getting a value from a single-record table just as easy as grabbing it from a global.
Sure -- it just depends what you want to do with it. Global fields can do things that global variables can't, e.g. drive a relationship. And access to global fields can be controlled in ways that variables cannot, as Mike mentions.
Tables are cheap. I'll say again: it's very, very common to use a "system" or "prefs" table for stuff like this.
That said -- the only way to make a server-side script do something with non-stored data is to pass that data as a script parameter. Your script that runs on the server would then use Get(ScriptParameter) to receive that data and act on it.
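If you need to send more than one value, a common trick is to pass a return-delimited list and pull the pieces apart with GetValue. A sketch (the script name and second field are just examples):

Client script:
- Perform Script on Server [ “Save_Prefs”; Parameter: List ( $$Version_Number ; $$Company_Name ) ] [ Wait for completion ]

Server script ("Save_Prefs"):
- Set Field [ Prefs::Version_Number; GetValue ( Get ( ScriptParameter ) ; 1 ) ]
- Set Field [ Prefs::Company_Name; GetValue ( Get ( ScriptParameter ) ; 2 ) ]

This breaks down if a value can itself contain a return character; in that case use a more robust encoding for the parameter.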
Agreed. Persistent data is what tables are for. Tables are cheap and reliable and easy to work with, no clever tricks required.
The approach you describe with variables is fine, and using a script to set a bunch of variables at startup works, but as your solution grows you'll find more and more happening in that startup script. That slows down startup, which turns the thing into a dog, which degrades the experience for your users and limits your flexibility in developing new solutions based on the existing system.
You can avoid all that by minimizing this kind of initialization activity, and you can minimize the need for initialization by using tables to store persistent data.