Use the Security options to make records read-only based on that field, or, if only a single record is shown, switch layouts to a layout with browse-mode entry disabled when Value of Field = Accepted.
It is kinda the answer to "how big can FMP 12 get?"
It depends on your file size and your file backup methods.
Some of my files are measured in gigabytes. There are issues with working with tables that contain millions of records: care must be taken not to display summary fields on your layout while all records are in the found set, for example, and some searches of the data can take longer than is acceptable unless care is taken to get the desired results without having to index a field for the needed search.
To expand on Jim's suggestion: See "Editing record access privileges" in FileMaker Help and check out this particular sub section: "Entering a formula for limiting access on a record-by-record basis" for a description of how to set this up.
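As a rough sketch of what that formula can look like (assuming a status field named Status in your table; adjust the names to match your own solution):

```
// Hypothetical record-level "Edit" privilege calculation.
// Records stay editable only while Status is not "Accepted".
Status <> "Accepted"
```

In Manage Security, a calculation like this goes under the privilege set's custom record privileges, using the "limited..." option for editing records in that table.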
Expanding on Phil's thought about file size: I like to think about the namesake... FileMaker... I think about paper files, and when the number of file cabinets exceeds the working space of the office, what do you do?
When you design your DB, keep that in mind. You can still scan history if you want to go to the "Warehouse".
I typically purge on an annual basis and historically save purged data if needed.
The nice feature of FMP 12 is that container fields can be stored externally rather than embedded in the file. Container fields can hold PDFs, photos, movies, and other large multi-megabyte items. Thus, sorting, indexing, finding, uploading, etc. don't have to move all that data around in RAM.
Thanks guys for responding quickly!
I'll look into your suggestions for blocking field/records!
As for the size of the file, it's weird that it's hard to find an answer to such a basic question... So thanks for that. I'll start investing in a warehouse :)
it's weird that it's hard to find an answer to such a basic question
It's a question where many different variables affect any answer we might give you, so it's really not that simple a question.
It is a good question, but it had no constraints. Thus my ball-of-string-to-the-moon joke.
I learned by "smacking" into a backup wall. When my backups needed more than a DVD could hold, I went to a 16 GB flash drive. Then...
I smacked again and started noticing long times for certain types of searches. Then I bought a 2 TB backup drive... then I realized my office was full of file storage, with 50 minutes of server downtime for backups.
So if you don't ask the question by describing your system's limits on file size for backups, server or computer speeds, and what you are sorting, you will get a fuzzy answer.
I would approach it from the record size in bytes multiplied by the maximum number of records, as a starting point. I started seeing the effects on a standard two-year-old computer at about 8-10 GB with 7 users on high-speed local Ethernet.
@Phil The reason I was forced to use backups such as DVDs, flash drives, or external drives: I live in the Hurricane Katrina target area, and all records were washed away by the "Wrath of God". So we must store backups in 2 different areas, and the time to back up over the internet is too slow (18 hours/day).
Phil and Jim,
I realise that it's not that simple a question, but when I was asking, I really wasn't at the point of taking all of this into consideration..
Being fairly new to FileMaker, I can see a huge size difference in the file when I spend a night developing scripts.. by huge I mean 0.5 mo... it's not much, but when your db is 4 mo it seems like a lot.
So I guess by asking, I was expecting your answers, or I wouldn't have been surprised if you had told me that I'm doing something wrong and that an Excel file with thousands of records should only be around 10 mo..
Also, back to my original question. I checked what you guys told me. I can see the use of this, but is there a way to make only one field unchangeable based on another field's value?
When the status is "accepted",
people can modify the record, BUT the delivery date field AND the status field become blocked to any changes?
Thanks a lot
Yes. The easy way is to check in a script before deciding which data entry layout to use.
"or, if only a single record is shown, switch layouts to a layout with browse-mode entry disabled when Value of Field = Accepted."
What this means...
You can block data entry on any field. How?
Layout mode | Select a field | Use Inspector Data Tab | Behavior | Field Entry and Un-check Browse Mode.
This will prevent Data entry!
Before using a Go to Layout [YourStandardEntryLayout] step in a script, look at the status field. If its value is "Accepted", switch to a copy of
YourStandardEntryLayout that has all appropriate fields "locked" by un-checking Browse Mode. Perhaps call your new layout YourRestrictedLayout.
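Putting that together, a minimal script sketch (the table, field, and layout names here are placeholders, not from the original posts):

```
# Route to the locked layout when the record is accepted.
If [ YourTable::Status = "Accepted" ]
    Go to Layout [ "YourRestrictedLayout" ]
Else
    Go to Layout [ "YourStandardEntryLayout" ]
End If
```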
PS: I am sorry, what does 10mo mean? I assumed 10Mb, or 10 megabytes.
AH! ok, now I get it!
Yes, 10 mo means 10 MB ("mo" is the French abbreviation for megabyte)... My first language being French, you may see little spelling mistakes here and there.
Your back up system (having more than one physical location for your files) sounds like the minimum requirement for any back up methodology.
Another factor in the "size" question is the actual data model used (what fields, tables, and relationships are defined in your system). Simple indexed tables of just a few fields can get quite huge and still perform just fine for searches and sorts. A much smaller number of records in a table with lots of unstored calculations, fields with indexing turned off, summary fields, etc. may, depending on the design of your layout and how you manage found sets on that layout, result in long delays--even delays on the server hosting the database that can affect performance for other users. Often, just a careful layout design with scripted support, to make sure that hundreds of thousands of records aren't being indexed or having summary values computed, will mitigate that issue and make working with that table quite functional.
About design for speed and size.....
I think the two words are almost equivalent. Before the advent of FMP 12 and external file referencing, I would have recommended a separate file for what I call "limitless" data sizes. Examples are movie files [.mov] that can be gigabytes in size, or scans of documents that can be 100+ page PDF files. This container data had no size limits, so I always separated it from the main files. External referencing accomplishes the same thing.
If you have a backup routine, it normally checks the last modification time to know which files need a backup. Embedding such large data would trigger a huge backup of the whole file, even though 99% of it had not changed. In addition, it was rare to need searches, finds, or calculations on large container fields.
Indexing is done for speed, but it can be triggered by a single rare search. I like to set mine to Automatic until a few tests have shown what the most frequent use is, then turn off indexing where it's rarely needed for an increase in normal speed.
I would expect turning off indexing on a field to slow down any searches and sorts that reference that field, but to speed up data imports and batch data updates, such as a Replace Field Contents operation, that modify data in that field.
The trade-off in indexing is that fields with full indexing permit faster searches and sorts but slower operations that require updating that field's indexes. When you specify a search on an unindexed field, FileMaker slows to a crawl (on large tables) while it first builds the missing index--even if this is a temporary index for a field that cannot have indexing enabled in Field Options. Thus it's a size-versus-speed issue. If all your fields are indexed, the file is much larger and operations that update the indexes are much slower, but searches and sorts are much faster.
One trick that I've discovered for use on large tables is to perform a find in two stages:
Stage 1: Perform a find, entering criteria only into indexed fields.
Stage 2: Return to find mode and specify criteria in unstored or unindexed fields. Use Constrain Found Set instead of Perform Find to constrain the found set produced by stage 1.
My experience has shown that some searches on very large data sets are many, many times faster with this two stage search than if I enter find mode and specify all the criteria in a single request.
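As a sketch, the two-stage find might be scripted like this (the table and field names are made up for illustration; `CustomerID` is assumed to be indexed and `cBalanceDue` an unstored calculation):

```
# Stage 1: find on the indexed field only.
Enter Find Mode [ Pause: Off ]
Set Field [ Orders::CustomerID ; $customerID ]
Perform Find []
# Stage 2: add the unstored criterion against the smaller found set.
Enter Find Mode [ Pause: Off ]
Set Field [ Orders::cBalanceDue ; ">0" ]
Constrain Found Set []
```

The second stage only has to evaluate the unstored calculation for records already in the found set, which is why it can be dramatically faster than one combined request.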