How do I search across many tables?
More detail is probably needed in order to give you a good answer. What exactly are you trying to accomplish?
I built a Bible database with 66 tables/layouts with FM12A. I want to search across all of the tables for a word or phrase.
I'm not sure why you have 66 tables here (perhaps one for each book of the Bible?). But that's a significant data modeling issue, as you're quickly finding out. How do you plan to display the results once the found sets are returned? Perhaps a Virtual List technique? Otherwise, you have 66 separate layouts, each of which potentially has results in a found set that you might want to display. Ouch.
In any event, to answer the original question, you'll have to script this Find. Allow the user to enter the search term into a global field. Then write a script that navigates to layouts based on each of the 66 tables, performing a Find on each layout in turn. Once you've done that, you have a found set (including perhaps an empty found set) on each layout, which you can then assemble into a Virtual List for user interaction.
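FileMaker script steps aside, the logic of that scripted Find amounts to a loop over tables. Here is a sketch in Python/sqlite3 standing in for the multi-table design (the table names and sample verses are purely illustrative, not from the original file):

```python
import sqlite3

# Illustrative stand-in for the 66-table design: one table per book.
conn = sqlite3.connect(":memory:")
books = ["genesis", "exodus", "john"]
for book in books:
    conn.execute(f"CREATE TABLE {book} (chapter INTEGER, verse INTEGER, text TEXT)")
conn.execute("INSERT INTO genesis VALUES (1, 1, 'In the beginning God created the heaven and the earth.')")
conn.execute("INSERT INTO john VALUES (1, 1, 'In the beginning was the Word.')")

def search_all(term):
    """Perform the 'find' on every table in turn and merge the found sets,
    the way the scripted loop over 66 layouts would."""
    results = []
    for book in books:  # one pass per table
        rows = conn.execute(
            f"SELECT chapter, verse, text FROM {book} WHERE text LIKE ?",
            (f"%{term}%",),
        ).fetchall()
        results.extend((book, *row) for row in rows)
    return results

hits = search_all("beginning")  # every table must be visited to build one result list
```

The point of the sketch is the shape of the work: one query (or one Find) per table, plus bookkeeping to merge the per-table found sets into a single list for display.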
Mike_Mitchell wrote: I'm not sure why you have 66 tables here (perhaps one for each book of the Bible?). But that's a significant data modeling issue […]
I second that. You should preferably have all texts in one table (broken down into whatever portions are appropriate), using attributes and maybe related data to denote their position within the larger text. Then you would just search in this one table.
I think your effort would be better spent designing a good data model, instead of cooking up a (necessarily complex) script that needs to overcome the shortcomings of the existing one. Once you have that model, this script (and other ones) will be much easier to write.
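With everything in one table, the same search collapses to a single query, and the position attributes ride along with every hit. A minimal sketch in Python/sqlite3 (the field names are my own, chosen for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One table for all text; book/chapter/verse are attributes of each record.
conn.execute("""CREATE TABLE verses (
    book TEXT, chapter INTEGER, verse INTEGER, text TEXT)""")
conn.executemany(
    "INSERT INTO verses VALUES (?, ?, ?, ?)",
    [
        ("Genesis", 1, 1, "In the beginning God created the heaven and the earth."),
        ("John", 1, 1, "In the beginning was the Word."),
        ("John", 11, 35, "Jesus wept."),
    ],
)

# One table, one find -- no loop over layouts, no merging of found sets.
hits = conn.execute(
    "SELECT book, chapter, verse FROM verses WHERE text LIKE ?",
    ("%beginning%",),
).fetchall()
```

In FileMaker terms this is one Find on one layout; the found set already contains everything needed to display or sort the results.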
I heartily endorse the observations by Mike and erolst—the data model needs to be rethought. Consider this:
1. If you set out to make a DB of every letter you have ever received, would you create a separate table for each author? Or would the identity of the author simply be a characteristic of each letter?
2. And how would you manage whatever details of each author you wanted to store? Would you store your mother's details over and over again in the record of each letter you received from her? Or would you create a separate table for authors and then link this to the letters they sent you?
3. How granular would you want records to be? Would you want the entire text of a letter to be in a single record? Or would you want to break it down into smaller units? A paragraph? A sentence? A single word? How would breaking it into smaller elements be of benefit? And if you do break it down, how would you maintain links between the elements?
If, as Mike surmises, you have 66 tables simply because each book of the Bible is a separate table, then you need to think beyond that. If all you want is a digital version of the Bible, why not an eBook? But if you want a database, you need to think about the DATA aspect of the project.
And while FileMaker might be a good tool, it might not be the best tool for indexing large document collections.
You may also consider a probabilistic search engine such as Xapian (http://xapian.org ; see also Omega), which allows relevance ranking of your result sets and has a good word stemmer for both queries and the indexer.
It all depends on what you intend to do: if you want to build a structure over your documents (books, chapters, verses) and also access your text via this structure, a relational database such as FileMaker is a good and feasible approach; if you want to rely essentially on full-text search, I would go for a good search engine in your place.
2 cents from somebody who has processed a document collection approximately 700 times the size of the Bible.
+1 on that. I did my "index" for the FM/XML/XSLT book using a Verity Collection (ColdFusion/MS SQL). What would have taken days took less than 4 hrs. Make no mistake: the original text was in FMP.
It really does depend on what you need to do.
How do I keep data separate coming from one table? I understand I can do a find. I'm a little lost as to where to put the script. I created this file with 66 tables and layouts. Do I need to create another field to identify different data?
But you've already got the answer.
Do NOT create a file with 66 tables and 66 layouts.
Read the previous messages again.
What do you mean, "where to put the script?" Scripts are "put" in the Manage Scripts dialog.
As for how to identify different data, yes, each record will have one or more fields to discriminate it from other records. In this case, you would have, perhaps, a field for Book, Chapter, and Verse. (These would likely be unique keys rather than literal text, but they could be literal text since the books, chapters, and verses of the Bible won't change.) When you perform a Find, you can sort the results by Book, Chapter, and Verse.
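One practical reason to prefer keys over literal book names: sorted as text, "Exodus" lands before "Genesis". A numeric book key preserves canonical order. A sketch in Python/sqlite3 (table and field names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE books  (book_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE verses (book_id INTEGER, chapter INTEGER, verse INTEGER, text TEXT);
    INSERT INTO books VALUES (1, 'Genesis'), (2, 'Exodus'), (43, 'John');
    INSERT INTO verses VALUES (2, 20, 13, 'Thou shalt not kill.');
    INSERT INTO verses VALUES (1, 1, 1, 'In the beginning God created the heaven and the earth.');
""")

# Sorting on the numeric key keeps canonical order; sorting on the
# name field would put Exodus ahead of Genesis alphabetically.
rows = conn.execute("""
    SELECT b.name, v.chapter, v.verse
    FROM verses v JOIN books b ON b.book_id = v.book_id
    ORDER BY v.book_id, v.chapter, v.verse
""").fetchall()
```

In FileMaker this maps to sorting the found set on the key field rather than on the displayed book name.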
I would echo the other wise advice you have received under this question and suggest you take a look at this recent thread on Single Table Architecture.
Then - having thought carefully about the advice you have received - what you could do very easily given the nature of your data - which I believe has one table for each book of the Bible (?) - is this:
(1) Create a new file in FM13 - with an appropriate theme for your content - and import a typical table from your existing solution
(2) Add a key field which will serve to identify the Book from which data is derived (use an alphanumeric key) and either a second field to hold the textual name of the relevant Book - or - a little more complex but better in the long run - a second table to just hold the book names linked by the new key field
(3) Import the data from one short Book table in the old system - I imagine each verse is a record - on an isolated import layout for speed - i.e. based on a Table Occurrence with zero relationships - and add your Book Identifiers using Replace or something similar, and deal with the Book Name as above
(4) now build how you will search it - experimenting with this small data set - I use portal filters for this sort of thing - others will prefer other methods - see my recent piece on how to build a good portal filter here
(5) Test it on WebDirect - if you have access to FMServer13 and perhaps fine tune your Theme styles if required
(6) Finally import each of your other Books in turn - marking the set of records from each with their allotted Book Identifier and Book Name
You now have your entire data set in a single table and can do whatever you wish with it - including search it all - and only have 1 table and layout to maintain and improve - not 66.
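The consolidation in steps (1)-(6) can be sketched as a loop: pull each old per-book table into the single new table, tagging every record with its book identifier as it arrives. A Python/sqlite3 analogue (all names and sample data are illustrative, not from the actual file):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Stand-in for the old 66-table file: one table per book.
conn.executescript("""
    CREATE TABLE old_genesis (chapter INTEGER, verse INTEGER, text TEXT);
    CREATE TABLE old_exodus  (chapter INTEGER, verse INTEGER, text TEXT);
    INSERT INTO old_genesis VALUES (1, 1, 'In the beginning...');
    INSERT INTO old_exodus  VALUES (20, 13, 'Thou shalt not kill.');
""")

# The new normalized home, plus the book-name lookup from step (2).
conn.executescript("""
    CREATE TABLE books  (book_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE verses (book_id INTEGER, chapter INTEGER, verse INTEGER, text TEXT);
""")

# Step (6) as a loop: import each book table in turn, tagging every
# record with its allotted book identifier (the 'Replace' step).
old_tables = [(1, "Genesis", "old_genesis"), (2, "Exodus", "old_exodus")]
for book_id, name, table in old_tables:
    conn.execute("INSERT INTO books VALUES (?, ?)", (book_id, name))
    conn.execute(
        f"INSERT INTO verses SELECT ?, chapter, verse, text FROM {table}",
        (book_id,),
    )

total = conn.execute("SELECT COUNT(*) FROM verses").fetchone()[0]
```

After the loop the old per-book tables can be retired; everything downstream (finds, sorts, portals) works against the single `verses` table.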
This method and the inherent simplicity you are creating will not take you long - since you only need do things once and it plays to FileMaker's inherent strengths - it also - if you use FMS13 and WebDirect - gives you an easy means of making your system accessible on mobile devices - so someone can search for Biblical quotes on their iPhone - for example.
This process is traditionally referred to as data normalisation, in that you reduce your data to the simplest structure that fits its real character - and here, of course, each Book in the Bible is structurally the same as any other Book - just the data differs - which is why they are normally published in a traditional normalised structure called a Book - in the printing sense as opposed to the Biblical sense.
Do report back on what you actually do - I am sure that we will all be interested to see you reach a great result.