For searching you could use a calculation field that strips off the diacritical marks using the Substitute function.
In the Options... Storage settings for the relevant fields, set the "Default language:" to English. This affects only the indexing; the fields themselves are still stored as Unicode.
I tested this with a couple of accent marks and it appears to create the behavior you need.
The search for "Hello" brings up records with both "Hello" and "Héllo" (the latter having an accent over the e).
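Outside of FileMaker, the same accent-insensitive idea can be sketched in Python using Unicode decomposition (this is an illustration of the general technique, not how FileMaker's indexing works internally):

```python
import unicodedata

def strip_marks(text):
    # Decompose characters (NFD) so accents become separate combining
    # marks, then drop the combining marks: "Héllo" -> "Hello".
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_marks("Héllo"))  # Hello
```

Note that this only catches true accents. The ʻokina discussed below is classified as a modifier letter, not a combining mark, so it survives this kind of stripping, which is consistent with the behavior described in the next post.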
I've tried this and I still have problems: for whatever reason the ‘ mark (it looks like a backwards apostrophe) is treated more like a special character than like an accent mark.
I'm thinking it'd be ideal if I could make a script that tells FM to ignore or skip over the ‘ character, but I don't know of a command like that. The closest thing I know of is the Filter function... but I feel like there's a better way.
As Martin recommended, the only way to properly ignore it is to strip it out using Substitute(). Either create a searchable value in a new calculated field or replace the value in the original field; a new calculated field is most likely the better choice.
Filter() can be a good choice too if you want to restrict the field to a specific alphabet, since any other characters, now and in the future, will then be handled automatically.
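The difference between the two approaches can be sketched in Python (a hypothetical whitelist, not FileMaker's actual Filter() implementation): Substitute() removes characters you name, while Filter() keeps only characters you allow.

```python
# A sample whitelist, analogous to the second argument of FileMaker's Filter().
ALPHABET = set("abcdefghijklmnopqrstuvwxyz"
               "ABCDEFGHIJKLMNOPQRSTUVWXYZ ")

def filter_to_alphabet(text, keep=ALPHABET):
    # Keep only characters in the allowed set, so anything else --
    # including the ʻokina -- is dropped without naming it explicitly.
    return "".join(ch for ch in text if ch in keep)

print(filter_to_alphabet("Oʻahu"))  # Oahu
```

The whitelist approach is future-proof against characters you haven't seen yet, at the cost of having to enumerate every character you do want to keep.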
In order to have a user's search for Oʻahu or Oahu come up with the same result, you would need to create a calculated field that has both. The simplest may be to append the calculated text to the original in a new field.
MyText & "¶" &
Substitute(MyText; "ʻ"; "")
If you discover more characters you want to eliminate when searching, add them to the Substitute() as a series of pairs using the [searchString; replaceString] notation.
MyText & "¶" &
Substitute(MyText; ["ʻ"; ""]; ["another letter"; ""])
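The calculation above can be sketched in Python to show what the new field would contain (using "\n" in place of FileMaker's ¶ return character):

```python
def searchable(text, to_strip=("ʻ",)):
    # Mirror the FileMaker calculation: the original text, a return,
    # then a copy with the listed characters substituted away, so a
    # search matches either spelling.
    stripped = text
    for ch in to_strip:
        stripped = stripped.replace(ch, "")
    return text + "\n" + stripped

print(searchable("Oʻahu"))
```

A search within this combined value matches whether the user types "Oʻahu" or "Oahu", since both spellings are present in the field.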