I’m wondering what would be the best database strategy to apply to a spectral analysis project that I’m pursuing.
For a number of fluorochromes (around 50) I have measurement data over a range of wavelengths, corresponding to their fluorescence intensity. Wavelengths run from 300 to 900 (nm), in increments of 1 (integer). Fluorescence intensities run from 0 to 1 (real).
I have the same kind of measurement values for a number of optical filters (also around 50) that we use for measuring fluorescence intensities: over the same wavelength range (300 to 900 nm), I have values for filter transmission, running from 0 to 1.
I’d like to be able to multiply the fluorescence intensity by the filter transmission at each wavelength, and sum over the entire wavelength range. In other words, multiply one column of data by another column line by line, then total the products.
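To make the calculation concrete: what I’m after is essentially a dot product of two curves. A minimal sketch in Python/NumPy (with random made-up curves standing in for real measurement data):

```python
import numpy as np

# Wavelength grid: 300-900 nm in 1 nm steps (601 points).
wavelengths = np.arange(300, 901)

# Made-up example data: one fluorochrome emission spectrum and one
# filter transmission curve, both defined on the same grid, values in [0, 1].
rng = np.random.default_rng(0)
fluorochrome = rng.random(wavelengths.size)
filter_trans = rng.random(wavelengths.size)

# Element-wise product at each wavelength, summed over the whole range:
transmitted = np.sum(fluorochrome * filter_trans)

# Equivalently, a dot product of the two columns:
assert np.isclose(transmitted, fluorochrome @ filter_trans)
```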
As an extra, it would be nice to be able to do some graphing of curves (wavelength versus fluorescence intensity), but that could also be achieved after exporting to more appropriate graphing software.
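For the graphing part, I imagine something like Python’s matplotlib would do; a sketch with a made-up Gaussian emission curve standing in for real data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; use plt.show() interactively
import matplotlib.pyplot as plt

wavelengths = np.arange(300, 901)
# Hypothetical smooth emission-like curve peaking near 520 nm:
intensity = np.exp(-((wavelengths - 520) / 40.0) ** 2)

fig, ax = plt.subplots()
ax.plot(wavelengths, intensity, label="example fluorochrome")
ax.set_xlabel("wavelength (nm)")
ax.set_ylabel("fluorescence intensity")
ax.legend()
```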
For the moment, I’m using an Excel spreadsheet that contains all the data, but I’m not happy with its performance. The user interaction in particular is tedious: choosing which filters and which fluorochromes to add to the equation is cumbersome.
Would someone have a suggestion for how to wrap this kind of problem in a database structure that can deal with it efficiently? Should I use repeating fields in a simple table that I could “self-join” based on a user-selected filter value? I’m not sure whether that would be a smart approach …
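To make my self-join idea a bit more concrete, here’s a rough sketch of what I have in mind (Python with an in-memory SQLite database; table and column names are all made up): one long, narrow table keyed by curve and wavelength, so the multiply-and-sum becomes a self-join on wavelength.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE curve (
        curve_id INTEGER PRIMARY KEY,
        name     TEXT NOT NULL,
        kind     TEXT NOT NULL CHECK (kind IN ('fluorochrome', 'filter'))
    );
    CREATE TABLE spectrum (
        curve_id   INTEGER NOT NULL REFERENCES curve(curve_id),
        wavelength INTEGER NOT NULL,   -- 300..900 nm
        value      REAL NOT NULL,      -- intensity or transmission, 0..1
        PRIMARY KEY (curve_id, wavelength)
    );
""")

# Tiny made-up example data on a three-point grid:
con.execute("INSERT INTO curve VALUES (1, 'FITC',  'fluorochrome')")
con.execute("INSERT INTO curve VALUES (2, 'BP530', 'filter')")
con.executemany("INSERT INTO spectrum VALUES (?, ?, ?)",
                [(1, 500, 0.2), (1, 501, 0.5), (1, 502, 0.3),
                 (2, 500, 0.9), (2, 501, 0.8), (2, 502, 0.1)])

# Self-join the spectrum table on wavelength and sum the products
# for a user-selected (fluorochrome, filter) pair:
(total,) = con.execute("""
    SELECT SUM(f.value * t.value)
    FROM spectrum AS f
    JOIN spectrum AS t USING (wavelength)
    WHERE f.curve_id = 1 AND t.curve_id = 2
""").fetchone()
# total ≈ 0.2*0.9 + 0.5*0.8 + 0.3*0.1 = 0.61
```

The appeal of the narrow layout over one-column-per-curve is that adding a fluorochrome or filter is just inserting rows, and any pairing can be computed with the same query — but I don’t know whether this is the smart way to structure it.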
Perhaps my explanation needs more detail. If so, excuse me for not being clear, and please let me know.