  #9  
Old May 22nd, 2010, 01:41 AM posted to microsoft.public.access.gettingstarted
Alia
Default this might be about linking tables

hello Ken,

I did some reading through the Wiki entry on normalization and it helped
clarify things for me, so I've taken a stab at reorganizing the db I was
working on based on that and your comments.

I have separate tables for Location, Sample, and AssayResult. The 'region'
column refers to the named area a location is found in, and I think isn't
really necessary for the Access side of things. (I'm not sure it's even
relevant in MapInfo, because the points all have UTM coordinates that could
be pulled out with a query that would pick out point locations inside
specified polygon regions. So I may just remove that information
altogether.)

The 'append' process was the part I was especially wondering about, as far
as getting the data into the db, so thank you for that as well.

"KenSheridan via AccessMonster.com" wrote:

[snip]
Assuming a location might have more than one sample taken, a table of the
structure you cite:

location ID | northing | easting | region name | sample number |

contains redundancies because for each sample per location we are told the
easting, northing and region of the location. This allows for
inconsistencies, as there is nothing to prevent different values of one or
more of these attributes being entered in separate rows for the same location.
Separate Locations, Regions and Samples tables are needed, e.g.


I checked into this and I've been told that there will only ever be one
sample per location, because if the geos are going to take a new rock sample
they will take a new GPS reading for the new location. Does that change any
of your suggestions?
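
In case it helps, here is roughly how I've sketched those first few tables,
with my best guess at the data types (so please correct me if the structure
is off):

CREATE TABLE Regions
(RegionName TEXT(50) CONSTRAINT PK_Regions PRIMARY KEY);

CREATE TABLE Locations
(LocationID TEXT(20) CONSTRAINT PK_Locations PRIMARY KEY,
Northing DOUBLE,
Easting DOUBLE,
RegionName TEXT(50) REFERENCES Regions (RegionName));

CREATE TABLE Samples
(SampleNumber TEXT(20) CONSTRAINT PK_Samples PRIMARY KEY,
LocationID TEXT(20) REFERENCES Locations (LocationID));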

[snip]
Turning to the assay results a suitable table for this would be:

AssayResults
....SampleNumber
....ResultCategory
....Result

The primary key of this table would be a composite one made up of the two
columns SampleNumber and ResultCategory, each of which is a foreign key
column, the former referencing the primary key of Samples, the latter the
primary key of a ResultCategories table:

ResultCategories
....ResultCategory

This table would have one row for each type of assay result, so the values
might be Au_ppm, Pt_ppm etc.
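
Expressed as DDL the two tables might look something like this (the data
types here are just for illustration):

CREATE TABLE ResultCategories
(ResultCategory TEXT(20) CONSTRAINT PK_ResultCategories PRIMARY KEY);

CREATE TABLE AssayResults
(SampleNumber TEXT(20) REFERENCES Samples (SampleNumber),
ResultCategory TEXT(20) REFERENCES ResultCategories (ResultCategory),
Result DOUBLE,
CONSTRAINT PK_AssayResults PRIMARY KEY (SampleNumber, ResultCategory));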

When it comes to importing the results data from the .csv file you'd link to
the file and use a set of 'append' queries to insert rows into AssayResults,
with a separate query per result category, so for gold you'd use:

INSERT INTO AssayResults
(SampleNumber, ResultCategory, Result)
SELECT [sample number], "Au_ppm", Au_ppm
FROM [TheLinkedCSVFile]
WHERE Au_ppm IS NOT NULL;

The "Au_ppm" in quotes is a constant which inserts the text value 'Au_ppm'
into the ResultCategory column, the Au_ppm without quotes is the column in
the linked file which contains the result for gold ppm. You might have a
similar append query for platinum for instance:

INSERT INTO AssayResults
(SampleNumber, ResultCategory, Result)
SELECT [sample number], "Pt_ppm", Pt_ppm
FROM [TheLinkedCSVFile]
WHERE Pt_ppm IS NOT NULL;


If I wanted to bring in all the result columns for a particular assay, would
I have to set up an individual append query like the one above for each
column, like the following:

INSERT INTO AssayResults
(SampleNumber, ResultCategory, Result)
SELECT [sample number], "Column_name", Column_name
FROM [TheLinkedCSVFile]
WHERE Column_name IS NOT NULL;

or is there a way to bring in all the assay data for each sample?
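
For instance, would collapsing them into a single append query with UNION ALL
work, or is that a bad idea? Something like this (just a guess on my part,
untested):

INSERT INTO AssayResults
(SampleNumber, ResultCategory, Result)
SELECT [sample number], "Au_ppm", Au_ppm
FROM [TheLinkedCSVFile]
WHERE Au_ppm IS NOT NULL
UNION ALL
SELECT [sample number], "Pt_ppm", Pt_ppm
FROM [TheLinkedCSVFile]
WHERE Pt_ppm IS NOT NULL;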

Whenever you receive a .csv file with new assay results it's simply a case of
linking to the .csv file and executing the set of append queries, which can
easily be automated so that they can all be run at a single click of a button
on a form. Even if the same queries were accidentally executed more than
once for the same samples no harm would be done, as the violation of the
composite primary key of AssayResults would prevent the same row being
inserted more than once.

When it comes to making the data available to MapInfo I have no experience of
that particular product. In my own work with environmental data of a broadly
similar structure to yours we used ArcInfo as the GIS. However, given a set
of correctly normalized tables as outlined above it should be a simple task to
create a query to return the data in a format compatible with MapInfo's
requirements.


So it would be a matter of creating a query that in effect recombines all
the tables I broke out into one result set, and then
exporting that result into a .csv (MapInfo has no trouble with .csv or .xls
or any of that sort of data, which is nice).
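
I'm picturing something like a crosstab query to get one row per sample, with
its coordinates and each assay value in its own column (again, just my guess
at how it might look):

TRANSFORM First(A.Result)
SELECT L.LocationID, L.Northing, L.Easting, S.SampleNumber
FROM (Locations AS L
INNER JOIN Samples AS S ON S.LocationID = L.LocationID)
INNER JOIN AssayResults AS A ON A.SampleNumber = S.SampleNumber
GROUP BY L.LocationID, L.Northing, L.Easting, S.SampleNumber
PIVOT A.ResultCategory;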

I have a couple of screenshots of how I've set things up so far; if they
would be more helpful in seeing what I'm working with, I can post the links.