  #19  
Old August 17th, 2007, 10:24 AM posted to microsoft.public.access.tablesdbdesign
Jamie Collins
Default How do you mulitply in a field?

On 17 Aug, 05:26, wrote:
> Normalization is the level of efficiency your data model stores data
> that is INPUT. A View is an OUTPUT of the normalized data and has no
> business being part of the data storage model ... even if the data
> that is OUTPUT to the View is stored in your db


Normalization only applies to INPUT? Base tables = INPUT, virtual
tables = OUTPUT?

INSERT INTO MyView...
UPDATE MyView SET ...
DELETE FROM MyView WHERE...

Where did you get such notions?!

> If your data model is dependent upon OUTPUT


My physical implementation in SQL is dependent on my logical model,
not the other way around. My logical model does not directly consider
INPUT and OUTPUT; those are concerns for the front end.

My SQL implementation sometimes relies on VIEWs. Example: one
application requires a time-valid state (a history table with periods)
while a second application is only interested in the current state. I
therefore create a VIEW (... WHERE end_date IS NULL), 'hide' the base
table while 'exposing' the VIEW via privileges, and use INSTEAD OF
triggers and WITH CHECK OPTION to manage updates to the VIEW. Though
the users of the second application may discover it's a VIEW rather
than a base table, they should not care, because logically a table is
a table is a table.

> (ie a Make Table query
> in Access), that block of data is NOT part of the data storage model
> for your application.


I don't get your 'Make Table query' example. IMO no application should
be creating permanent tables on the fly. Though Access/Jet SQL does
not support the CREATE TEMPORARY TABLE or similar syntax (unless the
Help is to be believed, and I suggest it isn't:
http://office.microsoft.com/en-gb/ac...314411033.aspx), I can
see the temptation; however, I'm told that dropping and re-creating
tables causes file bloat.
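For comparison, here is what the syntax does in an engine that genuinely supports it (SQLite, driven from Python — an illustration, not a Jet workaround): the table is private to the connection and is dropped automatically when the connection closes, so there is no permanent object to drop and re-create, and hence no file bloat from repeated DDL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Private to this connection; vanishes when the connection closes.
conn.execute("CREATE TEMPORARY TABLE scratch (n INTEGER)")
conn.executemany("INSERT INTO scratch VALUES (?)", [(1,), (2,), (3,)])

total = conn.execute("SELECT SUM(n) FROM scratch").fetchone()[0]
print(total)  # 6
```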

I have tables in my SQL implementation that are not part of my logical
model. Example: permanent load tables that I can populate, operate on
using SQL to populate other tables, then clear down; the most frequent
use is to normalize 'non-relational' data (e.g. csv), so such tables
are denormalized out of necessity. I say that tables that are part of
the logical model should be normalized (unless there's a good reason
to denormalize, always ensuring data integrity).

I have permanent auxiliary tables that are denormalized. Example: a
Calendar table with columns for date (instant), year, month, day,
week, is_weekday, etc. These tables help me and the users write
simple, easy-to-maintain queries. There are always costs when
denormalizing, but here the benefits outweigh them.
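A minimal Calendar table along those lines (SQLite via Python; a subset of the columns mentioned above, populated for a single month to keep the sketch short):

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE calendar (
    dt         TEXT PRIMARY KEY,  -- ISO date (the instant)
    yr         INTEGER,           -- denormalized: derivable from dt
    mnth       INTEGER,           -- denormalized: derivable from dt
    is_weekday INTEGER            -- 1 = Mon-Fri, 0 = Sat/Sun
)""")

d, end = date(2007, 8, 1), date(2007, 8, 31)
while d <= end:
    conn.execute("INSERT INTO calendar VALUES (?, ?, ?, ?)",
                 (d.isoformat(), d.year, d.month,
                  1 if d.weekday() < 5 else 0))
    d += timedelta(days=1)

# The denormalized columns make ad hoc queries trivial to write:
weekdays = conn.execute(
    "SELECT COUNT(*) FROM calendar "
    "WHERE yr = 2007 AND mnth = 8 AND is_weekday = 1"
).fetchone()[0]
print(weekdays)  # 23
```

Every column except dt is derivable, which is exactly the redundancy being paid for; the benefit is that users never have to re-implement date arithmetic in their queries.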

I have permanent base tables that are denormalized to work around a
physical limitation in the SQL DBMS. Example: I must handle a real-life
enterprise key, for which the trusted source is a government
agency, comprising one thousand ASCII characters (the civil servant
who designed it has since offered me a personal apology <g>). The SQL
DBMSs I use employ indexes to implement unique constraints, and
there's a limit on the number of bytes an index value can comprise, so
I have to store a hash of the real key value so that my integrity
constraint (CHECK, trigger, etc) used to ensure uniqueness can use the
non-unique index on the hash for performance reasons. The hash does
not exist in the logical model, so I figure I haven't denormalized
that. In the physical implementation I need the performance *and* I
need the uniqueness, and all is fine by me because the hash is
constrained to be a true hash of the actual value, i.e. it can't go
out of sync. You can consider my table to be denormalized but my
conscience is clear <g>.
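A sketch of the hash-column workaround (SQLite via Python; SHA-256 stands in for whatever hash you'd use, and a Python-side check stands in for the CHECK/trigger constraint — the point is that uniqueness is probed through the short indexed hash, then confirmed against the full key):

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE agency_record (
    long_key TEXT NOT NULL,  -- the 1000-character enterprise key
    key_hash TEXT NOT NULL   -- hash of long_key, stored alongside it
);
-- Non-unique index on the short hash: this is where the performance
-- comes from; the long key itself is never indexed.
CREATE INDEX ix_agency_hash ON agency_record (key_hash);
""")

def digest(key: str) -> str:
    # The hash is always computed from the actual value, so the two
    # columns cannot go out of sync.
    return hashlib.sha256(key.encode("ascii")).hexdigest()

def insert_record(key: str) -> None:
    # Uniqueness check: probe via the indexed hash, then compare the
    # full key values for any rows that collide on the hash.
    for (existing,) in conn.execute(
            "SELECT long_key FROM agency_record WHERE key_hash = ?",
            (digest(key),)):
        if existing == key:
            raise ValueError("duplicate enterprise key")
    conn.execute("INSERT INTO agency_record VALUES (?, ?)",
                 (key, digest(key)))

insert_record("A" * 1000)
try:
    insert_record("A" * 1000)
    dup_rejected = False
except ValueError:
    dup_rejected = True
remaining = conn.execute("SELECT COUNT(*) FROM agency_record").fetchone()[0]
print(dup_rejected, remaining)  # True 1
```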

> I would stick my neck out and
> claim that you are not normal.


In my country (UK), it is the custom to ROFL when a devastatingly good
pun is made.

Jamie.

--