OfficeFrustration - a Microsoft Office (Excel, Word) forum


Effect of no Primary Key



 
 
  #1  
Old October 21st, 2004, 12:14 PM
Roger Spencelayh

I've inherited an Access 2000 database, and the client is having a few
problems with it. The first is that it is very slow; the second is that
data is being added multiple times by some code which imports it from
an Excel workbook.

There are two tables contributing to the problem. The first holds
information about the workbook (its PK is the workbook name), and the
second holds information read from the workbook - a maximum of 10
records per workbook.

The data is read in from code which reads the contents of a folder
using the FileSystemObject, and writes the file name plus a couple of
fields from the book into the first table. It then reads some other
data from the book, and writes the file name, an ID from 1 to 10, and
other information from the book into the second table.
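For anyone following along, the import process described above might look
something like this sketch - the table names (tblWorkbooks, tblDetails), field
names, folder path and cell positions are all assumptions, not the actual
schema:

```vba
' Hedged sketch of the described import loop. tblWorkbooks, tblDetails,
' the folder path and the cell references are illustrative guesses.
Sub ImportWorkbooks()
    Dim fso As Object, fil As Object, xl As Object, wb As Object
    Dim db As DAO.Database, i As Integer

    Set fso = CreateObject("Scripting.FileSystemObject")
    Set xl = CreateObject("Excel.Application")
    Set db = CurrentDb

    For Each fil In fso.GetFolder("C:\Imports").Files
        Set wb = xl.Workbooks.Open(fil.Path, ReadOnly:=True)

        ' One header record per workbook (PK = workbook name)
        db.Execute "INSERT INTO tblWorkbooks (WorkbookName, Info) " & _
                   "VALUES ('" & fil.Name & "', '" & _
                   wb.Worksheets(1).Range("A1").Value & "')", dbFailOnError

        ' Up to 10 detail records, ID 1 to 10
        For i = 1 To 10
            db.Execute "INSERT INTO tblDetails (FileName, ID, Info) " & _
                       "VALUES ('" & fil.Name & "', " & i & ", '" & _
                       wb.Worksheets(1).Cells(i + 1, 1).Value & "')", dbFailOnError
        Next i

        wb.Close SaveChanges:=False
    Next fil

    xl.Quit
End Sub
```

With dbFailOnError on each Execute, a unique index on the detail table would
turn a duplicate insert into a trappable error instead of a silent extra row.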

Run the code by hand in debug mode, and all is fine. Let it run
normally from a button on a form, and it may add the detail once,
twice or three times.

The crunch is that this second table has no primary key. Could that
contribute to the problem? What is the effect of not having a primary
key?

I will make FileName + ID the PK, but I'm interested in the real-world
effect of having no PK.
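For reference, adding that composite key can be done with a one-line DDL
statement (assuming the detail table is called tblDetails - the name is a
guess):

```vba
' Add FileName + ID as a composite primary key. Once it's in place,
' a duplicate insert fails with a key-violation error instead of
' silently adding another row.
Sub AddCompositeKey()
    CurrentDb.Execute _
        "ALTER TABLE tblDetails " & _
        "ADD CONSTRAINT pkDetails PRIMARY KEY (FileName, ID)", dbFailOnError
End Sub
```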

--
Roger

  #2  
Old October 21st, 2004, 01:17 PM
Lynn Trapp

The crunch is that this second table has no primary key. Could that
contribute to the problem? What is the effect of not having a primary
key?



It's hard to say for sure without knowing the actual data, but yes, it
could easily contribute to the problem. The absence of a primary key means
there is no guarantee that records can be uniquely identified by the database.

--
Lynn Trapp
MS Access MVP
www.ltcomputerdesigns.com
Access Security: www.ltcomputerdesigns.com/Security.htm


  #3  
Old October 21st, 2004, 04:38 PM
Roger Spencelayh

Lynn Trapp wrote:
It's hard to say for sure without knowing the actual data but, yes it could
easily contribute to the problem. The absence of a primary key means there
is no guarantee that the records can be uniquely identified by the database.


Thanks for that Lynn. Is there anywhere you know of on the 'net where I can get
a good overview of exactly what the PK does?

Thanks.

--
Roger

  #4  
Old October 22nd, 2004, 08:43 PM
Amanda Payton

Roger -

Lynn is right. It's hard to say without seeing the table itself.

I've had problems with multiple data entries myself before (I program for
a company whose programs sound like they work in a similar manner).

Your duplicate data is probably coming from one of two places ... The first
is that there is a malfunction in the code sequence that is importing the
data - and it's grabbing a record more than once. Is there a variable
somewhere controlling a loop that changes unexpectedly? Are you getting the
same number of multiple copies of records for EVERY record in a set? Do you
see any patterns to what records are being duplicated and which aren't?

The second is that the data record exists more than once in your
spreadsheet, and Access is just happily cutting and pasting - ignorant of the
fact that it has duplicate data in its "hands". From what you have said,
this sounds less likely. (Easy to check: either look at it directly, or - if
there are copious amounts of records - make a separate database, import the
entire workbook as a table, and run a "find duplicates" query on it. If you
have duplicate records in your source, you'll see them.)
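The "find duplicates" check Amanda describes boils down to a GROUP BY /
HAVING query. A sketch, assuming a staging table called tblImported with
FileName and ID columns (names are hypothetical):

```vba
' List every (FileName, ID) pair that appears more than once.
' tblImported and its column names are assumptions for illustration.
Sub ListDuplicates()
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset( _
        "SELECT FileName, ID, Count(*) AS Dupes " & _
        "FROM tblImported " & _
        "GROUP BY FileName, ID " & _
        "HAVING Count(*) > 1")
    Do While Not rs.EOF
        Debug.Print rs!FileName, rs!ID, rs!Dupes
        rs.MoveNext
    Loop
    rs.Close
End Sub
```

The same SQL can be pasted into a query's SQL view, or generated with the
Find Duplicates Query Wizard.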


As far as primary keys are concerned... having an index of any kind makes
referencing records easier and faster (though the database gets larger).
Their main purpose is to keep related information together, and to make sure
that when you tell Access "go find records related to XXXXXXX", it has
something definitive to look for.

Frankly - this really sounds more like a coding problem. I don't know how
familiar you are with the debug window and the watch window.

If you open up the code module and click in the far left margin, a red dot
will appear and the corresponding line will be highlighted. When you run
code - either from the debug window or from the form - execution will pause
when it reaches the line(s) you have marked, and will only continue when you
press the run button. This is useful for checking the values of variables,
and the path that Access takes through the code.

Additionally, if you use the Watch window (the other tab on the debug
window), you can have Access keep a running display of which variables have
what values... you can then see if a variable goes out of bounds, or
increments a counter one too many times, etc.

Other questions worth thinking about: if you execute the code repeatedly,
do you get more duplicates with each execution? I.e., first run you get 2
sets of duplicates, 2nd run you get 3 or 4, etc.

Feel free to holler back, and I'll see if I can help you narrow down your
problem.

Amanda


  #5  
Old October 23rd, 2004, 09:50 AM
Roger Spencelayh

Amanda Payton wrote:
Lynn is right. It's hard to say without seeing the table itself.

I've had problems with multiple data entries myself before (I program for
a company whose programs sound like they work in a similar manner).


Thanks for your reply Amanda.

Your duplicate data is probably coming from one of two places ... The first
is that there is a malfunction in the code sequence that is importing the
data - and it's grabbing a record more than once. Is there a variable
somewhere controlling a loop that changes unexpectedly? Are you getting the
same number of multiple copies of records for EVERY record in a set? Do you
see any patterns to what records are being duplicated and which aren't?


Now here is the problem. Put a breakpoint on the first line of code and step
through it, and it behaves itself, working exactly as planned. Delete the data
just imported, remove the breakpoint, run it, and the problem appears. The
strange part is that on some workbooks it imports one copy of the data, on
others 2 copies, and on one or two it imports 3 copies. If we then delete all
the imported data and run it again, we get exactly the same duplications as
the first run. When I go back in there on Tuesday, I'm going to add an index
to prevent the duplicates, add a few counter variables, turn off the error
handler and see what happens. That should prove whether it's the code or the
table causing the problem.

The second is that the data record exists more than once in your
spreadsheet, and Access is just happily cutting and pasting - ignorant of the
fact that it has duplicate data in its "hands". From what you have said,
this sounds less likely. (Easy to check: either look at it directly, or - if
there are copious amounts of records - make a separate database, import the
entire workbook as a table, and run a "find duplicates" query on it. If you
have duplicate records in your source, you'll see them.)


There are a maximum of 10 records per workbook, so that's easy to check by
looking at the workbook, and it's not the problem. And unfortunately I can't
just import the worksheet as each 'record' spans 2 rows.

As far as primary keys are concerned... having an index of any kind makes
referencing records easier and faster (though the database gets larger).
Their main purpose is to keep related information together, and to make sure
that when you tell Access "go find records related to XXXXXXX", it has
something definitive to look for.


I've inherited this system, and will be adding PKs as appropriate. I guess I
was really hoping that someone knowledgeable would say that kind of
duplication was bound to happen, or couldn't possibly happen, just because
there wasn't a PK.

Other questions worth thinking about: if you execute the code repeatedly,
do you get more duplicates with each execution? I.e., first run you get 2
sets of duplicates, 2nd run you get 3 or 4, etc.


No, because there's a code check that stops it importing a workbook more than
once.

Thanks again for your reply.

--
Roger

  #6  
Old October 28th, 2004, 11:09 PM
Amanda Payton

Roger - have you thought about writing a macro in Excel to copy the 2nd
line of each record, append it to the end of the first, and then delete
the unneeded line?

It's not something I've looked into too far myself, but Access VB can call
and trigger Excel VB and Excel macros (and any other Office product, for
that matter). Since each workbook has at most 10 records, you can record a
macro to copy and paste your data, set it to execute 10 times, then return
control to Access and let Access import the worksheet as data to append to
your table.
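A minimal sketch of that flattening macro - the sheet name, column
positions, and the choice of spare columns are all assumptions about a
workbook layout we haven't seen:

```vba
' Flatten each 2-row record onto a single row so Access can import
' the sheet directly. Assumes 10 records starting at row 1, each
' spanning rows (2r-1, 2r), with the second row's values in columns
' A and B; they are moved to spare columns D and E of the first row.
Sub FlattenRecords()
    Dim ws As Worksheet, r As Integer
    Set ws = Worksheets("Data")   ' sheet name is a guess

    For r = 1 To 10
        ws.Cells(2 * r - 1, 4).Value = ws.Cells(2 * r, 1).Value
        ws.Cells(2 * r - 1, 5).Value = ws.Cells(2 * r, 2).Value
        ws.Rows(2 * r).ClearContents
    Next r
End Sub
```

The now-blank even rows could be deleted afterwards (bottom-up, so row
numbers don't shift mid-loop), or simply skipped by the import.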

Just a thought... Best of luck!

Amanda



  #7  
Old October 29th, 2004, 08:06 PM
Roger Spencelayh

Amanda Payton wrote:
Roger - have you thought about writing a macro in Excel to copy the 2nd
line of each record, append it to the end of the first, and then delete
the unneeded line?


A possibility, but I decided to take the easy way and write the import
process out to a log file. Only one set of data is written to the table.

I'm convinced it's a server problem corrupting the tables. When it takes 27
minutes to make a copy of the backend mdb file on the server, 9 minutes to
download it to a local PC, and 2.5 minutes to make a copy on the local PC, I
get worried that the server is overloaded, under-spec'd or just plain crap.

--
Roger

 




