#1
Q: Access DE via network
Hi,
We're using Access 2000 here on a network for a data entry screen, and we'll soon be adding users on another network. Right now, the users for the mdb are all on the same network, so it's a nice fat 100baseT connection. The new users are on a T-1, so accessing the database is much slower.

I know I can split the database into a data back end and a forms front end, and give each user a local copy of the forms. Is there anything else I can do to speed things up? Basically, all I want transferred between servers is data, with no overhead for refreshing forms, etc.

Thanks!
-Mark
#2
Access DE via network
"MarkD" wrote in message ...

> Right now, the users for the mdb are all on the same network, so it's
> a nice fat 100baseT connection. The new users are on a T-1, so
> accessing the database is much slower.

I think you can do the above math! 100baseT to T-1 is roughly 100 times. So, something that takes 8 seconds will now take about 800 (that is 13 minutes!). If a report takes 30 seconds, it will now take about 3000 seconds (that is 50 minutes!). The math is very easy to do.

> I know I can split the database into a data back end and a forms front
> end, and give each user a local copy of the forms.

You need to do that with any multi-user Access application (you REALLY must avoid having multiple users in the front end). Further, you should be distributing an mde to each user. This is an issue of stability.

> Is there anything else I can do to speed things up? Basically, all I
> want transferred between servers is data, with no overhead for
> refreshing forms, etc.

Really, we are dealing with a very simple grade-school math problem: a T-1 is roughly 100 times slower than 100baseT. You can read up on the solutions for using a WAN here:
http://www.attcanada.net/~kallal.msn/Wan/Wans.html

--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
http://www.attcanada.net/~kallal.msn
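Albert's scaling estimate can be sanity-checked with a few lines. This is a rough sketch only: it assumes transfer time scales linearly with link bandwidth and ignores latency. Using the nominal line rates (100 Mbps for 100baseT, 1.544 Mbps for a T-1), the ratio actually works out closer to 65x than 100x, though the conclusion is the same.

```python
# Rough bandwidth-scaling model for the thread's arithmetic.
# Assumption: time scales linearly with link speed; latency is ignored,
# which in practice makes the slow link look even worse than this.

FAST_MBPS = 100.0   # 100baseT LAN
SLOW_MBPS = 1.544   # T-1 line

def scaled_seconds(lan_seconds, fast=FAST_MBPS, slow=SLOW_MBPS):
    """Time the same transfer would take on the slower link."""
    return lan_seconds * (fast / slow)

# An 8-second operation on the LAN, in minutes on a T-1:
print(round(scaled_seconds(8) / 60, 1))
# A 30-second report, in minutes on a T-1:
print(round(scaled_seconds(30) / 60, 1))
```

Either way, operations that were tolerable on the LAN become many minutes long on the T-1, which is the point Albert is making.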
#3
Access DE via network
Hi Albert,
Yes, I understand the difference between 100baseT and T-1; I learned the difference between 1 and 100 fairly early on in getting my math degree. I was just stressing that since we will have end users on different networks, what once worked now will not. I also know that splitting the database removes some of the overhead you have when dealing with only one database. What I really wanted to know is whether splitting the database removes all overhead outside of actual data transfer and, if not, whether there are any other ways to lessen the bandwidth used.
-Mark
#4
Access DE via network
> splitting the database removes all overhead outside of
> actual data transfer

No, it does not: it does not remove locking signalling and index transfers. But it is probably the data transfers which will now make your app unusable.

(david)
#5
Access DE via network
"MarkD" wrote in message ...

> What I really wanted to know is whether splitting the database removes
> all overhead outside of actual data transfer and, if not, whether
> there are any other ways to lessen the bandwidth used.

Splitting does reduce the overhead quite well. However, my comments were actually written in terms of a split database (as a general rule I simply assume this is the case), so the issue of splitting changes none of what I had to say. Splitting helps, but it does very little to fix the problem of such a large drop in bandwidth.

The main enemy here is that the Windows file system comes into play and must also travel across the network. If you have ever tried to simply browse directories across a VPN on a dial-up line, you will see what I mean (it is VERY slow). In fact, try browsing directories across the T-1 line to the server: it will be VERY sluggish, and you are not even transferring data, just browsing directories.

So, to get any usable speed here, you will have to consider a thin client, or an ms-access application optimized for SQL Server. I wish there were some other solution, but down in the T-1 range it simply requires a different technology than what a file share offers.

--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
http://www.attcanada.net/~kallal.msn
#6
Access DE via network
Thanks Albert,
We use SQL Navigator here against an Oracle database, and it seems that fetching rows that way is a lot faster than from a non-split database. Having a split database speeds things up significantly, to a degree that my manager is happy with, but I was wondering if there were any other ways I could squeeze out a few more percent in speed. And yeah, I know how it is browsing directories across a T-1. I've gotta do that every day and it's soooooo slooooooooow. Oh well, I can't change the network infrastructure.
Thanks again for all your help,
-Mark
#7
Access DE via network
"MarkD" wrote in message ...

> Having a split database seems to speed things up significantly to a
> degree that my manager is happy with, but I was wondering if there
> were any other ways I could squeeze a couple more % increase in speed.

Why not move the Access back end tables to Oracle? You can keep the ms-access front end (most stuff will work) and place the data on the Oracle database server. This approach most certainly can work on a T-1 line.

--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
http://www.attcanada.net/~kallal.msn
#8
Access DE via network
That's a perfectly logical and reasonable thing to do, but my company doesn't like logical and reasonable. The Oracle tables are for reporting, not transaction processing; I don't have update/append privileges. In any case, whether the back end is Oracle or Access shouldn't affect performance much, should it? If anything, I'd think an Access front end to an Access back end would be faster (our recordsets are 10,000 rows max, so Oracle isn't going to speed things up much).
Thanks,
-Mark
#9
Access DE via network
"MarkD" wrote in message ...

> In any case, whether the back end is Oracle or Access shouldn't
> affect performance much, should it?

Yes, in fact, this will affect performance VERY much. In my article, one of the possible solutions is to move the back end data from a simple file to a true database server (that server can be the free desktop SQL Server engine included on the Office CD for use with ms-access, or it can be SQL Server, or in your case Oracle, which you already have). Using a server based system can dramatically affect performance; it is a different architecture.

Right now you have:

ms-access/JET engine ---- T-1 line ---- mdb file on server

To retrieve a record, JET has to:

- process the sql command (no T-1 traffic)
- open the table (this occurs across the T-1 line, and is slow)
- read/load the index (this occurs across the T-1 line, and is slow; of course, very little information from the index is read from the mdb file on the server, but it is still slow)
- use the index information to "resolve" the location of the record on the disk drive in the mdb file
- read the record from the mdb file

Note that ANY kind of read on the mdb file has to go through the T-1 line. Virtually 100% of the processing of ALL DATA occurs ON THE CLIENT pc. ZERO % of the data processing occurs on the server side (in fact, you never did need to install ms-access or the JET engine on the server side, did you?).

Now, contrast the above to a true client-to-server setup:

ms-access/JET engine ---- T-1 line ---- Oracle database server

To retrieve a record:

- JET loads and processes the sql command (no T-1 traffic)
- JET sends the sql to Oracle (this request occurs across the T-1 line); at this point, ms-access is waiting
- Oracle (or any database server) loads and reads the index (no T-1 activity occurs)
- Oracle scans and reads as much of the file data as it needs to grab that one record (no T-1 activity occurs)
- Oracle finishes loading the record, closes the table, and THEN transmits the ONE record to ms-access/JET

You can see that the searching, indexing, and processing of data occurs on the server side. In both cases, one record was sent down the wire, but as you note, browsing and using the file system across a T-1 line is very slow. With a true database server, the program that reads the hard disk and works with the data runs on the server side (a high-speed machine; all disk reads occur on the server/Oracle side). Thus, with a true database server, ALL of the searching and processing usually occurs on the server side.

Further, when you start involving sql joins and relational data, the server has much more freedom to read data, create temp tables, grab huge amounts of memory and disk space, and have at the problem until the final records are processed, loaded, and set up ready to be transmitted to the awaiting client (in our case ms-access).

Further, if you make a query on a field that is NOT indexed, then with a file share the whole table must be sent down the wire to the client computer if the index cannot be used (or no index is present, or it would be faster NOT to use an index). With a true database server, even when an index can't be used, the disk drive and memory of the server might get a real serious workout as it screams away at the disk drive to get the data, but NO T-1 activity occurs during this dance. That server can read and gobble data in any way, shape, or form, because it doesn't need a network to read the data on ITS OWN disk drive. Finally, once the record is found, that record is ALL that is transmitted down the wire.

I mean, when you order a book on Amazon.com, a lot of disk drive activity occurs, but it doesn't occur on YOUR pc. With a file share, virtually 100% of the processing MUST occur on your local pc, and often that will result in additional traffic as the database engine reads information from the file.

> If anything, I'd think an Access front end to an Access back end would
> be faster (our recordsets are 10,000 max, so Oracle isn't going to
> speed things up much).

If you eliminate the network altogether, then your assumption is true. In fact, when no network is involved, the JET engine (the one that Access uses) is likely to be FASTER than Oracle. However, as soon as you have a network involved, the true database server wins, as it is free to read and munch and crunch the data to get the desired result, and THEN transmits that result to the waiting client. The munching and crunching part does not involve network (T-1) activity, and it is this reduction in overhead that lets a web based system (which is also a server) or a server based database system function at such low bandwidth. I mean, a T-1 line is PLENTY for a web browser, and after all, most data you see on the web is driven by server based database systems (but all of that activity is hidden from your client, in this case the web browser).

So, in fact, the performance of using Oracle will be MUCH better than JET against a simple file shared on the server.

--
Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
http://www.attcanada.net/~kallal.msn
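The two architectures above can be contrasted with a toy traffic model. All of the sizes here are illustrative assumptions (not measurements); the table size is the 10,000 rows Mark mentioned. With a file share and no usable index, every page of the table crosses the link so the client can scan it; with a database server, only the query text goes up and the matching row comes back.

```python
# Toy model of wire traffic for the two architectures described above.
# All byte sizes are illustrative assumptions, not measured values.

ROW_BYTES = 200          # assumed average record size
ROWS = 10_000            # Mark's stated table size

def file_share_unindexed_bytes(rows=ROWS, row_bytes=ROW_BYTES):
    """File share, no usable index: JET pulls the whole table across
    the T-1 line so the scan can happen on the client pc."""
    return rows * row_bytes

def db_server_bytes(query_bytes=100, matches=1, row_bytes=ROW_BYTES):
    """Database server: only the SQL text travels up and the matching
    row(s) travel back; the scan happens on the server's own disk."""
    return query_bytes + matches * row_bytes

print(file_share_unindexed_bytes())   # whole table down the wire
print(db_server_bytes())              # query text plus one record
```

Under these assumptions the file-share scan moves a few thousand times more data than the server round trip, which is why the same T-1 line that is hopeless for a shared mdb is perfectly adequate for a client/server setup.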
#10
Access DE via network
Try to persuade your boss to make an Oracle user (schema) for your app, or a terminal server for your WAN users...

Pieter