Say, hypothetically, I have a table with 1 million entries. Fairly basic: a few integer columns, a few char (fixed-length) columns, and some timestamp columns.
If one of the columns is an "id" column, and there are about 200,000 unique ids in this table (so each id has about 5 entries)...
how slow will it be if I do a query like
SELECT * FROM bigtable WHERE id = 123542;
Will the database's optimizer find that easy, since it'll only be picking out 5 rows and matching them via integers, or what?
Would it be significantly faster to split the table up into, say, 20 tables (one per "Region") and go to the extra effort of writing the code to read across tables?
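In case it helps frame the question, here's a minimal sketch of the scenario using Python's sqlite3 (purely as a stand-in for whatever database you're on; the table shape, column names, and index name are all made up). It builds a table with 1M rows and ~200,000 distinct ids, then shows that after CREATE INDEX the planner does an index search instead of a full scan:

```python
import sqlite3

# Hypothetical reproduction: 1M rows, ~200,000 distinct ids (~5 rows per id).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE bigtable (id INTEGER, region INTEGER, ts TEXT)")
cur.executemany(
    "INSERT INTO bigtable VALUES (?, ?, ?)",
    ((i % 200_000, i % 20, "2024-01-01 00:00:00") for i in range(1_000_000)),
)
conn.commit()

# Without an index, this query has to scan all 1M rows to find ~5 matches.
rows = cur.execute("SELECT * FROM bigtable WHERE id = 123542").fetchall()

# With an index on id, the planner jumps straight to the matching rows.
cur.execute("CREATE INDEX idx_bigtable_id ON bigtable (id)")
rows_indexed = cur.execute("SELECT * FROM bigtable WHERE id = 123542").fetchall()

# The query plan confirms the index is used for the lookup.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM bigtable WHERE id = 123542"
).fetchall()

print(len(rows), len(rows_indexed))  # 5 5
print(plan)
```

The point of the sketch: the index turns the lookup from "touch every row" into "walk a B-tree to ~5 rows", which is the same effect the 20-table split is trying to approximate, without the cross-table plumbing.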
thanks in advance
David