Monday, 15 August 2011

database design - About SQL and complexity -




I have created an SQL table with a field x and a field date. I want to compute the sum of x over all rows where date < d. I see two possibilities:

1. Create a new field, "sum of x", which for each row holds the sum of x over all rows whose date is earlier than that row's date.
2. Compute the sum of x "manually", with an SQL query, each time it is needed.

(Both options are sketched just below.)
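To make the two options concrete, here is a minimal sketch. The table and column names (measurements, measurements_cached, x, dt, sum_x_before) are hypothetical, since the actual table definition is not shown in the question.

-- Option 1: keep a precomputed running total alongside each row.
-- sum_x_before must be kept in sync on every insert (and update/delete).
CREATE TABLE measurements_cached (
    id           INTEGER PRIMARY KEY,
    x            NUMERIC NOT NULL,
    dt           DATE    NOT NULL,
    sum_x_before NUMERIC NOT NULL  -- SUM(x) over all rows with an earlier dt
);

-- Option 2: store only the raw data and compute the sum on demand.
CREATE TABLE measurements (
    id INTEGER PRIMARY KEY,
    x  NUMERIC NOT NULL,
    dt DATE    NOT NULL
);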

Is one method superior? Or, if not, I assume it depends on the table size. What approximate size equalizes the two methods?

Thank you very much.

Is one method superior?

No. In general, stick with the simplest solution until you find that it no longer works. The simplest solution in this case is to calculate the derived value each time you need it.
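For example, using the hypothetical measurements table sketched above, the on-demand computation is a single query (:d is a placeholder for the cutoff date):

-- Sum of x over all rows dated strictly before the cutoff :d.
SELECT COALESCE(SUM(x), 0) AS sum_x
FROM measurements
WHERE dt < :d;

-- The per-row variant ("sum of x over all rows with an earlier date")
-- can be written as a correlated subquery:
SELECT m.id, m.x, m.dt,
       (SELECT COALESCE(SUM(p.x), 0)
        FROM measurements p
        WHERE p.dt < m.dt) AS sum_x_before
FROM measurements m;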

Or, if not, I assume it depends on the table size.

Yes. And on the number of insertions vs. the number of reads. And on whether the dates arrive in monotonically increasing order or not. And on the space (memory/disk) vs. time (processing power) tradeoff of each scheme and your requirements. And on a number of other things too.

What approximate size equalizes the two methods?

How long is a piece of string? There are too many other variables to answer that.

To repeat: stick with the simple approach until you are forced to do otherwise. Maintaining cached values introduces complexity and corner cases: what are the transactional boundaries? What happens when you add a new row? Presumably a trigger or a subsequent transaction has to observe it and update all the affected rows. And what happens if you roll back the original transaction?
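To illustrate the kind of extra machinery involved, here is a minimal trigger sketch in PostgreSQL-flavoured SQL for the hypothetical measurements_cached table above. It only covers inserts and deliberately ignores updates, deletes, and concurrent writers, each of which would need yet more logic.

-- BEFORE INSERT trigger that maintains the cached running totals.
CREATE OR REPLACE FUNCTION maintain_sum_x_before() RETURNS trigger AS $$
BEGIN
    -- Fill in the cache for the incoming row from the rows already present...
    NEW.sum_x_before := COALESCE(
        (SELECT SUM(x) FROM measurements_cached WHERE dt < NEW.dt), 0);

    -- ...and patch every later-dated row whose cached total just became stale.
    UPDATE measurements_cached
    SET    sum_x_before = sum_x_before + NEW.x
    WHERE  dt > NEW.dt;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_maintain_sum_x_before
BEFORE INSERT ON measurements_cached
FOR EACH ROW EXECUTE FUNCTION maintain_sum_x_before();

Because the trigger runs inside the inserting transaction, a rollback also undoes the cache updates; but updates, deletes, and out-of-order or concurrent inserts still need their own handling.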

That's a lot more logic, and a lot more potential for something to go wrong. Stick with YAGNI until you find you actually need it.

HTH.

sql database-design aggregate-functions data-modeling
