c# - How significant is the performance penalty for using Int64/bigint instead of Int32/int in a C#4/T-SQL2008 application under 32-bit Windows XP?
An application being developed for me in a scientific project (C# 4 and T-SQL) will very likely have to handle very large quantities of very simple records and perform simple operations on them (a scientific simulation engine, not a typical line-of-business application). I would like to use 64-bit integers as primary keys for greater capacity.
I use the Entity Framework, work with POCO records and arrays of them, and rely heavily on T-SQL stored procedures. The database will live on a SQL Server 2008 instance and be accessed concurrently from multiple application instances for distributed processing.
Both SQL Server and the application instances run on 32-bit Windows XP systems, sometimes on fully 64-bit-capable hardware.
What penalties am I facing if I use 64-bit integer types as primary keys?
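For illustration, a typical record would look roughly like this as a POCO entity (the name SimulationRecord is just a placeholder; I am assuming Entity Framework's default mapping of a C# long key to a SQL bigint column):

    // Placeholder POCO entity (SimulationRecord is a made-up name).
    // Entity Framework maps a C# long key to a SQL Server bigint
    // column by default.
    public class SimulationRecord
    {
        public long Id { get; set; }       // bigint primary key
        public int StepIndex { get; set; } // simple payload fields
        public double Value { get; set; }
    }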
As long as you only read and write those numbers (i.e., no arithmetic, just database queries), the performance hit will be negligible. It would be like passing 2 ints as a parameter instead of 1.
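If you want to convince yourself, a throwaway micro-benchmark along these lines (my own illustration, not part of the original answer; names and iteration count are arbitrary) should show that the gap for pure reads and writes is small:

    using System;
    using System.Diagnostics;

    // Illustrative micro-benchmark: pure writes of long vs. int.
    // On a 32-bit JIT a long is moved as two 32-bit halves, so the
    // difference should be modest; exact results vary by CPU.
    static class ReadWriteBench
    {
        static int intField;
        static long longField;

        static void Main()
        {
            const int iterations = 100000000;

            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                intField = i;                 // one 32-bit store
            Console.WriteLine("int  writes: {0} ms", sw.ElapsedMilliseconds);

            sw.Restart();
            for (int i = 0; i < iterations; i++)
                longField = i;                // two 32-bit stores on x86
            Console.WriteLine("long writes: {0} ms", sw.ElapsedMilliseconds);
        }
    }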
Once you start doing arithmetic on them, things get worse. Addition and subtraction are usually about 3 times slower than with plain ints; multiplication and division are far slower still, by an order of magnitude. I have posted code on this site that multiplies two 64-bit numbers on a 32-bit CPU; I can dig it up if you want, but it is more than 3 pages long.
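To give a feel for why, here is a condensed sketch of the idea (not that 3-page post): on a 32-bit CPU a 64x64-bit multiply has to be assembled from 32-bit partial products, roughly like this:

    // Condensed sketch of how a 64x64-bit multiply decomposes into
    // 32-bit partial products, which is roughly what the compiler
    // must emit for a 32-bit CPU.
    static ulong Mul64(ulong a, ulong b)
    {
        uint aLo = (uint)a;
        uint aHi = (uint)(a >> 32);
        uint bLo = (uint)b;
        uint bHi = (uint)(b >> 32);

        // Three 32x32->64 partial products; aHi * bHi only affects
        // bits above 63, so it drops out of the truncated result.
        ulong lo = (ulong)aLo * bLo;
        ulong mid = (ulong)aLo * bHi + (ulong)aHi * bLo;

        return lo + (mid << 32); // low 64 bits of the full product
    }

Addition is cheaper because it only needs an add plus an add-with-carry, but that is still two instructions instead of one.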
Since you are talking about ID fields, you presumably won't be doing any arithmetic on them, so you should be fine.