From: Andy Polyakov
Date: Mon, 14 Nov 2011 21:14:53 +0000 (+0000)
Subject: ec_cvt.c: performance update from HEAD.
X-Git-Tag: OpenSSL_1_0_1-beta1~73
X-Git-Url: https://git.librecmc.org/?a=commitdiff_plain;h=17674bfdf75bffa4e225f8328b9d42cb74504005;p=oweals%2Fopenssl.git

ec_cvt.c: performance update from HEAD.
---

diff --git a/crypto/ec/ec_cvt.c b/crypto/ec/ec_cvt.c
index bbed1da455..bfcbab35fe 100644
--- a/crypto/ec/ec_cvt.c
+++ b/crypto/ec/ec_cvt.c
@@ -78,7 +78,32 @@ EC_GROUP *EC_GROUP_new_curve_GFp(const BIGNUM *p, const BIGNUM *a, const BIGNUM
 	const EC_METHOD *meth;
 	EC_GROUP *ret;
 
+#if defined(OPENSSL_BN_ASM_MONT)
+	/*
+	 * This might appear controversial, but the fact is that generic
+	 * prime method was observed to deliver better performance even
+	 * for NIST primes on a range of platforms, e.g.: 60%-15%
+	 * improvement on IA-64, ~25% on ARM, 30%-90% on P4, 20%-25%
+	 * in 32-bit build and 35%--12% in 64-bit build on Core2...
+	 * Coefficients are relative to optimized bn_nist.c for most
+	 * intensive ECDSA verify and ECDH operations for 192- and 521-
+	 * bit keys respectively. Choice of these boundary values is
+	 * arguable, because the dependency of improvement coefficient
+	 * from key length is not a "monotone" curve. For example while
+	 * 571-bit result is 23% on ARM, 384-bit one is -1%. But it's
+	 * generally faster, sometimes "respectfully" faster, sometimes
+	 * "tolerably" slower... What effectively happens is that loop
+	 * with bn_mul_add_words is put against bn_mul_mont, and the
+	 * latter "wins" on short vectors. Correct solution should be
+	 * implementing dedicated NxN multiplication subroutines for
+	 * small N. But till it materializes, let's stick to generic
+	 * prime method...
+	 *
+	 */
+	meth = EC_GFp_mont_method();
+#else
 	meth = EC_GFp_nist_method();
+#endif
 
 	ret = EC_GROUP_new(meth);
 	if (ret == NULL)
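
Note (not part of the patch): the hunk only changes which EC_METHOD backs a
prime-field group at build time (OPENSSL_BN_ASM_MONT selects
EC_GFp_mont_method(), otherwise EC_GFp_nist_method()); callers of
EC_GROUP_new_curve_GFp() are unaffected. As a rough, hedged illustration of
the code path this decision sits on, the sketch below builds a group from
explicit domain parameters, using the standard NIST P-192 constants:

/* Minimal sketch, not from the patch: exercising EC_GROUP_new_curve_GFp(),
 * inside which the #if above picks the Montgomery or the NIST method. */
#include <stdio.h>
#include <openssl/bn.h>
#include <openssl/ec.h>

int main(void)
{
	BN_CTX *ctx = BN_CTX_new();
	BIGNUM *p = NULL, *a = NULL, *b = NULL;
	EC_GROUP *group;

	/* NIST P-192 field prime, a = p - 3, and curve coefficient b */
	BN_hex2bn(&p, "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFFFFFFFFFFFF");
	BN_hex2bn(&a, "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFFFFFFFFFFFC");
	BN_hex2bn(&b, "64210519E59C80E70FA7E9AB72243049FEB8DEECC146B9B1");

	/* Which EC_METHOD backs this group is decided by the #if above. */
	group = EC_GROUP_new_curve_GFp(p, a, b, ctx);
	if (group == NULL)
		fprintf(stderr, "EC_GROUP_new_curve_GFp failed\n");
	else
		printf("field degree: %d bits\n", EC_GROUP_get_degree(group));

	EC_GROUP_free(group);
	BN_free(p); BN_free(a); BN_free(b);
	BN_CTX_free(ctx);
	return 0;
}

Per the comment in the hunk, the quoted percentages come from comparing the
two methods on the most intensive ECDSA verify and ECDH operations, so a
benchmark over groups built this way (or via the named NIST curves) is the
kind of measurement that motivated the switch.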