|
| BigInteger (uint32_t *digits, unsigned capacity) |
|
BigInteger & | operator= (BigInteger const &b) |
| operator= assigns the value of b to this integer.
|
|
int | getSign () const |
| getSign returns the sign of this integer.
|
|
unsigned | getSize () const |
| getSize returns the number of digits in the magnitude of this integer.
|
|
unsigned | getCapacity () const |
| getCapacity returns the number of digits the underlying digit array can hold.
|
|
uint32_t const * | getDigits () const |
| getDigits returns the underlying digit array, stored from least to most significant.
|
|
void | setToZero () |
| setToZero sets this integer to zero.
|
|
void | setTo (int64_t x) |
| setTo sets this integer to the given signed 64-bit integer value.
|
|
void | setTo (uint64_t x) |
| setTo sets this integer to the given unsigned 64-bit integer value.
|
|
void | negate () |
| negate multiplies this integer by -1.
|
|
BigInteger & | add (BigInteger const &b) |
| add adds b to this integer.
|
|
BigInteger & | subtract (BigInteger const &b) |
| subtract subtracts b from this integer.
|
|
BigInteger & | multiplyPow2 (unsigned n) |
| multiplyPow2 multiplies this integer by 2ⁿ.
|
|
BigInteger & | multiply (BigInteger const &b) |
| multiply multiplies this integer by b.
|
|
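
The functions above modify the integer in place. As a rough usage sketch, the example below exercises the arithmetic interface; it assumes that the two-argument constructor adopts the caller-supplied array as digit storage with the given capacity, which the signature suggests but the table does not state, and the buffer sizes are chosen generously.

    #include <cstdint>

    // Usage sketch only; assumes the BigInteger header has been included.
    // The buffer-adopting constructor semantics are an assumption, and
    // bufA and bufB provide more digits than these values ever need.
    void example()
    {
        uint32_t bufA[8];
        uint32_t bufB[8];

        BigInteger a(bufA, 8);
        BigInteger b(bufB, 8);

        a.setTo(INT64_C(123456789));    // a = 123456789
        b.setTo(UINT64_C(1000000000));  // b = 1000000000

        a.multiply(b);                  // a = a * b
        a.multiplyPow2(10);             // a = a * 2^10
        a.subtract(b);                  // a = a - b
        a.negate();                     // a = -a
    }
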
BigInteger is an arbitrary-precision signed integer class. It is intended for applications that need relatively small integers, and it supports only addition, subtraction and multiplication.
Internally, a BigInteger consists of a sign and an unsigned magnitude. The magnitude is represented by an array of 32-bit digits, stored from least to most significant. Every non-zero integer has at least one digit, and its most significant digit is non-zero. Zero is defined to have no digits.
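
Given this layout, a BigInteger's value can be inspected through the accessors listed above. The sketch below is illustrative only: it prints the magnitude in hexadecimal, so each 32-bit digit maps to eight hex characters, and it assumes that getSign returns a negative value for negative integers.

    #include <cinttypes>
    #include <cstdint>
    #include <cstdio>

    // Illustrative sketch; assumes the BigInteger header has been included.
    // Digits are stored least significant first, so they are printed in
    // reverse order. getSign() < 0 for negative values is an assumption.
    void printHex(BigInteger const &x)
    {
        unsigned size = x.getSize();
        if (size == 0)                  // zero is defined to have no digits
        {
            std::puts("0");
            return;
        }

        if (x.getSign() < 0)
            std::fputs("-", stdout);

        uint32_t const *digits = x.getDigits();

        // Most significant digit first, without leading zeros ...
        std::printf("%" PRIX32, digits[size - 1]);
        // ... then every remaining digit zero-padded to eight hex characters.
        for (unsigned i = size - 1; i-- > 0; )
            std::printf("%08" PRIX32, digits[i]);
        std::puts("");
    }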