A variable of the type decimal can hold decimal numbers.

The numbers are floating point. This means that the fraction (mantissa) and exponent are separate.

The numbers can be negative and positive.

A decimal variable can hold every whole value between -10^15 and +10^15 (±1 quadrillion). Above 1 quadrillion in magnitude, and below one, it holds 15 significant decimal digits and a two-digit exponent.

- Fraction (Mantissa): 15 decimal digits and sign
- Exponent: 2 decimal digits and sign
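These limits can be sketched with Python's `decimal` module. This is only an illustration: the `prec`, `Emin`, and `Emax` values below are my mapping of the 15-digit mantissa and two-digit exponent onto Python's context model, not part of the specification.

```python
from decimal import Decimal, Context

# A context approximating the decimal type described above:
# 15 significant decimal digits, exponent range roughly 10^-99 to 10^+99.
ctx = Context(prec=15, Emin=-99, Emax=99)

# 1/7 is rounded to 15 significant decimal digits.
print(ctx.divide(Decimal(1), Decimal(7)))  # 0.142857142857143
```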

This table defines the minimum and maximum values:

Value Type | Value |
---|---|
max | `+0.999 999 999 999 999*10^+99` |
min | `-0.999 999 999 999 999*10^+99` |
min positive | `+0.000 000 000 000 001*10^-99` |
max negative | `-0.000 000 000 000 001*10^-99` |
smallest change* | `0.000 000 000 000 001*10^sxx` |
zero | `0` |

* `sxx` is the current exponent: one sign and two digits. For example, if the current variable value is `0.1*10^+10`, then the smallest change is `0.000 000 000 000 001*10^+10`.

Question | Answer |
---|---|
Can be Not a Number (`NaN`)? | No |
Can be Infinity? | No |
Can be Negative Infinity? | No |
Can be Undefined or Null? | No |
Can be negative zero? | No |

In summary, a decimal variable can only hold numbers.

All computations must be done using a certain minimum precision. A computation must give the same result whether the decimal type has exactly 15 digits of precision or any higher precision. In other words, do not assume that the precision is, for example, 16 or 31 digits. If a computation gives an answer with 15 digits of precision, it must produce the same result with 17 or 100 digits of precision.

The Progsbase system may be able to detect dependencies on the precision by

- running the tests with different precisions
- using program analysis.
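The first approach can be sketched with Python's `decimal` module: run the same computation at two precisions, round both results to 15 digits, and compare. The computation and names below are illustrative, not part of Progsbase.

```python
from decimal import Decimal, getcontext

def compute():
    # A computation whose visible result depends on the working precision.
    return Decimal(1) / 3 * 3

getcontext().prec = 15
at_15 = compute()          # 0.999999999999999

getcontext().prec = 100
at_100 = compute()         # 0.999...9 (100 nines)

getcontext().prec = 15
at_100_rounded = +at_100   # unary plus rounds to the current precision: 1.00000000000000

# The two runs disagree, so this program depends on the precision.
print(at_15 != at_100_rounded)  # True
```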

Computing the nth decimal digit of some operators might require a high number of iterations for some values. Thus, such computations belong in specialized libraries.

If the actual calculations on the decimal variable type are done using double-precision binary floating point (IEEE Standard for Floating-Point Arithmetic, IEEE 754), the following calculation results in `y = true`, but it results in `y = false` when using 100 decimal digits of precision.

```
x = 1 / 10
x = x * 10
x = x - 1
x = x * 1000000000
x = x * 100000000
x = round(x)
y = x == 6
```

The following calculation works as long as the calculations use at least 15 decimal digits of floating-point precision with a three-digit decimal exponent. It always results in `y = true` as long as the requirements of the decimal variable type are met.

```
x = 1 / 10
epsilon = 0.00001
y = |x - 0.1| < epsilon
```

This is known as an *epsilon comparison*.
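The epsilon comparison above can be written in Python, for instance, as follows (a sketch of the pseudocode, not Progsbase syntax):

```python
# Epsilon comparison: treat two floating point numbers as equal
# when their difference is smaller than a chosen tolerance.
x = 1 / 10
epsilon = 0.00001
y = abs(x - 0.1) < epsilon
print(y)  # True
```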

An important note about finite-precision floating point numbers (binary and decimal alike) is that they are an approximation of the actual value you are storing. Hence, they must always be rounded before being considered. For example, when storing `34.65 / 10` in a double, you are actually storing `3.464999999999999857891452847979962825775146484375`, which must first be rounded to 15 decimal digits before it can be considered, i.e. it must be rounded to `3.46500000000000`. Hence, if we now round to 2 decimal digits, to get a currency amount, we get the correct `3.47`.
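This can be reproduced in Python, whose `float` is an IEEE 754 double. The rounding helpers below are my illustration of the two-step rounding, not a Progsbase API.

```python
from decimal import Decimal, ROUND_HALF_UP

x = 34.65 / 10
# The double actually stores an approximation of 3.465:
print(Decimal(x))   # 3.464999999999999857891452847979962825775146484375

# Rounding the stored value directly to 2 digits gives the wrong amount:
print(round(x, 2))  # 3.46

# Round to 15 significant digits first (the actual value), then to 2 digits:
actual = Decimal(x).quantize(Decimal("1.00000000000000"), rounding=ROUND_HALF_UP)
print(actual)       # 3.46500000000000
print(actual.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 3.47
```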

The same problem happens with decimal floating point: `3.465/27*27 = 3.464999999999991`, which, if rounded directly to two digits, gives `3.46`. The correct way is to round once to get the actual number, `3.465000000`, and then again to get the rounded number, `3.47`.
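The same two-step rounding can be sketched with Python's `decimal` module set to 15 significant digits (with the multiplication itself rounded to 15 digits, the stored value prints as `3.46499999999999`; the 10-digit working precision follows the example elaborated below):

```python
from decimal import Decimal, getcontext, ROUND_HALF_UP

getcontext().prec = 15  # 15 significant decimal digits

# The round trip does not come back to exactly 3.465:
stored = Decimal("3.465") / 27 * 27
print(stored)   # 3.46499999999999

# Round once to the working precision (10 significant digits) to get the actual number:
actual = stored.quantize(Decimal("1.000000000"), rounding=ROUND_HALF_UP)
print(actual)   # 3.465000000

# Round again to get the two-digit currency amount:
print(actual.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 3.47
```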

The following is an elaboration with examples. The parts of the process are:

- Decide on a precision
- Enter the value and do calculations
- Stored value
- Get the actual value using the precision
- Round the value

First, decide on a precision that the final calculated value will have. This depends on

- the precision of the underlying hardware, which in this system is 15 digits.
- the number of calculations you are doing and their type.

For example, we decide on 10 decimal digits.

The field of mathematics with the theory for this is called the *Calculus of Errors*.

For more information, see arithmetic expressions.

Next, enter the value and do the calculations. The entered value is the value you give to the program. Almost exclusively, numbers are written in decimal, even when stored as binary.

For example

- Example 1:
`34.65 / 10`

- Example 2:
`3.465 / 27 * 27`

The stored value is the value as it is kept, e.g. in memory or on disk.

For example:

- Example 1 (53 binary digits):
`3.464999999999999857891452847979962825775146484375`

- Example 2 (15 decimal digits):
`3.464999999999991`

After calculations have been done on the stored value, we want to read out the value to use it. The first stage is to calculate the actual value. This means that we round to the precision we have determined is sufficient. For the example we are following, we determined that 10 digits was sufficient.

The actual value is stored in the same number system as it was entered, in this example decimal.

- Example 1:
`3.465000000`

- Example 2:
`3.465000000`

After the actual value has been determined, we can do further calculations if necessary. In this example, we are calculating with money, so we want to round so we get two digits after the decimal point.

- Example 1:
`3.47`

- Example 2:
`3.47`


Copyright © 2018-22 progsbase.com by Inductive AS.